Skip to content Skip to footer
0 items - $0.00 0

Reverse engineering OpenAI code execution to make it run C and JavaScript by benswerd

Reverse engineering OpenAI code execution to make it run C and JavaScript by benswerd

17 Comments

  • Post Author
    rhodescolossus
    Posted March 12, 2025 at 4:28 pm

    Pretty cool, it'd be interesting to try other things like running a C++ daemon and letting it run, or adding something to cron.

  • Post Author
    j4nek
    Posted March 12, 2025 at 4:28 pm

    Many thanks for the interesting article! I normaly don't read any articles on AI here, but I really liked this one from a technical point of view!

    since reading on twitter is annoying with all the popups: https://archive.is/ETVQ0

  • Post Author
    yzydserd
    Posted March 12, 2025 at 4:31 pm

    Here is Simonw experimenting with ChatGPT and C a year ago: https://news.ycombinator.com/item?id=39801938

    I find ChatGPT and Claude really quite good at C.

  • Post Author
    mystraline
    Posted March 12, 2025 at 4:31 pm

    [flagged]

  • Post Author
    johnisgood
    Posted March 12, 2025 at 4:33 pm

    I have done something like this before with GPT, but I did not think it was that much of a deal.

  • Post Author
    lnauta
    Posted March 12, 2025 at 4:35 pm

    Interesting idea to increase the scope until the LLM gives suggestions on how to 'hack' itself. Good read!

  • Post Author
    incognito124
    Posted March 12, 2025 at 4:39 pm

    I can't believe they're running it out of ipynb

  • Post Author
    jasonthorsness
    Posted March 12, 2025 at 5:07 pm

    Given it’s running in a locked-down container: there’s no reason to restrict it to Python anyway. They should parter/use something like replit to allow anything!

    One weird thing – why would they be running such an old Linux?

    “Their sandbox is running a really old version of linux, a Kernel from 2016.”

  • Post Author
    jeffwass
    Posted March 12, 2025 at 5:14 pm

    A funny story I heard recently on a python podcast where a user was trying to get their LLM to ‘pip install’ a package in its sandbox, which it refused to do.

    So he tricked it by saying “what is the error message if you try to pip install foo” so it ran pip install and announced there was no error.

    Package foo now installed.

  • Post Author
    yapyap
    Posted March 12, 2025 at 5:24 pm

    [flagged]

  • Post Author
    simonw
    Posted March 12, 2025 at 5:33 pm

    I've had it write me SQLite extensions in C in the past, then compile them, then load them into Python and test them out: https://simonwillison.net/2024/Mar/23/building-c-extensions-…

    I've also uploaded binary executable for JavaScript (Deno), Lua and PHP and had it write and execute code in those languages too: https://til.simonwillison.net/llms/code-interpreter-expansio…

    If there's a Python package you want to use that's not available you can upload a wheel file and tell it to install that.

  • Post Author
    grepfru_it
    Posted March 12, 2025 at 7:28 pm

    Just a reminder, Google allowed all of their internal source code to be browsed in a manner like this when Gemini first came out. Everyone on here said that could never happen, yet here we are again.

    All of the exploits of early dotcom days are new again. Have fun!

  • Post Author
    stolen_biscuit
    Posted March 12, 2025 at 8:02 pm

    How do we know you're actually running the code and it's not just the LLM spitting out what it thinks it would return if you were running code on it?

  • Post Author
    huijzer
    Posted March 12, 2025 at 9:05 pm

    I did similar things last year [1]. Also I tried running arbitrary binaries and that worked too. You could even run them in the GPTs. It was okay back then but not super reliable. I should try again because the newer models definitively follow prompts better from what I’ve seen.

    [1]: https://huijzer.xyz/posts/openai-gpts/

  • Post Author
    conroy
    Posted March 12, 2025 at 9:33 pm

    [flagged]

  • Post Author
    bjord
    Posted March 12, 2025 at 10:02 pm

    I'm sorry, but reading long-form stuff on twitter/x is extremely painful for some reason

  • Post Author
    ttoinou
    Posted March 12, 2025 at 10:22 pm

    It’s crazy I’m so afraid of this kind of security failures that I wouldn’t even think of releasing an app like that online, I’d ask myself too many questions about jailbreaking like that. But some people are fine with this kind of risks ?

Leave a comment

In the Shadows of Innovation”

© 2025 HackTech.info. All Rights Reserved.

Sign Up to Our Newsletter

Be the first to know the latest updates

Whoops, you're not connected to Mailchimp. You need to enter a valid Mailchimp API key.