2nd March 2025
A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination—usually the LLM inventing a method or even a full software library that doesn’t exist—and it crashed their confidence in LLMs as a tool for writing code. How could anyone productively use these things if they invent methods that don’t exist?
Hallucinations in code are the least harmful hallucinations you can encounter from a model.
The moment you run that code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself or you can feed the error back into the LLM and watch it correct itself.
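A minimal sketch of what that looks like in practice, using a made-up `json.loads_file()` method as the stand-in hallucination:

```python
# Hypothetical hallucination: json.loads_file() does not exist, but it is
# exactly the kind of plausible-sounding method an LLM might invent.
import json

payload = '{"name": "example"}'

try:
    record = json.loads_file(payload)
except AttributeError as error:
    # The traceback is your free fact check: fix it yourself or paste the
    # error back into the LLM and let it correct the code.
    print(f"Hallucinated method caught: {error}")
    record = json.loads(payload)

print(record)
```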
Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions and well-developed fact-checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.
With code you get a powerful form of fact checking for free. Run the code, see if it works.
In some setups—ChatGPT Code Interpreter, Claude Code, any of the growing number of “agentic” code systems that write and then execute code in a loop—the LLM system itself will spot the error and automatically correct itself.
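As a rough sketch of how that loop works (with a placeholder `llm_generate()` function standing in for whatever model API you use; this isn’t how any particular product is implemented):

```python
# Loose sketch of a write-then-execute loop, assuming a placeholder
# llm_generate() function; swap in a real call to your model of choice.
import subprocess
import sys


def llm_generate(prompt: str) -> str:
    """Placeholder for an LLM call that returns Python source code."""
    raise NotImplementedError("plug in your model client here")


def generate_and_run(prompt: str, max_attempts: int = 3) -> str:
    code = llm_generate(prompt)
    for _ in range(max_attempts):
        # Run the generated code; a hallucinated method surfaces as a
        # non-zero exit code with a traceback on stderr.
        result = subprocess.run(
            [sys.executable, "-c", code], capture_output=True, text=True
        )
        if result.returncode == 0:
            return result.stdout
        # Feed the error straight back and ask for a corrected version.
        code = llm_generate(
            f"{prompt}\n\nThis code failed:\n{code}\n\n"
            f"Error:\n{result.stderr}\nReturn corrected Python code only."
        )
    raise RuntimeError(f"no working code after {max_attempts} attempts")
```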
If you’re using an LLM to write code without even running it yourself, what are you doing?
Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these tools.