Hallucinations in code are the least dangerous form of LLM mistakes
March 4, 2025
A surprisingly common complaint I see from developers who have tried using LLMs for code is that they encountered a hallucination—usually the LLM inventing a method or even a full software library that doesn’t exist—and it crashed their confidence in LLMs as a tool for writing code. How could anyone productively use these things if they invent methods that don’t exist?
Hallucinations in code are the least harmful hallucinations you can encounter from a model. The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!
Source: Hallucinations in code are the least dangerous form of LLM mistakes
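To make the contrast concrete, here is a minimal, hypothetical Python sketch (the method names and the bug are invented for illustration, not taken from the article): the hallucinated method blows up with an AttributeError the moment the interpreter reaches it, while the subtle logic mistake runs cleanly and just returns the wrong answer.

```python
# 1. A hallucinated API: Python strings have no reverse_words() method,
#    so this fails immediately the first time it runs.
def hallucinated(text: str) -> str:
    return text.reverse_words()  # AttributeError: no such method

# 2. A subtle logic mistake: runs without error, but silently drops the
#    last element because of an off-by-one slice bound.
def subtle_bug(items: list[int]) -> list[int]:
    return [x * 2 for x in items[: len(items) - 1]]  # should be items[:]

if __name__ == "__main__":
    try:
        hallucinated("hello world")
    except AttributeError as e:
        print("caught instantly:", e)

    # No exception here -- the wrong answer just flows downstream.
    print("silent mistake:", subtle_bug([1, 2, 3]))  # [2, 4], missing 6
```

The first failure is exactly the kind of mistake a compiler or interpreter catches for you; the second is the kind the article warns about, because nothing flags it until you actually test the behaviour.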
I definitely find the LLM tools I use hallucinating APIs, and inventing solutions to the problems I’m trying to solve with them.
It feels like this is happening less over time.
Here Simon Willison observes that these hallucinations are among the least problematic kinds of LLM mistakes, since an invented method fails the moment you run the code.