LLM predictions for 2026, shared with Oxide and Friends

January 13, 2026

[Image: A beautiful green Kākāpō surrounded by candles gazes into a crystal ball]

In 2023, saying that LLMs write garbage code was entirely correct. For most of 2024 that stayed true. In 2025 that changed, but you could be forgiven for continuing to hold out. In 2026 the quality of LLM-generated code will become impossible to deny.

I base this on my own experience—I’ve spent more time exploring AI-assisted programming than most.

The key change in 2025 (see my overview for the year) was the introduction of “reasoning models” trained specifically against code using Reinforcement Learning. The major labs spent a full year competing with each other on who could get the best code capabilities from their models, and that problem turns out to be perfectly attuned to RL since code challenges come with built-in verifiable success conditions.
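To make the "built-in verifiable success conditions" point concrete, here is a minimal, purely illustrative sketch of a code reward signal: the model's candidate solution is executed against a test, and the reward is simply whether the test passes. The function name and harness are hypothetical and vastly simplified compared to what any lab actually runs (real pipelines sandbox execution, enforce timeouts, and score partial credit), but it shows why code is such a natural fit for RL.

```python
def verifiable_reward(candidate_code: str, tests: str) -> float:
    """Hypothetical reward function for RL on code.

    Returns 1.0 if the candidate solution passes every assertion
    in `tests`, else 0.0. No human judgment required: the reward
    is computed mechanically by running the code.
    """
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate solution
        exec(tests, namespace)           # assertions raise on failure
        return 1.0
    except Exception:
        return 0.0


# A correct and an incorrect candidate for the same task:
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

print(verifiable_reward(good, tests))  # 1.0
print(verifiable_reward(bad, tests))   # 0.0
```

Contrast this with, say, rewarding good prose, where there is no equivalent mechanical pass/fail check; that asymmetry is why a year of RL competition moved code quality so quickly.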


Here’s a series of predictions by Simon Willison and others about what we might see from large language models in the context of software engineering in 2026.

This isn’t to say that Simon and company are right about all of it, but as software engineers it is increasingly important to think about what might be coming, because it will shape our choices and actions. The timeframes here are short: months, maybe a couple of years. We are not used to responding to such significant changes in the practice of software engineering on timeframes that short.