What If A.I. Doesn’t Get Much Better Than This? | The New Yorker
August 27, 2025

Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.” The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training?
Source: What If A.I. Doesn’t Get Much Better Than This? | The New Yorker
The recently released GPT-5 from OpenAI was among the most hotly anticipated technology releases in some time. Given the rapid increase in capability from ChatGPT and GPT-3.5 in late 2022 to GPT-4 in March 2023, the two years since seem like an eternity.
Along the way, we saw new models from OpenAI, as well as models from Google, Anthropic, DeepSeek, and other labs, all of which seemed to suggest that the upside of large language models was unbounded.
And yet GPT-5 seems to have landed with a thud, particularly among the most avid users of these technologies.
So, what if that’s all there is? What if the scaling laws of large language models mean that simply adding more data and more compute ultimately runs into an asymptote? And what if what we see in today’s models is largely as good as it’s going to get?
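The asymptote intuition is easy to make concrete. Scaling-law fits are typically power laws plus an irreducible term; below is a minimal sketch using the Chinchilla-style parametric form L(N, D) = E + A/N^α + B/D^β from Hoffmann et al. (2022), with that paper's fitted constants, used here purely as illustration rather than prediction. The 20-tokens-per-parameter ratio is that paper's rough compute-optimal heuristic.

```python
# A minimal sketch of the asymptote intuition, using the Chinchilla-style
# parametric loss L(N, D) = E + A / N^alpha + B / D^beta from Hoffmann
# et al. (2022). Constants are that paper's fitted values, illustrative only.

E = 1.69                  # irreducible loss: the entropy of the text itself
A, ALPHA = 406.4, 0.34    # parameter-count term
B, BETA = 410.7, 0.28     # training-token term

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters and n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Scale parameters 10x at a time, keeping tokens at roughly the
# compute-optimal ~20 tokens per parameter. The reducible terms shrink
# toward zero, so each order of magnitude buys less, and nothing ever
# gets below E.
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:8.0e} params -> loss ~ {predicted_loss(n, 20 * n):.3f}")
```

On a fit like this, the curve flattens not because models stop learning but because the remaining loss is dominated by the irreducible term E; whether today's frontier models are already near that regime is exactly the open question.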
Cal Newport considers this question in The New Yorker this week.
My feeling is that even if this is all there is, then at least for specific use cases like software development, we’ve already come a very long way, and these tools have likely transformed the nature of software engineering.