Does AI already have human-level intelligence? The evidence is clear

February 5, 2026

[Illustration: a human face built from small digital blocks rising out of a computer microchip, suggesting artificial intelligence.]

In 1950, in a paper entitled ‘Computing Machinery and Intelligence’ [1], Alan Turing proposed his ‘imitation game’. Now known as the Turing test, it addressed a question that seemed purely hypothetical: could machines display the kind of flexible, general cognitive competence that is characteristic of human thought, such that they could pass themselves off as humans to unaware humans?

Three-quarters of a century later, the answer looks like ‘yes’. In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were [2]. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts [3].

Source: Nature

I’ve been working extensively with most of the major models—from Google, OpenAI, Anthropic, and others—for several years now. I’ve got a great deal of value out of them in all kinds of ways. But until December last year, I would have characterised them as power tools: often valuable, sometimes problematic, but power tools nonetheless.

With the release of the most recent models from OpenAI, Google, and Anthropic, and new harnesses like Antigravity, Cowork, and now Codex from OpenAI, I wouldn’t characterise them as power tools anymore.

I’ve been doing technology stuff for the vast majority of my life—well over 40 years. In all that time, I’ve had perhaps three truly jaw-dropping moments when I saw something that pointed in a completely new direction.

Two of them have happened in the last four to six weeks. Working with Claude Code and Cowork, we now have systems that are autonomous, that take responsibility, and that interact the way intelligent, educated, capable humans would. And in the last two weeks or so, Clawdbot has brought something different: an immediacy, an always-on quality.

I’m not fooled into thinking it’s a person. But if you took one of these systems back to November 2022, when ChatGPT first became widely available, I’d argue that everyone you showed it to—including researchers in the field, people who’d been working on machine learning and AI for years or decades—would have said it was intelligent.

I’m not sure the conversation about whether it is intelligent even matters that much, except perhaps as a relatively abstract philosophical one. But I think it’s quite extraordinary that Nature—one of the longest-running and highest-profile science publications, with a history going back over 150 years—is essentially saying that this contemporary set of technologies has human-level intelligence.