Against the protection of stocking frames

October 6, 2025

I think it’s long past time I start discussing “artificial intelligence” (“AI”) as a failed technology. Specifically, that large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.

Source: Against the protection of stocking frames. — Ethan Marcotte

There’s much here I agree with Ethan on, which might sound curious given how much we focus on AI and large language models on Conffab, at our conferences, in much of what I write, and in the things we collect here at Elsewhere.

Many of Ethan’s observations about how AI has been positioned and sold are undoubtedly correct. These technologies are often overhyped. They are often imposed top-down in organisations. Many of them are, broadly speaking, solutions in search of problems. We hear the word “imagine” again and again in keynotes and elsewhere, as vendors try to get people to grasp the potential value and uses of these technologies.

But at the same time, there is no doubt that in the field of software engineering these technologies are already proving transformative. Yes, there are stories of inefficiencies, and there are studies suggesting these tools may make developers feel more productive than they actually are. But there are also many extremely experienced software developers who attest to the value these tools bring to their work – not least a newfound enthusiasm that, with decades of experience behind them, they have found perhaps for the first time in many years.

I think it’s important to engage with these topics, and important not to be merely rhetorical – not to cherry-pick the things that suit our position and ignore the ones that don’t. That’s why I amplify this essay of Ethan’s in particular: for the most part, he is genuinely attempting to grapple with these technologies.

Underneath it all, though, is an observation by Ted Chiang that I keep coming back to. Chiang, the science fiction author, is perhaps best known for the short story that became the foundation for Denis Villeneuve’s marvellous film Arrival. Chiang observes that fears of technology are, in essence, fears of capitalism – something that resonates with me and that I largely agree with.

Our problem with technology is often, in reality, a problem with capitalism. How many of us are willing or able to stand up and articulate that – and then articulate a project to re-imagine, reshape, and reform capitalism toward better outcomes?