“AI” is bad UX
January 13, 2026

This is in many ways a worst-case scenario for user experience. An application where clicking “save” deletes your files. An icon where clicking and dragging it makes thousands of copies. A sliding control where every time you move it something different happens. Perhaps the best pre-LLM analogy for the LLM user experience is the browser game QWOP, where something immediately intuitive (in the game’s case, running) is rendered dramatically and hilariously unintuitive by the mode of interaction (in the game’s case, this is the fun).
This mismatch between an incredibly powerful user metaphor and the actual abilities of these systems is at the heart of most of the emerging problems with ‘AI’. For most people, it is functionally impossible to break the mental connection between “this is a person you talk to” and inferences about internal states and goals. So-called ‘AI psychosis’, where the funhouse-mirror agreeableness of a mindless chatbot sends people into ratiocinatory spirals of delusional ideation, stems from this failed metaphor. If somebody else is agreeing with you and expanding on and elaborating what you’re saying, it must make sense, right? They sound like they know what they’re talking about.
It’s a very thoughtful essay on user experience and AI. It’s very hard to summarise, but I highly recommend everyone read it and give it some thought.