“Many of us became software engineers because we found our identity in building things. Not managing things. Not overseeing things. Building things. With our own hands, our own minds, our own code. But that identity is being challenged,” wrote Annie Vella in March 2025 in her blog post, The Software Engineering Identity Crisis, which has since been read by tens of thousands of engineers and has them thinking about this (re)defining moment in what their role means.
In this conversation, Annie dives deep into the software engineer’s identity crisis, the rise of AI agents, and how engineers can prepare for a rapidly evolving future.
When Annie Vella's article first came out, we referenced it here, and it's since been very widely read by software engineers everywhere. In this conversation, Annie explores these ideas more deeply.
There are three layers of context you can control when using AI through web interfaces like Claude, ChatGPT, or Gemini:
System Instructions (Layer 1) — Your baseline configuration
Projects (Layer 2) — Context that persists for specific work. Gemini calls them Gems.
Prompts (Layer 3) — Specific details for right now
Most people live entirely in Layer 3, never touching the other two. Then they wonder why their results are inconsistent.
Think of it like clothing. Most people are using off-the-rack when they could have something tailored. The tailoring isn’t even that hard — you just need to understand where the adjustment points are.
While written for non-engineers, this piece from Christina Wodtke is a valuable overview of the kinds of context we can provide when interacting with large language models.
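For the engineers among us, the same layering shows up if you drive these models through an API rather than a web UI. Here's a rough sketch of that mapping using the OpenAI SDK; the mapping itself, the model name, and the example messages are mine, not Christina's:

```typescript
// Rough mapping of the three layers onto a single API call (my analogy,
// not the article's): a system message as the baseline, a block of
// persistent project context, and the per-request prompt.
import OpenAI from "openai";

const client = new OpenAI();

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model name
  messages: [
    // Layer 1: system instructions, your baseline configuration
    { role: "system", content: "You are a concise technical editor." },
    // Layer 2: project context that persists across related work
    { role: "system", content: "Project: a weekly newsletter for front-end engineers." },
    // Layer 3: the specific prompt for right now
    { role: "user", content: "Tighten this paragraph without changing its meaning: …" },
  ],
});

console.log(completion.choices[0].message.content);
```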
I’m a simple boy, so I like simple things. Agents can run Bash and write code well. Bash and code are composable. So what’s simpler than having your agent just invoke CLI tools and write code? This is nothing new. We’ve all been doing this since the beginning. I’d just like to convince you that in many situations, you don’t need or even want an MCP server.
Let me illustrate this with a common MCP server use case: browser dev tools.
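To make the contrast concrete, here's a minimal sketch (my own, not from the piece) of the single generic tool an agent harness might expose instead of a purpose-built MCP server: run a shell command and let the model compose existing CLIs. The function name and example command are illustrative only:

```typescript
// A single generic "run a shell command" tool is often all an agent needs:
// the model composes existing CLIs (grep, jq, curl, git, …) with pipes,
// instead of relying on a purpose-built MCP server per integration.
import { exec } from "node:child_process";
import { promisify } from "node:util";

const sh = promisify(exec);

// Hypothetical tool handler an agent harness might register.
export async function runShell(command: string): Promise<string> {
  const { stdout, stderr } = await sh(command, { timeout: 60_000 });
  return stdout || stderr;
}

// e.g. the model might decide to call:
//   runShell("grep -rn 'TODO' src | wc -l")
// composing two standard tools with a pipe, no MCP server required.
```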
The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices.
Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once you understand when & why it's okay to break them.
Providing context to coding agents like Claude Code is an important step in getting the most out of these systems. In Claude Code, this is the CLAUDE.md file; elsewhere it's an AGENTS.md file. This article looks at a number of patterns and principles that could be valuable in developing and working with these kinds of files.
In this entertaining and relatable video, two brothers experience the frustrations of downloading new apps and the consequences of not having them. They discuss the convenience and usability of apps, the abundance of apps on their phones, and the growing trend of relying on apps for everyday tasks. They also explore alternative solutions and find a balance between technology and simplicity.
Some years ago now, my teenage daughter introduced me to fairbairnfilms, the TikTok/Instagram account of two Australian brothers who do quite amusing little skits on popular culture.
So why am I citing them here? Well, just recently, their skit was about their frustration with installing mobile apps, and how everything these days is a mobile app when it should really be a web app. Perhaps I should email them and tell them exactly why. It's genuinely amusing, if not entirely safe for work, given some of their expletives. I passed it around to folks I know in the industry all over the world, and they all shared my amusement.
TL;DR: Building agents is still messy. SDK abstractions break once you hit real tool use. Caching works better when you manage it yourself, but differs between models. Reinforcement ends up doing more heavy lifting than expected, and failures need strict isolation to avoid derailing the loop. Shared state via a file-system-like layer is an important building block. Output tooling is surprisingly tricky, and model choice still depends on the task.
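On that point about isolating failures, the usual pattern is to catch tool errors and hand them back to the model as ordinary results rather than letting an exception derail the loop. A minimal sketch of the idea; the types and names are mine, not the author's:

```typescript
// Sketch: tool failures are caught and returned as data, so one broken
// call doesn't derail the whole agent loop.
type ToolFn = (args: unknown) => Promise<string>;
type ToolResult = { ok: boolean; output: string };

async function callTool(
  name: string,
  args: unknown,
  tools: Record<string, ToolFn>
): Promise<ToolResult> {
  try {
    return { ok: true, output: await tools[name](args) };
  } catch (err) {
    // The error text goes back into the conversation as something the model
    // can react to (retry, pick another tool), rather than an exception that
    // kills the run or dumps a stack trace into the context.
    return { ok: false, output: `Tool ${name} failed: ${(err as Error).message}` };
  }
}
```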
In this RedMonk conversation, Alex Russell, Partner Product Architect at Microsoft, discusses the state of mobile development, focusing on JavaScript performance, the state of Progressive Web Apps (PWAs), and the impact of major players like Apple (iOS) and Google (Android). They explore the importance of management in addressing web performance issues, the role of web standards in shaping the future, and the implications of AI on web development. Alex emphasizes the need for restraint in JavaScript usage and the importance of creating user-friendly web experiences, particularly on mobile devices.
Alex Russell has been a frequent speaker at our conferences going back many years. In 2015, at our CODE conference, he gave what was the first talk anywhere on the idea of progressive web apps.
Alex has had a profound impact on the web, from his work on the early JavaScript framework Dojo, through many years contributing to Chrome at Google and now to Edge at Microsoft, and to the technological landscape of the web through his work on TC39 and the W3C's Technical Architecture Group.
I know firsthand from many conversations we've had that he is a very engaging conversationalist. This is an interview I would highly recommend. You can read the transcript or listen on your daily walk, commute, or drive. Or wherever you consume good podcasts.
One of the concepts that are gaining lots of discussion is the “MCP” which stands for Model Context Protocol. Anytime I’d ask what an MCP is, I’d usually hear it described as an API. So then, why isn’t it just called an API? Because it’s not really an API. Sound confusing? You bet!
Just like how I learned to code by actually building something, I decided that if I was going to truly learn what this thing was, I'd have to build one. So I did, and this is how that went.
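If you're curious what "building one" roughly involves, a minimal MCP server is a surprisingly small amount of code. This sketch uses the official TypeScript SDK; the echo tool is a made-up example, and exact method names can vary between SDK versions:

```typescript
// Minimal MCP server sketch using the official TypeScript SDK
// (@modelcontextprotocol/sdk). The "echo" tool is purely illustrative.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo", version: "0.1.0" });

// Register a tool: a name, an input schema, and a handler.
server.tool("echo", { text: z.string() }, async ({ text }) => ({
  content: [{ type: "text", text: `You said: ${text}` }],
}));

// Speak the protocol over stdio; an MCP client (e.g. Claude Desktop)
// launches this process and exchanges JSON-RPC messages with it.
await server.connect(new StdioServerTransport());
```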
Meanwhile, sites are ballooning. The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April. The 75th percentile site is now larger than two copies of DOOM, and P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.
I am so tired of hearing about AI. Unfortunately, this is a talk about AI.
I’m trying to figure out how to use generative AI as a designer without feeling like shit. I am fascinated with what it can do, impressed and repulsed by what it makes, and distrustful of its owners. I am deeply ambivalent about it all. The believers demand devotion, the critics demand abstinence, and to see AI as just another technology is to be a heretic twice over.
Today, I’d like to try to open things up a bit. I want to frame the technology more like an instrument, and get away from GenAI as an intelligence, an ideology, a tool, a crutch, or a weapon. I find the instrument framing more appealing as a person who has spent decades honing a set of skills. I want a way of working that relies on my capabilities and discernment rather than something so amorphous and transient as taste. (If taste exists in technology, it needs to be smuggled in.)
Maitre Miro offers a very nuanced and thoughtful meditation on the journey of AI and its impact on design and the creative endeavour. He uses the metaphor of AI as an instrument, drawing on the work of four renowned musicians and the lessons we might learn from them about how we can work with AI.
How AI Changes Design AMA with Shamus Scott Grubb – YouTube
A shift is happening in Design… New tools. New workflows. New capabilities. Because…
→ As production gets automated
→ Thinking becomes more valuable
→ Creativity becomes the new premium
That’s why I’m speaking to Shamus Scott Grubb on The Design of Everyday People Livestream. We’re diving deep into:
1. How AI separates design from production
2. What skills actually matter in this new reality
3. How to position yourself on the right side
Shamus Scott Grubb talks with Chris Nguyen from UX Playbook about what happens when AI sits between creation and production in design.
GraphQL’s third wave: Why the future of AI needs an API of intent
Every technology with real staying power goes through waves of adoption. The first wave attracts the early experimenters – the ones who can sense the future before it’s evenly distributed. The second picks up the enterprises that’ve felt enough pain to seek out something better. The third comes when the rest of the world catches up, usually because the ground itself has shifted and the old tools can no longer do the job.
GraphQL is now entering that third wave.
Most people still describe GraphQL as an alternative to REST. That was true in 2015. What’s happening today is different. In the era of LLMs and autonomous agents, GraphQL isn’t just a nicer API; it has quietly become the API layer AI was waiting for.
An argument that GraphQL is the right API abstraction for AI-based applications. This piece looks at the adoption of GraphQL over the last decade or so, and why the author believes that makes it the perfect choice for AI applications.
Design Thinking for AI: The 5-Stage Framework Every Builder Needs
AI has changed the texture of design. We’re not designing for people alone anymore, we’re designing with/for intelligence. That changes everything. I’m obviously not the only person thinking about how Design frameworks evolve.
I don't think it's entirely coincidental that all of today's links are about the intersection of AI and design. As I've mentioned recently, and as we've been covering a lot, software engineers have been thinking about this in the context of our work, but that's not to say designers aren't also thinking deeply about it as well.
I recently attended the Enterprise UX conference in Amersfoort. The presentations made it very clear that successful AI integration requires big changes in how companies work and how we build systems. The main message was: AI is not just a new tool; it forces us to change our basic rules for design and data.
Software engineers have been exploring for several years how best to work with large language models. All categories of products, from the big frontier model labs all the way through to early-stage startups, are exploring this space.
It's a question the design field is also asking. Here, from the recent Enterprise UX conference in Amersfoort, are some responses that emerged across the talks.
This week, Google debuted their Gemini 3 AI model to great fanfare and reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.
I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces up the most pertinent three or four actions based on your current task. But I find that it just gets in the way.
One of the things people are speculating about when it comes to generative AI is that perhaps our user interfaces will themselves be generated on the fly, tailored to individual users' needs. Esteemed folks like the Nielsen Norman Group have made such suggestions, but others, like Roger Wong, aren't so sure.
In one of the most popular episodes yet, Vitaly Friedman talked about what’s next for AI design patterns (https://www.dive.club/deep-dives/vita…) .
In that episode he frequently referenced Shape of AI (https://www.shapeof.ai/), which is an incredible database of AI design patterns. So I wanted to get to the source and go deep with its creator, Emily Campbell, to learn how to design great AI experiences. Because she’s studied AI products more than just about anyone I’ve ever seen 👀
We referenced The Shape of AI and Emily Campbell's work cataloguing AI interaction patterns a few months back. Now here's an in-depth interview with her about her work.
I find it’s always important to examine why you made a mistake. The worst mistake I ever made was reading “Bitcoin: A Peer-to-Peer Electronic Cash System” in January of 2009, thinking “cool math toy, maybe someone will turn it into something useful someday” and moving on. My most recent mistake, however, was not realizing that software development would be the first field to be transformed by agentic AI. I always assumed it would be the last. Let’s examine why that was.
A Month of Chat-Oriented Programming, or: when did you last change your mind about something? Nick Radcliffe, 12th November 2025.
TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests (about 1500). Against my expectation going in, I have changed my mind. I now believe chat-oriented programming (“CHOP”) can work today, if your tolerance for pain is high enough.
The notes below describe what has and has not worked for me, working with Claude Code for an intense month (in fact, more like six weeks now).
I recently listened to this Pragmatic Engineer podcast with Flask creator Armin Ronacher (highly recommended), where he talks about how he was an ardent resister of the use of large language models for software development, until he sat down and invested some time in them a few months ago, at which point he became convinced this was the future of how he would develop software. Here's something similar from Nick Radcliffe, who was "a fairly outspoken and public critic of large-language models (LLMs), Chatbots, and other applications of LLM".
He spent a month doing chat-oriented programming (CHOP), which is one of the various ways you can work with large language models as a software engineer. While he found it infuriating and frustrating, he acknowledges it did indeed make him productive.
He also details his experience and the things he learned, which you might find valuable yourself.
If you've been working with large language models for a while in any sort of non-trivial way, you'll likely have run into the situation where you simply cannot get one to produce what you want. A classic example, until relatively recently, was getting output in JSON format. Time and again I run into the issue where, when I ask for, say, only HTML, the model adds extraneous content as well.
But this framing of the challenge really helps us understand what's perhaps going on and how to work around it.
Drew Breunig calls it fighting against the weights, and this makes a lot of sense.
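For what it's worth, the JSON case in particular has largely been addressed at the API layer rather than by prompting harder: most providers now offer some form of structured output. A rough sketch with the OpenAI SDK, where the model name and the schema described in the prompt are placeholders:

```typescript
// Sketch: asking for JSON via the API's structured-output switch instead of
// fighting the model's tendency to wrap answers in prose.
import OpenAI from "openai";

const client = new OpenAI();

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model name
  response_format: { type: "json_object" }, // constrain output to valid JSON
  messages: [
    { role: "system", content: 'Reply with a JSON object: { "title": string, "tags": string[] }' },
    { role: "user", content: "Summarise this post about progressive images." },
  ],
});

console.log(JSON.parse(completion.choices[0].message.content ?? "{}"));
```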
Software Development in the Time of Strange New Angels
Five months ago, my lifelong profession of software development changed completely. My profession was born in the 1940s, created to help fight demons. Our first encounter with the strange new angels of agentic AI is changing every aspect of it. Hardly anyone has noticed yet.
Every software developer, everybody involved in the design, development, and delivery of software-based products, should read this. It's deeply insightful.
I believe I use AI fairly typically for a software engineer, if slightly more than average. I’ve been working in the AI space since slightly before the announcement of GPT-3 in 2020.
I use AI in a few particular ways:
Coding
Research & search
Summarization & transcription
Writing
Art & music
This is, of course, purely my opinion & speculation. If you have different ways of using AI, I’d love to know!
Another "how I use AI" type post which we have been collecting because it's early days, and seeing how different software engineers use the technology is a very interesting way to expand our own possible uses and use cases.
Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works, until you learn to ride one.
There are big ideas in computing that are easy to get your head around. The AWS S3 API. It’s the most important storage technology of the last 20 years, and it’s like boiling water. Other technologies, you need to get your feet on the pedals first.
LLM agents are like that.
People have wildly varying opinions about LLMs and agents. But whether or not they’re snake oil, they’re a big idea. You don’t have to like them, but you should want to be right about them. To be the best hater (or stan) you can be.
So that’s one reason you should write an agent. But there’s another reason that’s even more persuasive, and that’s…
This, in just a few minutes, won't just give you an understanding of what agents are but will enable you to build your first one and understand what some of the fuss is about.
You might also ask why you need MCP after seeing this.
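To give a flavour of just how small the core of an agent is, here's a rough sketch of the loop using the Anthropic SDK. The tool definition, the model name, and the unsandboxed shell execution are all illustrative, not a recommendation:

```typescript
// The heart of an agent: call the model, run whatever tools it asks for,
// feed the results back, and repeat until it stops asking.
import Anthropic from "@anthropic-ai/sdk";
import { execSync } from "node:child_process";

const client = new Anthropic();

const tools: Anthropic.Tool[] = [{
  name: "run_shell",
  description: "Run a shell command and return its output",
  input_schema: {
    type: "object",
    properties: { command: { type: "string" } },
    required: ["command"],
  },
}];

const messages: Anthropic.MessageParam[] = [
  { role: "user", content: "How many TODO comments are in src/?" },
];

while (true) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // illustrative; use whatever model you have access to
    max_tokens: 1024,
    tools,
    messages,
  });
  messages.push({ role: "assistant", content: response.content });

  const results: Anthropic.ToolResultBlockParam[] = [];
  for (const block of response.content) {
    if (block.type !== "tool_use") continue;
    const { command } = block.input as { command: string };
    results.push({
      type: "tool_result",
      tool_use_id: block.id,
      content: execSync(command).toString(), // no sandboxing here; a real agent needs it
    });
  }

  if (results.length === 0) break; // the model answered in plain text; we're done
  messages.push({ role: "user", content: results });
}
```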
When Everyone’s a Developer, How Do We Promote the Web Platform Over React?
2025 is a strange time to start a newsletter about web technology. The past four editions of WTN have focused on the intersection of the web and AI, because frankly that’s where most of the excitement is on today’s internet. But as a few people I link to this week point out, the web platform has improved to a point that it now does much of what frontend frameworks do. So why isn’t there as much exciting activity to report on regarding the web platform? The problem, I think, is that web platform improvements are being undermined by AI development trends. I see two main issues:
I’ve heard both Vercel and Netlify (two of the leading web developer platforms) say in recent weeks that their user bases are massively increasing. Why? Because of vibe coders. The definition of a “developer” has expanded to include people who rely on prompting rather than programming. But the problem is that vibe coders get React solutions instead of web-native ones. What’s happening is that vibe coders ask their magic lamps to build an app or agent, and the AI genie gives them a React app.
The leading large language models, like GPT-5, are defaulting to React and Next.js when asked to create web apps or sites. That entrenches the power that React has on the web development ecosystem, which means web platform improvements aren’t being utilised by AI. Which leads to point #2…
This is a really good round-up of some recent articles, many, if not all, of which we referenced here in the last few weeks.
It also includes a reference to something I wrote recently in a similar vein.
This is a real time of transformation across all of computing, and the front-end isn't alone in that. Where exactly it ends up is impossible to say.
Escape Velocity: Break Free from Framework Gravity
We saw this cycle with jQuery in the past, and we’re seeing it again now with React. We’ll see it with whatever comes next. Success breeds standardization, standardization breeds inertia, and inertia convinces us that progress can wait. It’s the pattern itself that’s the problem, not any single framework. But right now, React sits at the center of this dynamic, and the stakes are far higher than they ever were with jQuery. Entire product lines, architectural decisions, and career paths now depend on React-shaped assumptions. We’ve even started defining developers by their framework: many job listings ask for “React developers” instead of frontend engineers. Even AI coding agents default to React when asked to start a new frontend project, unless deliberately steered elsewhere.
I think this is a very important read for anybody who works in the front-end, whether you are particularly invested in a framework like React or your focus is more toward the open web technology platform stack.
I think the web is at a real crossroads. I've got more to say about that, but I won't say that here. But I certainly think if you take your role in developing for the web seriously, the kinds of issues that are raised here are very important to consider.
Why engineers can’t be rational about programming languages
A programming language is the single most expensive choice a company makes, yet we treat it like a technical debate. After watching this mistake bankrupt dozens of companies and hurt hundreds more, I’ve learned the uncomfortable truth: these decisions are rarely about technology. They’re about identity, emotion, and ego, and they’re destroying your velocity and budget in ways you can’t see until it’s too late.
For many, many software developers, the programming language we use (even if not consciously) says something about ourselves and our identity. This can border on the tribal.
It's also a way we often gatekeep, where certain languages are considered inferior to others and therefore the users of those languages are considered inferior to the users of "real" programming languages.
A big challenge is when that interferes, at a more senior level, with the kinds of technical decisions we're making, and this is a salutary tale. But I think it's important to note that we are all subject to biases like this. Indeed, part of what makes something a bias is that we can't see it in ourselves, even when it's quite straightforward to see in others.
Having stuck to Claude Code for the last few months, this post is my set of reflections on Claude Code’s entire ecosystem. We’ll cover nearly every feature I use (and, just as importantly, the ones I don’t), from the foundational CLAUDE.md file and custom slash commands to the powerful world of Subagents, Hooks, and GitHub Actions. This post ended up a bit long and I’d recommend it as more of a reference than something to read in entirety.
We've been collecting posts like this for a while. Not because we think they represent the one true way of working with a particular system, but because different lessons from different developers can provide insights into aspects of large-language-model systems that we might ourselves find valuable when working with them.
This is a really comprehensive look at all of the features of Claude Code that one particular developer has used. There might be some lessons in here for you, as there I'm sure will be for me, in how we could better work with Claude Code and perhaps similar systems as well.
Agents Rule of Two: A Practical Approach to AI Agent Security
At a high level, the Agents Rule of Two states that until robustness research allows us to reliably detect and refuse prompt injection, agents must satisfy no more than two of the following three properties within a session to avoid the highest impact consequences of prompt injection.
[A] An agent can process untrustworthy inputs
[B] An agent can have access to sensitive systems or private data
[C] An agent can change state or communicate externally
It’s still possible that all three properties are necessary to carry out a request. If an agent requires all three without starting a new session (i.e., with a fresh context window), then the agent should not be permitted to operate autonomously and at a minimum requires supervision — via human-in-the-loop approval or another reliable means of validation.
Access to your private data—one of the most common purposes of tools in the first place!
Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
Some have gone so far as to argue this makes any kind of agentic software system fundamentally insecure in all circumstances.
The security team at Meta has developed the Rule of Two, an approach to minimising the security challenges. If you work with these systems, this is very much worth a read.
Agentic AI systems present unique security challenges. The fundamental security weakness of LLMs is that there is no rigorous way to separate instructions from data, so anything they read is potentially an instruction. This leads to the “Lethal Trifecta”: sensitive data, untrusted content, and external communication – the risk that the LLM will read hidden instructions that leak sensitive data to attackers. We need to take explicit steps to mitigate this risk by minimizing access to each of these three elements. It is valuable to run LLMs inside controlled containers and break up tasks so that each sub-task blocks at least one of the trifecta. Above all do small steps that can be controlled and reviewed by humans.
With the rapid rise of the MCP protocol and agentic software development–that is, where we largely set up software agents to complete a task and let them do it, calling resources locally and online–I've also seen serious concerns about the security of such processes.
Here Korny Sietsma gives a detailed overview of the problem and ways of potentially mitigating rather than eliminating the associated risks.
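As a thought experiment, the Rule of Two is simple enough to express as a pre-flight check: count how many of the three properties a session combines, and require a human in the loop when all three are present. The sketch below is entirely my own framing, not Meta's implementation:

```typescript
// Sketch of the Agents Rule of Two as a pre-flight check.
interface SessionCapabilities {
  processesUntrustedInput: boolean;     // [A] e.g. reads web pages, emails, issues
  accessesSensitiveData: boolean;       // [B] e.g. private repos, credentials, PII
  canChangeStateOrCommunicate: boolean; // [C] e.g. writes files, sends requests
}

function requiresHumanInTheLoop(s: SessionCapabilities): boolean {
  const count =
    Number(s.processesUntrustedInput) +
    Number(s.accessesSensitiveData) +
    Number(s.canChangeStateOrCommunicate);
  // All three together is the "lethal trifecta": don't run autonomously.
  return count >= 3;
}

// Example: a browsing agent with repo access that can also open pull requests
// trips the rule and needs approval before each consequential action.
console.log(requiresHumanInTheLoop({
  processesUntrustedInput: true,
  accessesSensitiveData: true,
  canChangeStateOrCommunicate: true,
})); // true
```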
Optimizing Images For Web Performance: All You Need To Know
You probably know some, if not all, of this, but it's always great to have current best practice collected together when it comes to the various techniques.
And this is what we need to know right now about optimising image performance for the web.
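As one concrete example of the kind of technique the article covers, here's a rough build-time sketch that generates resized AVIF and WebP variants with the sharp library; the file names, sizes, and quality settings are placeholders:

```typescript
// Build-time sketch: emit resized AVIF and WebP variants of a source image,
// ready to be referenced from <picture>/srcset markup.
import sharp from "sharp";

const widths = [480, 960, 1920]; // illustrative breakpoints

for (const width of widths) {
  await sharp("hero.jpg").resize({ width }).avif({ quality: 50 }).toFile(`hero-${width}.avif`);
  await sharp("hero.jpg").resize({ width }).webp({ quality: 70 }).toFile(`hero-${width}.webp`);
}
```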
The way we interact with the web today is surprisingly manual. Want to book a flight? You’ll probably head to a familiar airline’s website or open Google and type in your dates. If that site also offers hotel and car rental options, great—you might stay and book everything in one place. But more likely, you’re picky. So you go off searching for that perfect boutique hotel or the restaurant you’ve read about. Click by click, tab by tab, you stitch your trip together.
For better and for worse, the web is not simply for humans anymore.
The reality is that bots have long been among the most important visitors to most websites, in particular Googlebot, which indexes your site and has for decades been the most important source of traffic for most successful websites.
And then many sites have APIs, which are an interface for machines or code rather than for humans, even if, ultimately, the interaction mediated through the API is driven by a human.
If the promise of agentic AI is real, then while a human might ask an agent to complete a task that involves interacting with your website, the actual interaction won't be with a human, even if it is in service of one.
There are many who are aghast at this idea. But it is increasingly a reality. So, it is something you should be paying attention to if you develop websites.
Here Andy Budd reflects on the implications, something we cover at our upcoming Developer Summit and Next conferences, from both a developer perspective and more of a product design and product management perspective.
And if it's interesting to you, I highly recommend you check out the upcoming book by Katja Forbes, who is speaking at both those conferences.
Inlining Critical CSS: Does It Make Your Website Faster?
We've long been advised that inlining critical CSS is a performance-enhancing technique. But there are definitely some gotchas there. So the folks from DebugBear give us more detail on what to do and what not to do.
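If you want to experiment, there is long-standing tooling that extracts and inlines critical CSS for you rather than you maintaining it by hand. A rough sketch using the critical package follows; the paths and viewport are placeholders, and do read the DebugBear piece for the gotchas before relying on it:

```typescript
// Sketch: extract above-the-fold CSS for a built page and inline it,
// leaving the rest of the stylesheet to load asynchronously.
import { generate } from "critical";

await generate({
  base: "dist/",                  // where the built site lives
  src: "index.html",              // page to analyse
  target: "index-critical.html",  // HTML output with the critical CSS inlined
  inline: true,
  width: 1300,                    // viewport used to decide what's "critical"
  height: 900,
});
```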
What don't you know that you don't know about the video element in HTML? Rob Weychert thought he knew the element well. Turns out there was still more to know, and he shares it here.
Start implementing view transitions on your websites today – Piccalilli
The View Transition API allows us to animate between two states with relative ease. I say relative ease, but view transitions can get quite complicated fast. A view transition can be called in two ways: if you add a tiny bit of CSS, a view transition is initiated on every page change, or you can initiate it manually with JavaScript.
Yes, more on view transitions: the API we should have had about a decade ago, which might have saved us from a lot of rabbit holes and dead ends.
Here you can get up and running with View Transitions with Cyd Stumpel, who gave a fantastic talk at CSS Day on the possibilities of View Transitions that I highly recommend (it's right here on Conffab).
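The JavaScript-initiated flavour mentioned in the excerpt above is, at its core, a single call: wrap your DOM update in document.startViewTransition and the browser animates between the before and after states. A minimal sketch; the update function and feature detection are mine, and older TypeScript DOM typings may not know about the API yet:

```typescript
// Same-document view transition: the browser snapshots the old state, runs
// the callback to update the DOM, then animates between the two states.
// (For cross-document navigations, the CSS opt-in is
//  @view-transition { navigation: auto; } and no JavaScript is needed.)
function navigateTo(renderNewState: () => void) {
  // Feature-detect so browsers without the API still update, just without the animation.
  if (!document.startViewTransition) {
    renderNewState();
    return;
  }
  document.startViewTransition(() => renderNewState());
}

// Usage: navigateTo(() => { main.innerHTML = nextPageHtml; });
```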
Scroll-driven animations are something we've long needed JavaScript for, but that's now changing, with native support for scroll-driven animations in browsers here or coming soon.
Learn more from the mega-talented Cyd Stumpel
Performance Debugging With The Chrome DevTools MCP Server
If you've been thinking about how MCP servers can help you in your work as a front-end engineer, or even if you haven't, take a look at this article from DebugBear which explores how you can use the built-in Chrome debugging tools via an MCP server.
As President of Web Components, it’s my duty to publicly comment on every Web Component library and framework that exists. Today I’m taking a look at Quiet UI, a new source-available web component library soft-launched by Cory LaViska, the creator of Shoelace and Web Awesome. You might be asking “another UI library? But why?” and that’s a good question, I’ll let Cory tell you in his own words…
When Quiet UI came out a few weeks ago, we gave it a mention here.
It looks like an amazing and really valuable library of web components without any dependencies. Here, Dave Rupert looks at what it has to offer in more detail. I highly recommend you check it out.
The present and potential future of progressive image rendering – JakeArchibald.com
Progressive image formats allow the decoder to create a partial rendering when only part of the image resource is available. Sometimes it’s part of the image, and sometimes it’s a low quality/resolution version of the image. I’ve been digging into it recently, and I think there are some common misconceptions, and I’d like to see a more pragmatic solution from AVIF. Let’s dive straight into the kinds of progressive rendering that are currently available natively in browsers.
If you think performance is an issue now, well, I've got a history lesson for you from back at the dawn of the web, in the mid-90s.
In an era when hundreds of megabits, even gigabits per second, of download speed are far from rare, imagine 5 kilobits per second. Imagine a time when 10 kilobits per second was a fast shared download at an educational institution.
And imagine the kinds of techniques you'd need to come up with to manage performance, particularly of images, which could take many seconds or longer to download.
In that era, the progressive JPEG, which rendered a very low-resolution version of the image relatively quickly and then filled in the details over time, was a godsend.
Progressive image formats are not just a thing of the past. Here, Jake Archibald looks at the current state and future direction of these image formats.
As a former full-time engineer, I really enjoy coding with AI tools and the tradeoffs are worthwhile for me. AI assistance shortens my time from idea to working code, and using it has strengthened my ability to express what I want the code to do and how. But I view AI-generated code as a first draft that has much room for improvement, so I delete or refactor a good deal of it. I don’t “vibe-code” so much these days, as I prefer to fully understand what I’m building.
When AI and LLMs started creeping into the design systems discourse, the loudest use cases were about generating components and docs. But the truth is that for many teams, those aren’t actually the hardest problems to solve. At the end of 2024 and start of 2025, the attitude across the design systems community was flat-out negative toward AI. The DS space seemed deeply skeptical of claims that AI could do anything useful for us. At the time, the only examples I ever heard were: it can write components, and it can generate documentation. It can. But I don’t think either of those is actually a good use case for LLMs in design systems.
How can large-language models help teams work with design systems? Given there certainly seems to be a lot of benefit for software engineers in working with large language models, what carries over to design systems, a practice certainly adjacent to and overlapping with engineering? And what doesn't?
Here, Elyse Holladay looks at where large language models might not do the job and where they may be valuable.
I attended Remix Jam two weeks ago, then spent this past week watching React Conf 2025 videos. I have spent the last decade shipping production code on React and the last two years on Remix. Now both ecosystems are shifting, and what seemed like different approaches has become incompatible visions.
React Conf’s technical announcements were incremental: React 19.2 APIs, View Transitions experiments, the compiler getting more sophisticated. The message was clear: React is listening to the community while accepting complexity on your behalf. Stability, Composability, Capability: those are the values. The Remix team announced something else entirely: they’re breaking with React. The mental model shifts introduced by use client and the implementation complexity of Server Components forced a choice. And Remix 3 chose Simplicity. Remix 2 users pay the price; there’s no upgrade path.
That choice, to sacrifice Stability for Simplicity, makes explicit what was already true: these values cannot coexist.
A really interesting reflection on where React and Remix are headed as they diverge, and how the values embedded in each are quite distinct.
I feel we're at a time of real disruption when it comes to how we architect for the web—something I've been saying for the last year or so. React and Remix are diverging around this, but we also need to consider the impact of large language models on how we architect for the web.
If, as I think is extremely likely, we increasingly rely on LLMs to do a lot of the heavy lifting when it comes to writing code, the key question becomes: what kind of code works best with those models?
Some people argue that React does, because there's so much training data—so much React code on the web that models will have learned from. But as one interviewee on the AI-Native Dev podcast a little while ago observed, a lot of that is just pretty average React code. And how much is enough? We don't actually know.
When I work with LLMs on web technologies and ask them to generate web content and applications with functionality using "vanilla" JavaScript, CSS, and HTML, they do a very good job. They're very steerable. I think they know enough, quite honestly. The web platform fundamentals are very well documented, and that documentation is probably very high quality, which helps.
Anyway, it's a complex and challenging time. Pieces like this are valuable in helping us think through and reason about where we go from here.