How AI Is Redefining Software Engineering with Annie Vella, Distinguished Engineer

    AI, LLMs, software engineering

    Stylized illustration of a yellow and orange biplane flying against a purple sky with clouds, a glowing sun, and a blurred propeller in motion.

    “Many of us became software engineers because we found our identity in building things. Not managing things. Not overseeing things. Building things. With our own hands, our own minds, our own code. But that identity is being challenged,” wrote Annie in March 2025 in her blog post, The Software Engineering Identity Crisis, which has since been read by tens of thousands of engineers and prompted them to reflect on this (re)defining moment in what their role means.

    In this conversation, Annie dives deep into the software engineer’s identity crisis, the rise of AI agents, and how engineers can prepare for a rapidly evolving future.

    Source: How AI Is Redefining Software Engineering with Annie Vella, Distinguished Engineer | Aviator

    When Annie Vella's article first came out, we referenced it here, and it's since been very widely read by software engineers everywhere. In this conversation, Annie explores these ideas more deeply.

    Context Engineering for Non Engineers

    AI, context engineering, LLMs

    There are three layers of context you can control when using AI through web interfaces like Claude, ChatGPT, or Gemini:

    • System Instructions — Your baseline configuration
    • Projects — Context that persists for specific work. Gemini calls them gems.
    • Prompts — Specific details for right now

    Most people live entirely in Layer 3, never touching the other two. Then they wonder why their results are inconsistent.

    Think of it like clothing. Most people are using off-the-rack when they could have something tailored. The tailoring isn’t even that hard — you just need to understand where the adjustment points are.

    Let me show you what you’re missing.

    Source: Context Engineering for Non Engineers – Eleganthack

    While written for non-engineers, this is a valuable overview from Christina Wodtke of the kinds of context we can provide when interacting with large language models.
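    To make the three layers concrete, here is a minimal sketch in Python of how a chat interface might stack them before the model ever sees your prompt. The layer wording and the message shape are illustrative assumptions modelled on common chat APIs, not any vendor's actual format.

```python
# Illustrative only: stacking the three context layers into one request.
# The message format below is an assumption, not a specific vendor's API.

SYSTEM_INSTRUCTIONS = "You are a concise assistant. Answer in plain English."  # Layer 1
PROJECT_CONTEXT = "Project: Q3 report. Audience: executives."                  # Layer 2

def build_messages(prompt: str) -> list[dict]:
    """Compose all three layers into the message list sent to the model."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "system", "content": PROJECT_CONTEXT},
        {"role": "user", "content": prompt},  # Layer 3: where most people stop
    ]

messages = build_messages("Summarise last week's numbers.")
print(len(messages))  # 3
```

    Change Layer 3 and you get a different answer each time; change Layers 1 and 2 and every answer starts from the same tailored baseline, which is the consistency the article is pointing at.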

    What if you don’t need MCP at all?

    AI, LLMs, MCP, software engineering

    Terminal-style interface displaying context usage for Claude model "claude-sonnet-4-5-20250929" with a usage bar and breakdown: 67k/200k tokens used (34%), including system prompt (2.5k), system tools (13.2k), messages (6.4k), free space (133k), and autocompact buffer (45k); total token count is 864 for the current SlashCommand Tool session.

    I’m a simple boy, so I like simple things. Agents can run Bash and write code well. Bash and code are composable. So what’s simpler than having your agent just invoke CLI tools and write code? This is nothing new. We’ve all been doing this since the beginning. I’d just like to convince you that in many situations, you don’t need or even want an MCP server.

    Let me illustrate this with a common MCP server use case: browser dev tools.

    Source: What if you don’t need MCP at all?

    While MCPs occupy much attention right now, they also have significant drawbacks, including:
    • Expense in terms of tokens
    • Occupying a significant chunk of your context window
    • Security concerns, including what Simon Willison has coined the "lethal trifecta"
    But what if, in many cases, you don't actually need an MCP? We can use CLI tools instead. That's what Mario Zechner explores here.
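    As a hedged sketch of the alternative the article argues for: rather than a purpose-built MCP server, the agent gets one generic tool that runs a shell command. The run_tool name and the surrounding agent plumbing are hypothetical; only the subprocess call is doing real work.

```python
import subprocess

def run_tool(command: str, timeout: int = 30) -> str:
    """Run a CLI command and return its output for the model to read."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    if result.returncode != 0:
        return f"error ({result.returncode}): {result.stderr.strip()}"
    return result.stdout

# Composable, as the article says: ordinary pipes instead of protocol endpoints.
print(run_tool("echo hello | tr a-z A-Z"))  # HELLO
```

    One tool like this costs a handful of tokens to describe, versus the thousands an MCP server's tool schemas can occupy in the context window.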

    Writing a good CLAUDE.md

    coding agent, context engineering, LLMs, software engineering

    Grid of line charts comparing AI models on instruction-following tasks, showing accuracy versus number of instructions with line color indicating latency; models include GPT, Claude, Gemini, Grok, LLaMA, DeepSeek, and Qwen variants, with a latency color scale from yellow (low) to purple (high) on the right.

    ## Creating a good CLAUDE.md file

    The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices.

    Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once you understand when and why it’s okay to break them and you have a good reason to do so.

    Source: Writing a good CLAUDE.md | HumanLayer Blog

    Providing context to coding agents like Claude is an important step in getting the most out of these systems. In Claude, this is the CLAUDE.md file; elsewhere it's an AGENTS.md file. This article looks at a number of patterns and principles that could be valuable in developing and working with these kinds of files.
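    As a purely illustrative example (not taken from the article, and with every command, path, and convention a made-up placeholder for your own project's), a minimal CLAUDE.md following this kind of advice might look something like:

```markdown
# Project notes for Claude

## Commands
- Build: `npm run build`
- Test: `npm test` (run before every commit)

## Conventions
- TypeScript strict mode; no `any`
- Prefer small, pure functions; colocate tests with source

## Boundaries
- Never edit files under `vendor/`
- Ask before adding new dependencies
```

    Short, factual, and scoped to what the agent actually needs is the general shape most of these recommendations converge on.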

    Instagram

    PWAs

    In this entertaining and relatable video, two brothers experience the frustrations of downloading new apps and the consequences of not having them. They discuss the convenience and usability of apps, the abundance of apps on their phones, and the growing trend of relying on apps for everyday tasks. They also explore alternative solutions and find a balance between technology and simplicity.

    Source: Instagram

    Some years ago, my teenage daughter introduced me to fairbairnfilms, the TikTok/Instagram account of two Australian brothers who do quite amusing little skits on popular culture. So why am I citing them here? Well, just recently their skit was about their frustration with installing mobile apps, and how everything that there's a mobile app for should really be a web app. Perhaps I should email them and tell them exactly why. It's genuinely amusing, if not entirely safe for work, given some of their expletives. I passed it around to folks I know in the industry all over the world, who all shared my amusement.

    Agent Design Is Still Hard

    agents, AI, LLMs, software engineering

    Dark blue graphic with the text "Agent Design Is Still Hard" in large font and "My Agent abstractions keep breaking somewhere I don't expect." in smaller font below; bottom right shows a small circular profile photo and the name "Armin Ronacher."

    TL;DR: Building agents is still messy. SDK abstractions break once you hit real tool use. Caching works better when you manage it yourself, but differs between models. Reinforcement ends up doing more heavy lifting than expected, and failures need strict isolation to avoid derailing the loop. Shared state via a file-system-like layer is an important building block. Output tooling is surprisingly tricky, and model choice still depends on the task.

    Source: Agent Design Is Still Hard | Armin Ronacher’s Thoughts and Writings

    If you're considering building your own agent, this comprehensive article by Armin Ronacher will be very useful. You might also find this recent conversation with Armin on the Pragmatic Engineer podcast worth a listen.

    Alex Russell on PWAs, App Stores, and Mobile Performance – RedMonk

    front end development, JavaScript, performance, PWAs

    In this RedMonk conversation, Alex Russell, Partner Product Architect at Microsoft, discusses the state of mobile development, focusing on JavaScript performance, the state of Progressive Web Apps (PWAs), and the impact of major players like Apple (iOS) and Google (Android). They explore the importance of management in addressing web performance issues, the role of web standards in shaping the future, and the implications of AI on web development. Alex emphasizes the need for restraint in JavaScript usage and the importance of creating user-friendly web experiences, particularly on mobile devices.

    Source: Alex Russell on PWAs, App Stores, and Mobile Performance – RedMonk

    Alex Russell has been a frequent speaker at our conferences going back many years; in 2015 at our CODE conference, he gave the first talk anywhere on the idea of progressive web apps. Alex has had a profound impact on the web, from his work on the early JavaScript framework Dojo, through many years contributing to Chrome both at Google and now at Microsoft, to the technological landscape of the web through his work at TC39 and on the W3C's Technical Architecture Group. I know firsthand from many conversations we've had that he is a very engaging conversationalist. This is an interview I would highly recommend. You can read the transcript, or listen on your daily walk, commute, or drive, or wherever you consume good podcasts.

    Minefield Context Protocol

    coding agent, LLMs, MCP, software engineering

    One of the concepts gaining lots of discussion is “MCP,” which stands for Model Context Protocol. Anytime I’d ask what an MCP is, I’d usually hear it described as an API. So then, why isn’t it just called an API? Because it’s not really an API. Sound confusing? You bet!

    Just like how I learned to code by actually building something, I decided that if I was going to truly learn what this thing was, I’d have to build one. So I did, and this is how that went.

    Source: Minefield Context Protocol

    Donnie D'Amato shares his experience of developing with MCP, something that might be valuable in your own learning.

    The Performance Inequality Gap, 2026 – Infrequently Noted

    performance

    Bar chart titled "The Frontend Sadness Index" comparing web frameworks by Core Web Vitals (CWV) performance using FID (light blue), INP (dark blue), and FID-INP Change (red percentage labels); Astro and Stimulus perform best, while Angular shows the lowest CWV scores and a -4% change, with notable INP drops for Gatsby, Next.js App Router, and Next.js (each at -10%).

    Meanwhile, sites are ballooning. The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April. The 75th percentile site is now larger than two copies of DOOM, and P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.

    Source: The Performance Inequality Gap, 2026 – Infrequently Noted

    Alex Russell updates his Performance Inequality Gap research for 2026, looking at both:
    1. Where we are in terms of median devices and networks
    2. The size of the pages now commonly being delivered

    Frank Chimero · Beyond the Machine

    AI, Design, LLMs

    Beside the Machine with Brian Eno.

    I am so tired of hearing about AI. Unfortunately, this is a talk about AI.

    I’m trying to figure out how to use generative AI as a designer without feeling like shit. I am fascinated with what it can do, impressed and repulsed by what it makes, and distrustful of its owners. I am deeply ambivalent about it all. The believers demand devotion, the critics demand abstinence, and to see AI as just another technology is to be a heretic twice over.

    Today, I’d like to try to open things up a bit. I want to frame the technology more like an instrument, and get away from GenAI as an intelligence, an ideology, a tool, a crutch, or a weapon. I find the instrument framing more appealing as a person who has spent decades honing a set of skills. I want a way of working that relies on my capabilities and discernment rather than something so amorphous and transient as taste. (If taste exists in technology, it needs to be smuggled in.)

    Source: Frank Chimero · Beyond the Machine

    Frank Chimero gives a very nuanced and thoughtful meditation on the journey of AI and its impact on design and the creative endeavour. He frames AI as an instrument, drawing on the work of four renowned musicians for lessons about how we might work with AI.

    How AI Changes Design AMA with Shamus Scott Grubb – YouTube

    AI, Design, LLMs

    A shift is happening in Design… New tools. New workflows. New capabilities. Because as production gets automated, thinking becomes more valuable, and creativity becomes the new premium. That’s why I’m speaking to Shamus Scott Grubb on The Design of Everyday People Livestream. We’re diving deep into:
    1. How AI separates design from production
    2. What skills actually matter in this new reality
    3. How to position yourself on the right side

    Shamus Scott Grubb talks with Chris Nguyen from UX Playbook about what happens when AI sits between creation and production in design.

    GraphQL’s third wave: Why the future of AI needs an API of intent

    AI Native Dev, graphql, software engineering

    Dark-themed infographic titled “Why the Future of AI needs GraphQL” by Hygraph, showing a flow diagram: “LLM / AI Agent” connects to “Introspection + Planning Layer,” which leads to a GraphQL symbol box; from there, arrows lead to “Structured Execution / Typed Responses” and then to “Data/Services/DB.”

    Every technology with real staying power goes through waves of adoption. The first wave attracts the early experimenters – the ones who can sense the future before it’s evenly distributed. The second picks up the enterprises that’ve felt enough pain to seek out something better. The third comes when the rest of the world catches up, usually because the ground itself has shifted and the old tools can no longer do the job.

    GraphQL is now entering that third wave.

    Most people still describe GraphQL as an alternative to REST. That was true in 2015. What’s happening today is different. In the era of LLMs and autonomous agents, GraphQL isn’t just a nicer API; it has quietly become the API layer AI was waiting for.

    Source: GraphQL’s third wave: Why the future of AI needs an API of intent | Hygraph

    An argument that GraphQL is the right API abstraction for AI-based applications. This piece looks at the adoption of GraphQL over the last decade or so and why that makes it the perfect choice for AI applications.

    Design Thinking for AI: The 5-Stage Framework Every Builder Needs

    AI, Design, LLMs

    Grid of labeled colored boxes under two categories: "New interpretations" includes blue boxes labeled "Design Responsibly," "Design for Mental Models," and "Design for Appropriate Trust & Reliance"; "New characteristics" includes yellow boxes labeled "Design for Generative Variability," "Design for Co-Creation," and "Design for Imperfection."

    AI has changed the texture of design. We’re not designing for people alone anymore, we’re designing with/for intelligence. That changes everything. I’m obviously not the only person thinking about how Design frameworks evolve.

    Recent research has started reframing what this looks like. Adam Fard calls it “AI-First Design Thinking.” Weisz, He, and Muller (2024) propose six design principles for generative AI that move beyond the empathy-prototype-test loop:

    Source: Design Thinking for AI: The 5-Stage Framework Every Builder Needs

    I don't think it's entirely coincidental that all of today's links are about the intersection of AI and design. As I've mentioned recently, we've been covering a lot of what software engineers have been thinking about in the context of our work, but that's not to say designers aren't thinking deeply about this as well.

    New Rules for Enterprise UX and AI

    AI, Design, LLMs

    I recently attended the Enterprise UX conference in Amersfoort. The presentations made it very clear that successful AI integration requires big changes in how companies work and how we build systems. The main message was: AI is not just a new tool; it forces us to change our basic rules for design and data.

    Source: New Rules for Enterprise UX and AI — jasha.eu

    Software engineers have been exploring for several years how best to work with large language models. All categories of products, from the big frontier model labs through to early-stage startups, are exploring this space. It's a question the design field is also asking. Here, from the recent Enterprise UX conference in Amersfoort, are some responses that emerged across the talks.

    Generative UI and the Ephemeral Interface

    AI, Design, LLMs, UI

    Diagram comparing "Today" with "Future with GenUI": on the left, a single dark blue interface is shown as the same for all users; on the right, four different colored interfaces are tailored to individual users, illustrating personalized user interfaces.

    Generative UI and the Ephemeral Interface

    This week, Google debuted their Gemini 3 AI model to great fanfare and reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

    I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces up the most pertinent three or four actions based on your current task. But I find that it just gets in the way.

    Source: Generative UI and the Ephemeral Interface – Roger Wong

    One of the things people speculate about when it comes to generative AI is that our user interfaces will themselves be generated on the fly, tailored to individual users' needs. Esteemed folks like the Nielsen Norman Group have made such suggestions, but others, like Roger Wong, aren't so sure.

    Emily Campbell – AI UX Deep Dive – YouTube

    AI, Design, LLMs

    In one of the most popular episodes yet, Vitaly Friedman talked about what’s next for AI design patterns (https://www.dive.club/deep-dives/vita…).

    In that episode he frequently referenced Shape of AI (https://www.shapeof.ai/), an incredible database of AI design patterns. So I wanted to go to the source and go deep with its creator, Emily Campbell, to learn how to design great AI experiences. Because she’s studied AI products more than just about anyone I’ve ever seen 👀

    We referenced The Shape of AI and Emily Campbell's work cataloguing AI interaction patterns a few months back. Now here's an in-depth interview with her about her work.

    Why Software Development Fell to AI First

    AI, LLMs, software engineering

    I find it’s always important to examine why you made a mistake. The worst mistake I ever made was reading “Bitcoin: A Peer-to-Peer Electronic Cash System” in January of 2009, thinking “cool math toy, maybe someone will turn it into something useful someday” and moving on. My most recent mistake, however, was not realizing that software development would be the first field to be transformed by agentic AI. I always assumed it would be the last. Let’s examine why that was.

    Source: Why Software Development Fell to AI First

    A thoughtful essay on why LLMs work so well for software engineering.

    A Month of Chat-Oriented Programming

    AI Native Dev, LLMs, software engineering

    A Month of Chat-Oriented Programming, or: when did you last change your mind about something? Nick Radcliffe, 12th November 2025.

    TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests (about 1500). Against my expectation going in, I have changed my mind. I now believe chat-oriented programming (“CHOP”) can work today, if your tolerance for pain is high enough.

    The notes below describe what has and has not worked for me, working with Claude Code for an intense month (in fact, more like six weeks now).

    Source: A Month of Chat-Oriented Programming – CheckEagle

    I recently listened to this Pragmatic Engineer podcast with Flask developer Armin Ronacher (highly recommended), in which he talks about how he was an ardent resistor of using large language models for software development until he sat down and invested some time in them a few months ago, at which point he became convinced this was the future of how he would develop. Here's something similar from Nick Radcliffe, who was "a fairly outspoken and public critic of large-language models (LLMs), Chatbots, and other applications of LLM". He spent a month doing chat-oriented programming (CHOP), one of the various ways you can work with large language models as a software engineer. While he found it infuriating and frustrating, he acknowledges it did indeed make him more productive. He details his experience and the lessons he learned, which you might find valuable yourself.

    Don’t Fight the Weights

    context engineering, LLMs

    Bronze sculpture of a dynamic battle between two centaurs, one rearing and grappling the other in a dramatic pose atop a rocky base.

    For context and prompt engineers (and even chatbot users) it’s helpful to be able to recognize when you’re fighting the weights.

    Here are some signs you might be fighting the weights:
    • You find yourself threatening or pleading with the model.
    • The model makes the same mistake, even as you change the instructions.
    • The model acknowledges its mistake when pointed out, then repeats it.
    • The model seems to ignore the few-shot examples you provide.
    • The model gets 90% of the way there, but no further.
    • You find yourself repeating instructions several times.
    • You find yourself typing in ALL CAPS.

    Source: Don’t Fight the Weights

    If you've been working with large language models in any sort of non-trivial way for a while, you'll likely have run into situations where you simply cannot get the model to produce what you want. A classic example, until relatively recently, was getting output in JSON format. Time and again I run into the issue where I ask for, say, only HTML and the model adds extraneous content anyway. This framing of the challenge really helps us understand what's going on and how to work around it. Drew Breunig calls it fighting against the weights, and that makes a lot of sense.

    Software Development in the Time of Strange New Angels

    AI Native Dev, economics, LLMs, software

    Five months ago, my lifelong profession of software development changed completely. My profession was born in the 1940s, created to help fight demons. Our first encounter with the strange new angels of agentic AI is changing every aspect of it. Hardly anyone has noticed yet.

    Source: Software Development in the Time of Strange New Angels

    Every software developer, and everybody involved in the design, development, and delivery of software-based products, should read this. It's deeply insightful.

    How I use AI (Oct 2025) – Ben Stolovitz

    AI, AI Native Dev, LLMs

    Sketch of two robots, one holding a clipboard and pointing, the other holding a box with buttons or dials.

    I believe I use AI fairly typically for a software engineer, if slightly more than average. I’ve been working in the AI space since slightly before the announcement of GPT-3 in 2020.

    I use AI in a few particular ways:

    • Coding
    • Research & search
    • Summarization & transcription
    • Writing
    • Art & music

    This is, of course, purely my opinion & speculation. If you have different ways of using AI, I’d love to know!

    Source: How I use AI (Oct 2025) – Ben Stolovitz

    Another "how I use AI" type post which we have been collecting because it's early days, and seeing how different software engineers use the technology is a very interesting way to expand our own possible uses and use cases.

    You Should Write An Agent · The Fly Blog

    AI Engineering, AI Native Dev, autonomous agents, LLMs, MCP

    Diagram of a cleaning robot sequence: sweeping, carrying folded towels, using a dustpan and spray bottle, returning tools to a supply closet, and exiting through a door marked "EXIT."

    Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works, until you learn to ride one.

    There are big ideas in computing that are easy to get your head around. The AWS S3 API. It’s the most important storage technology of the last 20 years, and it’s like boiling water. Other technologies, you need to get your feet on the pedals first.

    LLM agents are like that.

    People have wildly varying opinions about LLMs and agents. But whether or not they’re snake oil, they’re a big idea. You don’t have to like them, but you should want to be right about them. To be the best hater (or stan) you can be.

    So that’s one reason you should write an agent. But there’s another reason that’s even more persuasive, and that’s

    It’s Incredibly Easy

    Source: You Should Write An Agent · The Fly Blog

    This, in just a few minutes, won't just give you an understanding of what agents are but will enable you to build your first one and understand what some of the fuss is about. You might also ask why you need MCP after seeing this.
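    In the spirit of the article, here is roughly how small that first agent can be. This is a toy sketch: the model is stubbed out with a fake_model function so the loop's shape is visible. In a real agent, fake_model would be an LLM API call that returns either a final answer or a tool request; everything around it can stay this simple.

```python
# A toy agent loop. fake_model stands in for a real LLM call (an assumption
# made so the example is self-contained and runnable).

def fake_model(messages: list[dict]) -> dict:
    """Stand-in for an LLM: request a tool once, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:  # the whole agent is a loop: model -> tool -> model
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})

print(run_agent("What is 2 + 3?"))  # The sum is 5
```

    Swap fake_model for a real model call and TOOLS for real functions, and you have the skeleton the article describes.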

    When Everyone’s a Developer, How Do We Promote the Web Platform Over React?

    AI, front end, LLMs, react

    2025 is a strange time to start a newsletter about web technology. The past four editions of WTN have focused on the intersection of the web and AI, because frankly that’s where most of the excitement is on today’s internet. But as a few people I link to this week point out, the web platform has improved to a point that it now does much of what frontend frameworks do. So why isn’t there as much exciting activity to report on regarding the web platform? The problem, I think, is that web platform improvements are being undermined by AI development trends. I see two main issues:

    I’ve heard both Vercel and Netlify (two of the leading web developer platforms) say in recent weeks that their user bases are massively increasing. Why? Because of vibe coders. The definition of a “developer” has expanded to include people who rely on prompting rather than programming. But the problem is that vibe coders get React solutions instead of web-native ones. What’s happening is that vibe coders ask their magic lamps to build an app or agent, and the AI genie gives them a React app.

    The leading large language models, like GPT-5, are defaulting to React and Next.js when asked to create web apps or sites. That entrenches the power that React has on the web development ecosystem, which means web platform improvements aren’t being utilised by AI. Which leads to point

    Source: When Everyone’s a Developer, How Do We Promote the Web Platform Over React?

    This is a really good round-up of some recent articles, many if not all of which we referenced here in the last few weeks. It also includes a reference to something I wrote recently in a similar vein. This is a real time of transformation across all of computing, and front-end isn't alone in that. Where exactly it ends up is impossible to say.

    Escape Velocity: Break Free from Framework Gravity

    front end, react, software engineering

    A man in a white shirt and jeans runs with an open book in hand, looking up in awe as a rocket launches into the sky behind him, trailing smoke and flames.

    We saw this cycle with jQuery in the past, and we’re seeing it again now with React. We’ll see it with whatever comes next. Success breeds standardization, standardization breeds inertia, and inertia convinces us that progress can wait. It’s the pattern itself that’s the problem, not any single framework. But right now, React sits at the center of this dynamic, and the stakes are far higher than they ever were with jQuery. Entire product lines, architectural decisions, and career paths now depend on React-shaped assumptions. We’ve even started defining developers by their framework: many job listings ask for “React developers” instead of frontend engineers. Even AI coding agents default to React when asked to start a new frontend project, unless deliberately steered elsewhere.

    Source: Escape Velocity: Break Free from Framework Gravity — Den Odell

    I think this is a very important read for anybody who works in the front-end, whether you are particularly invested in a framework like React or your focus is more towards the open web platform stack. I think the web is at a real crossroads. I've got more to say about that, but I won't say it here. If you take your role in developing for the web seriously, the kinds of issues raised here are very important to consider.

    Why engineers can’t be rational about programming languages

    leadership, software engineering

    Paper-cut style illustration of a man in glasses and a suit pointing to a red "HYPE" sign on his left, with a white "LOGIC" sign on his right, set against layered blue background waves.

    A programming language is the single most expensive choice a company makes, yet we treat it like a technical debate. After watching this mistake bankrupt dozens of companies and hurt hundreds more, I’ve learned the uncomfortable truth: these decisions are rarely about technology. They’re about identity, emotion, and ego, and they’re destroying your velocity and budget in ways you can’t see until it’s too late.

    Source: Why engineers can’t be rational about programming languages | spf13

    For many, many software developers, the programming language we use says something (even if not consciously) about ourselves and our identity. This can border on the tribal. It's also a way we often gatekeep: certain languages are considered inferior to others, and therefore their users are considered inferior to the users of "real" programming languages. A big challenge is when that interferes with the technical decisions we make at a more senior level, and this is a salutary tale. But I think it's important to note that we are all subject to biases like this; indeed, what makes something a bias is that we can't see it in ourselves, even when it's quite straightforward to see in others.

    How I Use Every Claude Code Feature

    AI, AI Native Dev, LLMs, software engineering

    Having stuck to Claude Code for the last few months, this post is my set of reflections on Claude Code’s entire ecosystem. We’ll cover nearly every feature I use (and, just as importantly, the ones I don’t), from the foundational CLAUDE.md file and custom slash commands to the powerful world of Subagents, Hooks, and GitHub Actions. This post ended up a bit long and I’d recommend it as more of a reference than something to read in entirety.

    Source: How I Use Every Claude Code Feature – by Shrivu Shankar

    We've been collecting posts like this for a while. Not because we think they represent the one true way of working with a particular system, but because different lessons from different developers can provide insights into aspects of a large-language model system that we might ourselves find valuable when working with them. This is a really comprehensive look at all of the features of Claude Code that one particular developer has used. There might be some lessons in here for you, as there I'm sure will be for me, in how we could better work with Claude Code and perhaps similar systems as well.

    Agents Rule of Two: A Practical Approach to AI Agent Security

    AI, autonomous agents, lethal trifecta, LLMs, security

    Venn diagram titled "Choose Two" with three intersecting circles labeled A, B, and C. Circle A represents "Process untrustworthy inputs," B represents "Access to sensitive systems or private data," and C represents "Change state or communicate externally." The overlapping areas of any two circles are labeled "Safe," while the center intersection of all three is labeled "Danger."

    At a high level, the Agents Rule of Two states that until robustness research allows us to reliably detect and refuse prompt injection, agents must satisfy no more than two of the following three properties within a session to avoid the highest impact consequences of prompt injection.

    [A] An agent can process untrustworthy inputs

    [B] An agent can have access to sensitive systems or private data

    [C] An agent can change state or communicate externally

    It’s still possible that all three properties are necessary to carry out a request. If an agent requires all three without starting a new session (i.e., with a fresh context window), then the agent should not be permitted to operate autonomously and at a minimum requires supervision — via human-in-the-loop approval or another reliable means of validation.

    Source: Agents Rule of Two: A Practical Approach to AI Agent Security

    Agentic systems using large language models have some very fundamental security challenges, particularly where what Simon Willison has called the lethal trifecta exists:
    • Access to your private data—one of the most common purposes of tools in the first place!
    • Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
    • The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
    Some have gone so far as to argue this makes any kind of agentic software system fundamentally insecure in all circumstances. The security team at Meta has developed the Rule of Two—an approach to minimising these security challenges. If you work with these systems, this is very much worth a read.
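As a rough sketch (the property names below are my own labels, not Meta's), the Rule of Two amounts to a simple gate over a session's capabilities: count how many of the three risky properties are enabled, and refuse autonomous operation when all three are.

```javascript
// Hedged sketch of the Agents Rule of Two as a capability gate.
// Property names are illustrative, not taken from the original post.
const RISKY_PROPERTIES = [
  "processesUntrustworthyInputs", // [A]
  "accessesSensitiveData",        // [B]
  "changesStateOrCommunicates",   // [C]
];

// A session may operate autonomously only while at most two of the
// three risky properties are enabled; all three requires supervision,
// e.g. human-in-the-loop approval.
function mayRunAutonomously(session) {
  const enabled = RISKY_PROPERTIES.filter((p) => session[p]).length;
  return enabled <= 2;
}
```

In practice such a check would sit wherever the agent acquires a new capability (a tool call, a data source), forcing a fresh session or a human approval step before the third property is switched on.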

    Agentic AI and Security

    AI Native Dev, MCP, security

    “Diagram titled ‘The lethal trifecta’ showing three overlapping coloured ovals labeled ‘Access to Private Data’, ‘Exposure to Untrusted Content’, and ‘Ability to Externally Communicate’.”

    Agentic AI systems present unique security challenges. The fundamental security weakness of LLMs is that there is no rigorous way to separate instructions from data, so anything they read is potentially an instruction. This leads to the “Lethal Trifecta”: sensitive data, untrusted content, and external communication – the risk that the LLM will read hidden instructions that leak sensitive data to attackers. We need to take explicit steps to mitigate this risk by minimizing access to each of these three elements. It is valuable to run LLMs inside controlled containers and break up tasks so that each sub-task blocks at least one of the trifecta. Above all do small steps that can be controlled and reviewed by humans.

    Source: Agentic AI and Security

    With the rapid rise of the MCP protocol and agentic software development–that is, where we largely set up software agents to complete a task and let them do it, calling resources locally and online–I've also seen serious concerns about the security of such processes. Here Korny Sietsma gives a detailed overview of the problem and ways of potentially mitigating rather than eliminating the associated risks.

    Optimizing Images For Web Performance: All You Need To Know

    image compression, image optimisation, performance

    Side-by-side view of the Cawston Press website displayed on a desktop monitor and a smartphone, featuring a rhubarb drink can with the tagline “Life’s Best Pressed” and fruit imagery in the background, emphasizing responsive design across devices.

    There are a few strategies you can follow to speed up loading your images:

    • Reducing file size so images load more quickly
    • Prioritizing important images to use network resources efficiently
    • Avoiding layout shift when images render
    • Setting up caching properly
    • Not embedding large images in HTML or CSS
    • Using optimization features inside your CMS or plugins

    Let’s look at each of these in detail.

    Source: Optimizing Images For Web Performance: All You Need To Know | DebugBear

    You probably know some, if not all, of this, but it's always great to have current best practice collected together. And this is what we need to know right now about optimising image performance for the web.
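Several of the strategies listed in the excerpt can be combined directly in markup. A hedged sketch (file names and dimensions are placeholders): responsive sources reduce file size, explicit width and height avoid layout shift, and lazy loading keeps offscreen images from competing for network resources.

```html
<!-- Placeholder file names; width/height reserve space to avoid layout shift -->
<img
  src="product-800.jpg"
  srcset="product-400.jpg 400w, product-800.jpg 800w"
  sizes="(max-width: 600px) 400px, 800px"
  width="800"
  height="600"
  loading="lazy"
  decoding="async"
  alt="Product photo">
```

For an above-the-fold hero image you'd do the opposite on priority: drop `loading="lazy"` and add `fetchpriority="high"` so the browser fetches it early.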

    Will AI Agents Kill the Web as We Know It?

    AI, autonomous agents, Design, LLMs

    The way we interact with the web today is surprisingly manual. Want to book a flight? You’ll probably head to a familiar airline’s website or open Google and type in your dates. If that site also offers hotel and car rental options, great—you might stay and book everything in one place. But more likely, you’re picky. So you go off searching for that perfect boutique hotel or the restaurant you’ve read about. Click by click, tab by tab, you stitch your trip together.

    Source: Will AI Agents Kill the Web as We Know It? | Andy Budd

    For better and for worse, the web is not simply for humans anymore. The reality is that bots have long been among the most important visitors to most websites, in particular Googlebot, which indexes your site and has for decades been the most important source of traffic for most successful websites. Many sites also have APIs, which are an interface for machines and code rather than for humans, though even when interaction is mediated through an API, ultimately it has been driven by a human. If the promise of agentic AI is real, then while a human might ask an agent to complete a task for them, and that task might involve interacting with your website, the actual interaction won't be with a human, even if it is in service of one. There are many who are aghast at this idea, but it is increasingly a reality, so it is something you should be paying attention to if you develop websites. Here Andy Budd reflects on the implications. It's something we cover at our upcoming Developer Summit and Next conferences, from both a developer perspective and a product design and product management perspective. And if it interests you, I highly recommend the upcoming book by Katja Forbes, who is speaking at both those conferences.

    Inlining Critical CSS: Does It Make Your Website Faster?

    CSS, performance

    We've long been advised that inlining critical CSS is a performance-enhancing technique. But there are definitely some gotchas there. So the folks from DebugBear give us more detail on what to do and what not to do.

    V7: Video Killed the Web Browser Star

    HTML, video

    What don't you know that you don't know about the video element in HTML? Rob Weychert thought he knew the element well. Turns out there was still more to know, and he shares it here.

    Start implementing view transitions on your websites today – Piccalilli

    CSS, CSS Animation, View Transitions

    Bold black text on a light background reads “Start implementing view transitions on your websites today” above a yellow horizontal line; below the line, the text “Piccalilli From set.studio” appears on the left, and “Cyd Stumpel 28 October 2025” on the right.

    The View Transition API allows us to animate between two states with relative ease. I say relative ease, but view transitions can get quite complicated fast. A view transition can be called in two ways; if you add a tiny bit of CSS, a view transition is initiated on every page change, or you can initiate it manually with JavaScript.

    Source: Start implementing view transitions on your websites today – Piccalilli

    Yes, more on View Transitions. The API we should have had about a decade ago; it might have saved us from a lot of rabbit holes and dead ends. Here you can get up and running with View Transitions with Cyd Stumpel, who gave a fantastic talk at CSS Day on the possibilities of View Transitions that I highly recommend (it's right here on Conffab).
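The manual, JavaScript-initiated flavour the excerpt mentions uses the real `document.startViewTransition` API. A minimal sketch, with a fallback guard of my own for browsers (or environments) that don't support it:

```javascript
// Hedged sketch: a progressively-enhanced wrapper around
// document.startViewTransition. The fallback guard is my addition.
function updateWithTransition(updateDOM) {
  // In browsers without the API (or outside the browser), just update.
  if (typeof document === "undefined" || !document.startViewTransition) {
    updateDOM();
    return null;
  }
  // The browser snapshots the old state, runs the callback to produce
  // the new state, then animates between the two snapshots.
  return document.startViewTransition(updateDOM);
}
```

The per-page-change flavour is even smaller: a `@view-transition { navigation: auto; }` rule in your CSS opts same-origin navigations into a default cross-fade.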

    Start using Scroll-driven animations today!

    animation, CSS, CSS Animation, scroll-driven animation

    Scroll-driven animations are something we've long needed JavaScript for, but that's now changing, with native support for scroll-driven animations in browsers here or coming soon. Learn more from the mega-talented Cyd Stumpel.

    Performance Debugging With The Chrome DevTools MCP Server

    AI, MCP, performance, software engineering

    If you've been thinking about how MCP servers can help you in your work as a front-end engineer, or even if you haven't, take a look at this article from DebugBear, which explores how you can use the built-in Chrome debugging tools via an MCP server.

    Lots to shout about in Quiet UI – daverupert.com

    web components

    Illustration of a cartoon mouse wearing a hoodie and using a laptop, surrounded by UI elements like buttons, chat bubbles, and rating stars, with the text "A UI library for the Web" and subtext "Focusing on accessibility, longevity, performance, and simplicity."

    As President of Web Components, it’s my duty to publicly comment on every Web Component library and framework that exists. Today I’m taking a look at Quiet UI, a new source-available web component library soft-launched by Cory LaViska, the creator of Shoelace WebAwesome. You might be asking “another UI library? But why?” and that’s a good question, I’ll let Cory tell you in his own words…

    Source: Lots to shout about in Quiet UI – daverupert.com

    When Quiet UI came out a few weeks ago, we gave it a mention here. Looks like an amazing and really valuable library of web components without any dependencies. Here, Dave Rupert looks at what it has to offer in more detail. Highly recommend you check it out.

    The present and potential future of progressive image rendering – JakeArchibald.com

    image compression, image optimisation, performance

    Williams-Renault FW14B Formula 1 car in blue, yellow, and white livery with Canon and Camel branding, racing on a tarmac track lined with hay bales and blurred spectators in the background.

    Progressive image formats allow the decoder to create a partial rendering when only part of the image resource is available. Sometimes it’s part of the image, and sometimes it’s a low quality/resolution version of the image. I’ve been digging into it recently, and I think there are some common misconceptions, and I’d like to see a more pragmatic solution from AVIF. Let’s dive straight into the kinds of progressive rendering that are currently available natively in browsers.

    Source: The present and potential future of progressive image rendering – JakeArchibald.com

    If you think performance is an issue now, then I've got a history lesson for you from the dawn of the web in the mid-90s. In an era when hundreds of megabits, even gigabits, per second of download speed are far from rare, imagine 5 kilobits per second. Imagine a time when 10 kilobits per second was a fast shared download at an educational institution. And imagine the kinds of techniques you'd need to come up with to manage performance, particularly of images, which could take many seconds or longer to download. In that era, the progressive JPEG, which rendered a very low-resolution version of the image relatively quickly and then filled in the details over time, was a godsend. Progressive image formats are not just a thing of the past. Here, Jake Archibald looks at the current state and future direction of these image formats.

    Measured AI | Note to Self

    AI, AI Native Dev, LLMs, software engineering

    Small mosaic of a pixelated black creature with white eyes and teeth, bordered in light blue, affixed to a concrete wall near the ground surrounded by leaves and a metal fence

    As a former full-time engineer, I really enjoy coding with AI tools and the tradeoffs are worthwhile for me. AI assistance shortens my time from idea to working code, and using it has strengthened my ability to express what I want the code to do and how. But I view AI-generated code as a first draft that has much room for improvement, so I delete or refactor a good deal of it. I don’t “vibe-code” so much these days, as I prefer to fully understand what I’m building.

    Source: Measured AI | Note to Self

    3 practical ways LLMs can support design systems teams today

    AI, Design Systems, LLMs

    Screenshot of a website interface with a dark red header containing navigation links for Foundation, Components, Patterns, Resources & tools, and a Search icon; below are colorful cards for Patterns and Resources & Tools sections.

    When AI and LLMs started creeping into the design systems discourse, the loudest use cases were about generating components and docs. But the truth is that for many teams, those aren’t actually the hardest problems to solve. At the end of 2024 and start of 2025, the attitude across the design systems community was flat-out negative toward AI. The DS space seemed deeply skeptical of claims that AI could do anything useful for us. At the time, the only examples I ever heard were: It can write components, and it can generate documentation. It can! But I don’t think either of those is actually a good use case for LLMs in design systems.

    Source: 3 practical ways LLMs can support design systems teams today – zeroheight

    How can large language models help teams work with design systems? Given there certainly seems to be a lot of benefit for software engineers in working with large language models, what carries over to design systems, a practice certainly adjacent to, and overlapping with, engineering? And what doesn't? Here, Elyse Holladay looks at where large language models might not do the job and where they may be valuable.

    React and Remix Choose Different Futures

    AI Native Dev, architecture, react, remix

    Split image artwork with Hokusai’s traditional woodblock wave on the left transforming into a pixelated and then digital-style wave on the right, featuring vivid colors and grid lines; text at the bottom reads “THE · WAVE · OF · THE · FUTURE.”

    I attended Remix Jam two weeks ago, then spent this past week watching React Conf 2025 videos. I have spent the last decade shipping production code on React and the last two years on Remix. Now both ecosystems are shifting, and what seemed like different approaches has become incompatible visions.

    React Conf’s technical announcements were incremental: React 19.2 APIs, View Transitions experiments, the compiler getting more sophisticated. The message was clear: React is listening to the community while accepting complexity on your behalf. Stability, Composability, Capability: those are the values. The Remix team announced something else entirely: they’re breaking with React. The mental model shifts introduced by use client and the implementation complexity of Server Components forced a choice. And Remix 3 chose Simplicity. Remix 2 users pay the price; there’s no upgrade path.

    That choice, to sacrifice Stability for Simplicity, makes explicit what was already true: these values cannot coexist.

    Source: React and Remix Choose Different Futures

    A really interesting reflection on where React and Remix are headed as they diverge, and how the values embedded in each are quite distinct. I feel we're at a time of real disruption when it comes to how we architect for the web—something I've been saying for the last year or so. React and Remix are diverging around this, but we also need to consider the impact of large language models on how we architect for the web. If, as I think is extremely likely, we increasingly rely on LLMs to do a lot of the heavy lifting when it comes to writing code, the key question becomes: what kind of code works best with those models? Some people argue that React does, because there's so much training data—so much React code on the web that models will have learned from. But as one interviewee on the AI-Native Dev podcast a little while ago observed, a lot of that is just pretty average React code. And how much is enough? We don't actually know. When I work with LLMs on web technologies and ask them to generate web content and applications with functionality using "vanilla" JavaScript, CSS, and HTML, they do a very good job. They're very steerable. I think they know enough, quite honestly. The web platform fundamentals are very well documented, and that documentation is probably very high quality, which helps. Anyway, it's a complex and challenging time. Pieces like this are valuable in helping us think through and reason about where we go from here.