Agentic AI systems present unique security challenges. The fundamental security weakness of LLMs is that there is no rigorous way to separate instructions from data, so anything they read is potentially an instruction. This leads to the “Lethal Trifecta”: sensitive data, untrusted content, and external communication – the risk that the LLM will read hidden instructions that leak sensitive data to attackers. We need to take explicit steps to mitigate this risk by minimizing access to each of these three elements. It is valuable to run LLMs inside controlled containers and break up tasks so that each sub-task blocks at least one of the trifecta. Above all do small steps that can be controlled and reviewed by humans.
With the rapid rise of the MCP protocol and agentic software development–that is, where we largely set up software agents to complete a task and let them do it, calling resources locally and online–I've also seen serious concerns about the security of such processes.
Here Korny Sietsma gives a detailed overview of the problem and ways of potentially mitigating rather than eliminating the associated risks.
Optimizing Images For Web Performance: All You Need To Know
You probably know some, if not all, of this, but it's always valuable to have current best practice collected in one place.
And this is what we need to know right now about optimising image performance for the web.
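One of the standard techniques in this space is responsive, lazily loaded images, set up declaratively (file names and widths here are illustrative):

```html
<!-- The browser picks the best candidate from srcset based on the
     sizes hint, and defers loading until the image nears the viewport. -->
<img
  src="hero-800.jpg"
  srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  loading="lazy"
  decoding="async"
  alt="Description of the hero image">
```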
The way we interact with the web today is surprisingly manual. Want to book a flight? You’ll probably head to a familiar airline’s website or open Google and type in your dates. If that site also offers hotel and car rental options, great—you might stay and book everything in one place. But more likely, you’re picky. So you go off searching for that perfect boutique hotel or the restaurant you’ve read about. Click by click, tab by tab, you stitch your trip together.
For better and for worse, the web is not simply for humans anymore.
The reality is that bots have long been among the most important visitors to most websites. Googlebot in particular indexes your site, and has for decades been the most important source of traffic for most successful websites.
And many sites have APIs, which are an interface for machines and code rather than for humans. But even when the interaction is mediated through an API, ultimately it was driven by a human.
If the promise of agentic AI is real, then while a human might ask an agent to complete a task for them, and that task might involve interacting with your website, the actual interaction won't be with a human, even if it is in service of one.
There are many who are aghast at this idea. But it is increasingly a reality. So, it is something you should be paying attention to if you develop websites.
Here Andy Budd reflects on the implications, and it's something we cover at our upcoming Developer Summit and Next conferences from both a developer and more of a product design and product management perspective.
And if it's interesting to you, I highly recommend you check out the upcoming book by Katja Forbes, who is speaking at both those conferences.
Inlining Critical CSS: Does It Make Your Website Faster?
We've long been advised that inlining critical CSS is a performance-enhancing technique. But there are definitely some gotchas there. So the folks from DebugBear give us more detail on what to do and what not to do.
You don't know what you don't know about the video element in HTML. Rob Weychert thought he knew the element well. Turns out there was still more to know, and he shares it here.
Start implementing view transitions on your websites today – Piccalilli
The View Transition API allows us to animate between two states with relative ease. I say relative ease, but view transitions can get quite complicated fast. A view transition can be called in two ways: if you add a tiny bit of CSS, a view transition is initiated on every page change, or you can initiate it manually with JavaScript.
Yes, more on View Transitions: the API we should have had about a decade ago, which might have saved us from a lot of rabbit holes and dead ends.
Here you can get up and running with View Transitions with Cyd Stumpel, who gave a fantastic talk at CSS Day on the possibilities of View Transitions that I highly recommend (it's right here on Conffab).
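As a sketch of the CSS route the excerpt mentions, a single at-rule opts same-origin page navigations into cross-document view transitions:

```css
/* Opt same-origin navigations into a cross-document view transition.
   Unsupported browsers ignore the at-rule and fall back to a normal
   page load. The same-document case instead calls
   document.startViewTransition() from JavaScript. */
@view-transition {
  navigation: auto;
}
```

Because unsupported browsers simply ignore the rule, this is progressive enhancement at its simplest.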
Scroll-driven animations are something that we've long needed JavaScript to do, but that's now changing with native support for scroll-driven animation in browsers here or coming soon.
Learn more from the mega-talented Cyd Stumpel.
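As a minimal sketch of what native scroll-driven animation looks like (the class name is illustrative), an element can fade in as it enters the viewport with no JavaScript at all:

```css
/* Tie the animation's progress to the element's position in the
   viewport rather than to time. Browsers without support show the
   element at full opacity, since the zero-duration animation ends
   immediately. */
.reveal {
  animation: fade-in linear both;
  animation-timeline: view();
  animation-range: entry 0% entry 100%;
}

@keyframes fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}
```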
Performance Debugging With The Chrome DevTools MCP Server
If you've been thinking about how MCP servers can help you in your work as a front-end engineer, or even if you haven't, take a look at this article from DebugBear which explores how you can use the built-in Chrome debugging tools via an MCP server.
As President of Web Components, it’s my duty to publicly comment on every Web Component library and framework that exists. Today I’m taking a look at Quiet UI, a new source-available web component library soft-launched by Cory LaViska, the creator of Shoelace and Web Awesome. You might be asking “another UI library? But why?” and that’s a good question, I’ll let Cory tell you in his own words…
When Quiet UI came out a few weeks ago, we gave it a mention here.
Looks like an amazing and really valuable library of web components without any dependencies. Here, Dave Rupert looks at what it has to offer in more detail. Highly recommend you check it out.
The present and potential future of progressive image rendering – JakeArchibald.com
Progressive image formats allow the decoder to create a partial rendering when only part of the image resource is available. Sometimes it’s part of the image, and sometimes it’s a low quality/resolution version of the image. I’ve been digging into it recently, and I think there are some common misconceptions, and I’d like to see a more pragmatic solution from AVIF. Let’s dive straight into the kinds of progressive rendering that are currently available natively in browsers.
If you think performance is an issue now, I've got a history lesson for you from the dawn of the web in the mid-90s.
In an era when hundreds of megabits, even gigabits per second, of download speed are far from rare, imagine 5 kilobits per second. Imagine a time when 10 kilobits per second was a fast shared download at an educational institution.
And imagine the kinds of techniques you'd need to come up with to manage performance, particularly of images, which could take many seconds or longer to download.
In that era, the progressive JPEG, which rendered a very low-resolution version of the image relatively quickly and then filled in the details over time, was a godsend.
Progressive image formats are not just a thing of the past. Here, Jake Archibald looks at the current state and future direction of these image formats.
As a former full-time engineer, I really enjoy coding with AI tools and the tradeoffs are worthwhile for me. AI assistance shortens my time from idea to working code, and using it has strengthened my ability to express what I want the code to do and how. But I view AI-generated code as a first draft that has much room for improvement, so I delete or refactor a good deal of it. I don’t “vibe-code” so much these days, as I prefer to fully understand what I’m building.
When AI and LLMs started creeping into the design systems discourse, the loudest use cases were about generating components and docs. But the truth is that for many teams, those aren’t actually the hardest problems to solve. At the end of 2024 and start of 2025, the attitude across the design systems community was flat-out negative toward AI. The DS space seemed deeply skeptical of claims that AI could do anything useful for us. At the time, the only examples I ever heard were: It can write components, and it can generate documentation. It can. But I don’t think either of those is actually a good use case for LLMs in design systems.
How can large language models help teams work with design systems? Given there certainly seems to be a lot of benefit for software engineers in working with large language models, what carries over to design systems, a practice adjacent to and overlapping with engineering? And what doesn't work?
Here, Elyse Holladay looks at where large language models might not do the job and where they may be valuable.
I attended Remix Jam two weeks ago, then spent this past week watching React Conf 2025 videos. I have spent the last decade shipping production code on React and the last two years on Remix. Now both ecosystems are shifting, and what seemed like different approaches has become incompatible visions.
React Conf’s technical announcements were incremental: React 19.2 APIs, View Transitions experiments, the compiler getting more sophisticated. The message was clear: React is listening to the community while accepting complexity on your behalf. Stability, Composability, Capability: those are the values. The Remix team announced something else entirely: they’re breaking with React. The mental model shifts introduced by use client and the implementation complexity of Server Components forced a choice. And Remix 3 chose Simplicity. Remix 2 users pay the price; there’s no upgrade path.
That choice, to sacrifice Stability for Simplicity, makes explicit what was already true: these values cannot coexist.
A really interesting reflection on where React and Remix are headed as they diverge, and how the values embedded in each are quite distinct.
I feel we're at a time of real disruption when it comes to how we architect for the web—something I've been saying for the last year or so. React and Remix are diverging around this, but we also need to consider the impact of large language models on how we architect for the web.
If, as I think is extremely likely, we increasingly rely on LLMs to do a lot of the heavy lifting when it comes to writing code, the key question becomes: what kind of code works best with those models?
Some people argue that React does, because there's so much training data—so much React code on the web that models will have learned from. But as one interviewee on the AI-Native Dev podcast a little while ago observed, a lot of that is just pretty average React code. And how much is enough? We don't actually know.
When I work with LLMs on web technologies and ask them to generate web content and applications with functionality using "vanilla" JavaScript, CSS, and HTML, they do a very good job. They're very steerable. I think they know enough, quite honestly. The web platform fundamentals are very well documented, and that documentation is probably very high quality, which helps.
Anyway, it's a complex and challenging time. Pieces like this are valuable in helping us think through and reason about where we go from here.
Founders Over Funders. Inventors Over Investors. – Anil Dash
I’ve been following tech news for decades, and one of the worst trends in the broader cultural conversation about technology — one that’s markedly accelerated over the last decade — is the shift from talking about people who create tech to focusing on those who merely finance it.
I have essentially nothing to add to this; it is very strongly my feeling as well. I know a lot of people are rather sceptical about AI, but my hope is that AI makes those people who make stuff and do stuff, who build things, more productive, or less needful of financing, and ultimately gives rise to a new and extraordinary kind of flourishing.
Because right now we are optimising for the kind of people who can seek funding, and for the kind of products and businesses where funding makes sense from the VC perspective rather than the product perspective. And where has that led us over the last fifteen or twenty years? To a very moribund, boring place, far less interesting than the web of, let's say, 20 years ago.
So go and read this and if you're the kind of person who likes to build things, think about what you can build in this new era of large language models.
The following designs might be simple to create in a tool like Figma, but getting them to work fluidly in the browser is a different story. It’s not complicated, but there are a few things that we need to consider.
A lot of people say AI will make us all “managers” or “editors”…but I think this is a dangerously incomplete view! Personally, I’m trying to code like a surgeon. A surgeon isn’t a manager, they do the actual work! But their skills and time are highly leveraged with a support team that handles prep, secondary tasks, admin. The surgeon focuses on the important stuff they are uniquely good at.
We collect quite a few reflections like these from software engineers about how they work with large language models and how they think about that work.
I think this analogy of a surgeon is actually very interesting and possibly quite helpful. Surgery is increasingly automated and supported. A surgeon is surrounded by experts in anaesthetics and surgical nursing teams, very often using robotic arms and devices, and magnifying technology rather than their unaided eye. And yet it's ultimately the knowledge and specific skill of the surgeon that leads to successful outcomes.
Write Code That Runs in the Browser, or Write Code the Browser Runs
I’ve been thinking about a note from Alex Russell where he says: “any time you’re running JS on the main thread, you’re at risk of being left behind by progress. The zen of web development is to spend a little time in your own code, and instead to glue the big C++/Rust subsystems together, then get out of the bloody way.”
In his thread on Bluesky, Alex continues: “How do we do this? Using the declarative systems that connect to those big piles of C++/Rust: CSS for the compositor (including scrolling & animations), HTML parser to build DOM, and for various media, dishing off to the high-level systems in ways that don’t call back into your JS.” I keep thinking about this difference: “I need to write code that does X” versus “I need to write code that calls a browser API to do X.”
There’s a big difference between A) making suggestions for the browser, and B) being its micromanager. Hence the title: you can write code that will run in the browser, or you can write code that calls the browser to run.
Back in 2009, I ran a competition at a conference called "No-Bit Sherlock." The premise was simple: as SVG was gaining browser support, we should minimize—or ideally eliminate—our reliance on bitmap images on the web.
By "bits," I mean raster graphics. At the time, web pages were littered with images of text, typically saved as JPEGs or PNGs. Illustrations that could have been rendered as vectors were routinely served as bitmaps instead.
While SVG adoption has increased dramatically since then, the unnecessary use of bitmap images remains surprisingly common. It's become an instinct—a default we reach for without questioning.
So what does this have to do with the article I'm discussing? It's about instinct too. This piece captures a crucial distinction between two approaches to web development:
One mindset is "write code that does X." The other is "write code that calls a browser API to do X."
Is this a difference without a distinction? Absolutely not. These represent fundamentally different philosophies about what we do as front-end engineers. Our instinct should be to understand the browser's capabilities first, then write the minimal code necessary to leverage them.
Animation illustrates this perfectly. Fifteen to twenty years ago, web animation barely existed beyond hover states. Any movement, appearance, or transformation required JavaScript manipulating DOM properties directly. With JavaScript as our only tool, we blocked the main thread with every animation.
Fast-forward to today. Browsers offer rich built-in animation capabilities we can trigger with JavaScript or even CSS alone, keeping the main thread clear and performance smooth.
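The shift described in the previous two paragraphs can be seen in miniature: where we once wrote a requestAnimationFrame loop mutating style properties every frame, a declarative rule now lets the browser run the whole animation, typically off the main thread (selector and values here are illustrative):

```css
/* The browser, often on the compositor thread, interpolates the
   transform; no JavaScript runs per frame, and the main thread
   stays free. */
.card {
  transition: transform 200ms ease-out;
}

.card:hover {
  transform: translateY(-4px);
}
```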
View transitions represent the latest iteration—essentially the holy grail. We can now animate page transitions without implementing the animation ourselves. We simply declare what we want, and the browser handles it. The result? High performance, better developer experience, and progressive enhancement. In Firefox (where multi-page view transitions aren't yet implemented as of October 2025), users simply get traditional page transitions—no breakage, no compromise.
The pattern repeats across features: form validation, layout systems, accessibility features. There's extensive built-in functionality we can enhance with JavaScript when needed, rather than replacing entirely.
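Form validation is a good example of that pattern: the declarative attributes below get checking, error reporting, and focus handling from the browser for free, with JavaScript needed only to enhance the experience (field names are illustrative):

```html
<!-- The browser blocks submission, reports the error, and focuses
     the offending field; no validation JavaScript required. -->
<form>
  <label>
    Email
    <input type="email" name="email" required>
  </label>
  <button>Subscribe</button>
</form>
```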
Our job as professional front-end developers is to know the platform's capabilities and use them fully. The browser will always be orders of magnitude more performant than our custom implementations. But this requires stepping outside the comfort zone of your preferred framework or library to understand the web platform itself—the foundation everything else is built on.
Last month, I embarked on an AI-assisted code safari. I tried different applications (Claude Code, Codex, Cursor, Cline, Amp, etc.) and different models (Opus, GPT-5, Qwen Coder, Kimi K2, etc.), trying to get a better lay of the land. I find it useful to take these macro views occasionally, time-boxing them explicitly, to build a mental model of the domain and to prevent me from getting rabbit-holed by tool selection during project work.
The takeaway from this safari was that we are undervaluing speed. We talk constantly about model accuracy, their ability to reliably solve significant PRs, and their ability to solve bugs or dig themselves out of holes. Coupled with this conversation is the related discussion about what we do while an agent churns on a task. We sip coffee, catch up on our favorite shows, or make breakfast for our family all while the agent chugs away. Others spin up more agents and attack multiple tasks at once, across a grid of terminal windows. Still others go full async, handing off Github issues to OpenAI’s Codex, which works in the cloud by itself… often for hours.
Along with Simon Willison, Geoffrey Huntley, and a handful of other folks, I find Drew Breunig's insights and shared experience of working with large language models as a software engineer very valuable.
Here Drew reports back on a month of intensive use of multiple models and applications.
As he puts it, “we are undervaluing speed.”
Drew examines the swarm pattern for working, particularly with slower models, and predicts what the future of working with large language models as a software engineer might look like.
Even though AI has been the most-talked-about topic in tech for a few years now, we’re in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.
Most people who actually have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren’t the big, loud billionaires that usually get treated as the spokespeople for all of tech.And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:
Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
In early 1995 I was 23 and living in a terraced house in Bristol with four friends, about 18 months after leaving university. I’d given up on trying to be an illustrator, had a bit of freelance work making models for Aardman Animations, and would soon be the only one of my friends not to have permanent work. I was increasingly interested in technology and this brand new thing: the Internet.
It was about a year or 18 months before 1995, when this piece is set, that I first got online in any modern sense.
At university back in the 1980s, we'd had access to email and bulletin boards, but it was in the early 1990s that, like the writer, I first connected to the web.
We had remarkably similar computers and probably very similar experiences as well. I recall Adam Engst's book, and how you bought it as much because it came with MacTCP, saving you from paying for it separately, as for the book itself.
One observation in particular stood out:
Since getting a modem at the start of the month, and hooking up to the Internet, I’ve spent about an hour every evening actually online (which I guess is costing me about £1 a night), and much of the days and early evenings fiddling about with things. It’s so complicated. All the hype never mentioned that
Definitely rings true over three decades later. It was expensive and it was complicated, but relative to anything else related to computing, it probably wasn't any more complicated or any more expensive. Things were just like that back then.
Whether for a bit of reminiscence of the early days of the web, or to get a sense of what the web was like before you used it and perhaps before you were even born, I highly recommend this lovely reflection.
Improving the trustworthiness of Javascript on the Web
It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT), that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web. We will discuss the problem we need to solve, and build up to a solution resembling the current transparency specification draft. We hope to build even wider consensus on the solution design in the near future.
Web application security can be particularly challenging because of the difficulty of ensuring the integrity of the resources a web application includes.
Native apps are typically self-contained bundles: once the app is built and deployed, its resources (the code that runs, images, and so on) don't, indeed can't, change.
Web applications are inherently dynamic. Whatever lives at the end of a URL, for example JavaScript (whether our own or third-party script), CSS, images, and so on, can change at any time.
And this presents a very significant challenge to security for web applications.
There are techniques we can use to mitigate these challenges, but the W3C and a number of major cloud providers are working on WAICT (Web Application Integrity, Consistency, and Transparency) to address these challenges at a more fundamental level.
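One existing technique in this space (not necessarily what the article proposes) is Subresource Integrity, though it only covers resources whose hash you can fix ahead of time, which is part of why WAICT aims at something more fundamental. The URL and hash below are placeholders:

```html
<!-- The browser hashes the downloaded file and refuses to execute
     it if the hash doesn't match the integrity attribute. -->
<script
  src="https://cdn.example.com/lib.js"
  integrity="sha384-REPLACE-WITH-REAL-HASH"
  crossorigin="anonymous"></script>
```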
Identifying Accessibility Data Gaps in CodeGen Models
Rather than relying on anecdotal evidence or cherry-picked examples, I built a systematic approach to evaluate how well LLMs — starting with GPT-4 — generate accessible HTML. The methodology is straightforward but comprehensive: I created a Python testing framework that sent carefully crafted prompts to Azure OpenAI’s GPT-4 model, collected the generated HTML responses, and then manually analyzed these responses for accessibility compliance.
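To make the flavour of such automated first-pass checks concrete, here's a deliberately tiny sketch of one of them: flagging generated img tags that lack alt text. This is illustrative only, not the author's actual framework, which also relied on manual analysis:

```python
import re


def missing_alt_images(html: str) -> list[str]:
    """Return <img> tags in generated HTML that lack an alt attribute.

    A crude regex pass is enough for a first screen; a real audit
    also covers labels, headings, landmarks, contrast, and more.
    """
    imgs = re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE)
    return [tag for tag in imgs
            if not re.search(r"\balt\s*=", tag, flags=re.IGNORECASE)]
```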
This thesis investigates the impact of accessibility overlays on the usability and user experience (UX) for individuals with permanent visual impairments, thereby addressing a gap in academic research. Given the rise in visual impairments due to population growth and ageing, this focus is relevant and timely. The conducted research involved an evaluation study that comprised two parts: a technical evaluation of accessibility overlays against the WCAG 2.1 standard, and a user study that assessed the usability and UX of 21 individuals with permanent visual impairments when interacting with websites that employ an accessibility overlay. Furthermore, interviews with two accessibility consultants and two accessibility overlay company representatives provided supplementary information to the discussion.
To say that accessibility overlays are controversial in the accessibility industry is to put it mildly. Overlays underwhelm, as Adrian Roselli put it in a talk here on Conffab, available with a free membership.
This master's thesis on the topic recently caught our eye.
So, what did it conclude? Well, you'll have to read the conclusion to find out.
When Figma Design launched in 2015, most rich design tools were still native desktop apps. Betting on WebGL—a browser graphics API originally designed for 3D applications—was a bold move. WebGL wasn’t widely used for complex 2D applications at the time, but Figma’s team saw its potential to power a smooth, infinite canvas in the browser. That early bet on WebGL set the foundation for Figma’s performance and real-time collaboration capabilities.
In 2023, Chromium shipped support for WebGPU, the successor to WebGL. It allows for new rendering optimizations not possible in WebGL—for instance, compute shaders that can be used to move work off the CPU and onto the GPU to take advantage of its parallel processing. By supporting WebGPU, we could also avoid WebGL’s bug-prone global state, and benefit from much more performant and clear error-handling.
Figma is a web-native product and company. Well over a decade ago, they made the bet that rather than being native apps, design tools should, like many other apps, live in the browser, live natively on the web.
The technology that enabled building such a sophisticated, graphically rich interactive application in the browser at the time was WebGL.
More recently, we've seen the release of WebGPU, a way in which web applications written in JavaScript can access the GPU of a device.
These days, we associate GPUs, of course, with AI. With the training of and inference on large language models.
But the G stands for graphics, and for decades GPUs have accelerated the graphical performance of computers.
WebGPU gives developers access to the GPU on their device directly and dramatically speeds up the performance of many kinds of calculation.
Here, the engineering team behind Figma's new WebGPU-powered rendering talks about the challenges and techniques involved.
Generative AI in the Real World: Context Engineering with Drew Breunig – O’Reilly
In this episode, Ben Lorica and Drew Breunig, a strategist at the Overture Maps Foundation, talk all things context engineering: what’s working, where things are breaking down, and what comes next. Listen in to hear why huge context windows aren’t solving the problems we hoped they might, why companies shouldn’t discount evals and testing, and why we’re doing the field a disservice by leaning into marketing and buzzwords rather than trying to leverage what the current crop of LLMs is actually capable of.
We've referred to Drew Breunig's work a few times here, on Elsewhere. Here is an in-depth conversation he recently recorded on the topic of context engineering.
The Great Software Quality Collapse: How We Normalized Catastrophe
We’re living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems.
This isn’t sustainable. Physics doesn’t negotiate. Energy is finite. Hardware has limits.
The companies that survive won’t be those who can outspend the crisis.
Denis Stetskov thinks we're in the middle of what you might consider a new software crisis, not created by, but perhaps amplified by, large language models.
He observes how wasteful of memory and computing resources modern software applications tend to be. He's also concerned, particularly given large language models, about how we get new developers when the roles we traditionally gave junior developers are being done by AI code generation tools.
Definitely worth a read and some consideration of the points he is making.
Having been around the block a few times, I don't think there's anything particularly new about wasting computing resources. We all know about Moore's law, but there is also the ironic Wirth's law (also known as Gates's law), which observes that "software is getting slower more rapidly than hardware is becoming faster."
But I think his points are well made and, as mentioned, well worth thinking about.
Dev Tools – Your Ultimate Developer Toolkit | Free Online Tools
A few days ago, we linked to everywhere.tools, a collection of open source tools for designers.
In a similar vein, here we have a collection of developer tools, all free and open source. No surveillance, no advertising. From converting JSON to CSV (or the other way around) to counting tokens, converting case, and many, many more. Highly recommended; add this to your speed dial.
Imagine a responsive hero image that becomes more transparent as the viewport gets narrower, helping text readability on small screens, or a card that scales up slightly as the viewport grows, adding a subtle polish.
Until now, achieving these effects required complex calc() expressions or JavaScript, often leading to brittle solutions. But with the new CSS progress() function, you can express these relationships cleanly and intuitively.
With the new CSS progress() function, you can express “how far” the viewport width has moved between a minimum and maximum size, and map that directly to opacity.
progress() is a new but relatively well-supported CSS function that can be used with progressive enhancement. Learn what it is, how it works, and how to fall back for older browsers, from Amit Merchant.
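Here's a sketch of the hero-image case (selector and breakpoints are illustrative, and progress() takes a value followed by its start and end bounds, returning a number between 0 and 1):

```css
/* Fully opaque at 400px-wide viewports, fading to 50% by 1200px.
   The @supports guard keeps older browsers on the fallback value. */
.hero {
  opacity: 1; /* fallback where progress() is unsupported */
}

@supports (opacity: calc(1 - 0.5 * progress(100vw, 400px, 1200px))) {
  .hero {
    opacity: calc(1 - 0.5 * progress(100vw, 400px, 1200px));
  }
}
```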
I have to admit that I got a little over-excited when I read that the contrast-color() function is supported in Safari and was keen to try it out. But there’s a problem; the thing with contrast-color is…
Given a certain color value, contrast-color() returns either white or black, whichever produces a sharper contrast with that color. So, if we were to provide coral as the color value for a background, we can let the browser decide whether the text color is more contrasted with the background as either white or black.
So yeah, define a background colour and the browser will choose either black or white to contrast it with:
For my website design, I chose a dark blue background colour (#212E45) and light text (#d3d5da). This colour is off-white to soften the contrast between background and foreground colours, while maintaining a decent level for accessibility considerations.
We've covered contrast-color here recently: a relatively new CSS function that allows us to straightforwardly ensure contrast between colour values.
But it is relatively limited, as Andy Clarke observes in this article: "define a background colour and the browser will choose either black or white to contrast it with".
Here Andy explores the function and how we can work with it despite this apparent limitation.
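A minimal example of the behaviour Andy describes (the selector and colour values are illustrative):

```css
/* The browser returns either black or white, whichever contrasts
   more strongly with the coral background. Browsers without
   contrast-color() keep the explicit fallback declaration. */
.badge {
  background-color: coral;
  color: black; /* fallback where contrast-color() is unsupported */
  color: contrast-color(coral);
}
```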
Designing Amiable Web Spaces: Lessons from Vienna’s Café Culture
Today’s web is not always an amiable place. Sites greet you with a popover that demands assent to their cookie policy, and leave you with Taboola ads promising “One Weird Trick!” to cure your ailments. Social media sites are tuned for engagement, and few things are more engaging than a fight. Today it seems that people want to quarrel; I have seen flame wars among birders.
These tensions are often at odds with a site’s goals. If we are providing support and advice to customers, we don’t want those customers to wrangle with each other. If we offer news about the latest research, we want readers to feel at ease; if we promote upcoming marches, we want our core supporters to feel comfortable and we want curious newcomers to feel welcome.
In a study for a conference on the History of the Web, I looked to the origins of Computer Science in Vienna (1928-1934) for a case study of the importance of amiability in a research community and the disastrous consequences of its loss. That story has interesting implications for web environments that promote amiable interaction among disparate, difficult (and sometimes disagreeable) people.
I'm a sucker for drawing lessons from historical periods, particularly those that have been overlooked or somewhat forgotten.
I'm also a bit of a sucker for sitting around in cafes and having pleasant conversations.
This article on A List Apart, which draws lessons from Vienna's cafe culture of the interwar years (the late 1920s and early 1930s), really caught my attention.
It's often observed that things that make folks angry are the things that get more clicks, and in a click-driven, attention-scarce, advertising world, it's no surprise we often get content and experiences that one wouldn't characterise as amiable.
And yet, do people really want to live in a world driven by anxiety, conflict, contradiction, and argument?
Take a moment to draw some lessons from a more civilised time and place, one that, ironically, ended soon afterwards in one of the worst episodes of human history.
Just Talk To It – the no-bs Way of Agentic Engineering
I’ve been more quiet here lately as I’m knee-deep working on my latest project. Agentic engineering has become so good that it now writes pretty much 100% of my code. And yet I see so many folks trying to solve issues and generating these elaborated charades instead of getting sh*t done.
When I first started developing software in any serious way, at university in the mid-1980s, waterfall was only then starting to become a standardised practice.
Before then, particularly at the scale of architecture, there really weren't any established practices to speak of.
We've since seen the Agile revolution, but it seems clear that we're now in the midst of a similarly revolutionary transformation in how we develop both large-scale and small-scale software products.
We've published a few of these accounts so far where experienced developers talk about how they are working with large language models as software engineers.
It's fascinating to see how different engineers work with these technologies, what they are learning, how their process is iterating and evolving, and how it compares with other very experienced developers.
Are we evolving towards the one true way we should work with these models? Or do different use cases require different processes? Or do different models work better with different approaches?
None of that is clear, but it's definitely interesting to get these sorts of inputs into our own decision-making about how we should be working with the models.
I create and sell online design courses for many folks in tech – designers, aspiring designers, design-adjacent – and one question I’ve gotten a lot over the last two and a half years is: what is AI going to do to design?
I’ve been hesitant to answer too confidently, since things have been moving very quickly, and – as a teacher of design – I fall straight into a trap put so memorably by Upton Sinclair a century ago:
It is difficult to get a man to understand something, when his salary depends on his not understanding it. ― Upton Sinclair
That being said, if the robots are taking all the design jobs any time soon, I also would like to find my next act. So you can take this article with a grain of salt, and feel free to push back in the comments, on X, or via email 🙂

This post attempts to answer two questions:

1. Will AI take design jobs? If so, which ones?
2. In light of that, what should designers focus on?
If AI is radically transforming fields such as design, where is the actual impact and benefit? After nearly three years of the ChatGPT era, Erik Kennedy asks: where is the AI design renaissance?
It's a really thoughtful essay that I highly recommend. It starts from a place of humility, which I think is a very good place to start when thinking about these technologies.
Default Isn’t Design. Why familiar feels right but often…
Framework monoculture is a psychology problem as much as a tech problem. When one approach becomes “how things are done,” we unconsciously defend it even when standards would give us a healthier, more interoperable ecosystem. Psychologists call this reflex System Justification. Naming it helps us steer toward a standards-first future without turning the discussion into a framework war.
Frameworks, and above all React and its ecosystem, have become the most widely adopted approach to developing modern web applications.
But as Rob Eisenberg observes here, there are significant risks when default approaches become so entrenched.
This is a detailed essay I recommend to anyone thinking about front-end architecture. It will help you question many of the likely unexamined assumptions we make about how we build web applications. Whatever implications it ultimately has, it will ensure that the choices we make are more informed and more conscious, and not simply the adoption of defaults.
Annotating designs using common language – TetraLogical
In most organisations, design documentation often includes annotations, but accessibility-specific ones are still rare. That’s a missed opportunity. Annotating designs for accessibility helps everyone involved understand what needs to be built, tested, and maintained.
Using a common language that designers, developers, and quality assurance (QA) teams all understand is essential so that information doesn’t get lost in translation, especially during the hand-off between design and development.
AI is not born neutral. The rules that govern how it behaves are set by its authors: corporations and governments embedding their own assumptions, priorities, and cultural defaults into code. Those rules travel with the system. If Australia adopts AI built to foreign specifications, we also import those embedded choices — about privacy, bias, autonomy, and control. We can avoid this dependency only by setting our own AI development guidelines, explicitly crafted to preserve sovereignty. That means writing the governance rules here, not just buying in systems and pretending we can retrofit “Australian values” later. The strategic choice is not whether to use AI, but whether to use someone else’s rules or our own.
Will large language models, particularly the largest frontier models, be restricted to a handful of the largest, best-resourced labs in the world, closely associated with the largest, most powerful nations in the world, like the United States and China?
Or should other nations like Australia invest in their own sovereign models?
Here Ian Gribble explores the implications for a nation like Australia of the increasing importance of AI and large language models in the modern world.
For Blogtober, I dug up a draft about the two CSS pseudo-class functions :is() and :where() that I’d had lying around in my drafts folder for quite some time. Actually, when I originally started writing this post, :is() and :where() had just landed in CSS, and — just like with so many other new CSS features — I was expecting them to “change the way we write CSS.” Both are now widely available baseline features supported by all modern browsers.
Matthias Ott looks at two relatively new CSS pseudo-class functions that are now widely supported, how they are being used in the wild, and how you might best take advantage of them.
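As a quick refresher on the two selectors (the rules below are my own illustrative examples, not taken from the post), both group a list of selectors into one, but they differ in specificity: :is() takes the specificity of its most specific argument, while :where() always has zero specificity.

```css
/* Matches h1, h2, and h3 inside article or aside without
   writing out all six selector combinations. :is() carries
   the specificity of its most specific argument. */
:is(article, aside) :is(h1, h2, h3) {
  margin-block: 1rem;
}

/* :where() contributes zero specificity, so later rules can
   override this without any specificity battles. */
:where(article, aside) a {
  color: inherit;
}
```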
Big O notation is a way of describing the performance of a function without using time. Rather than timing a function from start to finish, big O describes how the time grows as the input size increases. It is used to help understand how programs will perform across a range of inputs.

In this post I’m going to cover 4 frequently-used categories of big O notation: constant, logarithmic, linear, and quadratic. Don’t worry if these words mean nothing to you right now. I’m going to talk about them in detail, as well as visualise them, throughout this post.
I remember first being introduced to Big O notation most likely in my first year of computer science, decades ago now.
As someone who'd fiddled around with programming in languages like BASIC, it had never occurred to me that programming was a kind of mathematical science, something we could reason about in much the same way we reason about mathematics.
Big O notation is a way of thinking about the performance of an algorithm in relative terms.
This is a great, dynamic, and engaging exploration of the topic.
Core Web Vitals measure user experience by assessing a website’s performance. This write-up is a history of how Core Web Vitals came to be, based on my recollections from our work on it at Google from 2014 onwards. The initiative saved Chrome users over 30,000 years of waiting, with businesses seeing revenue and user-engagement lifts through optimizing for it, inclusive of 2025.
Years ago, the web development community discussed and declared on Twitter that HTML was not just a document language, but a programming language.

Meanwhile, virtually no HTML page is error-free. In HTML parlance, almost no page is conformant, compliant, valid, or “validates.” (Cf. validation data from 2021 through 2025.)
It's a relatively simple syllogism:
1. HTML is a programming language (as many believe, certainly I do).
2. Essentially, no HTML document is without errors.
3. Therefore, HTML is the most difficult programming language in the world.
Honestly, I feel like web developers are constantly being gaslit into thinking that complex over-engineered solutions are the only option. When the discourse is being dominated by people invested in frameworks and libraries, all our default thinking will involve frameworks and libraries. That’s not good for users, and I don’t think it’s good for us either.

Of course, the trick is knowing that the simpler solution exists. The information probably isn’t going to fall in your lap—especially when the discourse is dominated by overly-complex JavaScript.
As I observed in a recent essay, we've spent the last 15 to 20 years in web development building ever more complex and complicated approaches to developing for the web.
But what we often overlook is how far the web platform has come, and continues to evolve.
This lets us embrace simplicity, one of the principles I articulated in my recent Dao of CSS presentation at CSS Day (which you can watch free, with no login required), as Jeremy Keith observes here.
Jeremy looks at carousels, something that has traditionally been implemented in very complex and complicated ways, and shows how they can be built with just a few lines of CSS.
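The general approach is CSS scroll snapping. As a rough sketch (my own illustrative class names, not necessarily Jeremy's exact code):

```css
/* A minimal CSS-only carousel using scroll snapping.
   The .carousel class name is illustrative. */
.carousel {
  display: flex;
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}
/* Each child becomes a full-width slide that snaps
   into place as the user scrolls. */
.carousel > * {
  flex: 0 0 100%;
  scroll-snap-align: center;
}
```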
Time to re-implement the video carousels on our front page, I think.