Glimpses of the Future: Speed & Swarms

October 24, 2025

[Image: abstract painting with vibrant, overlapping shapes and lines in red, green, yellow, black, and blue on a pale background, suggesting dynamic, humanoid forms in motion.]

Last month, I embarked on an AI-assisted code safari. I tried different applications (Claude Code, Codex, Cursor, Cline, Amp, etc.) and different models (Opus, GPT-5, Qwen Coder, Kimi K2, etc.), trying to get a better lay of the land. I find it useful to take these macro views occasionally, time-boxing them explicitly, to build a mental model of the domain and to prevent me from getting rabbit-holed by tool selection during project work.

The takeaway from this safari was that we are undervaluing speed. We talk constantly about model accuracy, their ability to reliably handle significant PRs, and their ability to solve bugs or dig themselves out of holes. Coupled with this conversation is the related discussion about what we do while an agent churns on a task. We sip coffee, catch up on our favorite shows, or make breakfast for our family, all while the agent chugs away. Others spin up more agents and attack multiple tasks at once, across a grid of terminal windows. Still others go full async, handing off GitHub issues to OpenAI’s Codex, which works in the cloud by itself… often for hours.

Source: Glimpses of the Future: Speed & Swarms

Along with Simon Willison, Jeff Huntley, and a handful of other folks, Drew Breunig is someone whose insights and shared experience of working with large language models as a software engineer I find very valuable.

Here Drew reports back on a month of intensive use of multiple models and applications.

The takeaway from this safari was that we are undervaluing speed

Drew examines the swarm pattern for working, particularly with slower models, and predicts what the future of working with large language models as a software engineer might look like.