llm weights vs the papercuts of corporate
December 9, 2025

We are now one year into a new category of company being founded where the majority of the software behind the company was code-generated. From here on out I'm going to refer to these companies as model weight first. This category can be defined as any company that builds with the data (the "grain") that has been baked into large language models.
Model weight first companies do not require as much context engineering. They're not stuffing the context window with rules, trying to override and reshape the base models to fit a pre-existing corporate standard and conceptualisation of how software should be built.
My instinct, and this may prove to be a seminal observation, is that we're at a fork in how we work with large language models as software engineers. As Jeff Huntley observes here, one approach is to bend the models to our approach to software engineering. That's largely what we've been doing for the last three years, whether it's begging them to output JSON or filling their context with Agents.md files. But I think Jeff was really onto something with his observation that there's a different approach: go with the flow of how an LLM wants to work rather than working against its instincts.
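To make the contrast concrete, here's a minimal sketch of the two prompting styles. Everything in it is illustrative: the rule text and function names are hypothetical, not drawn from any particular tool or company.

```python
def bend_the_model_prompt(task: str, rules: list[str]) -> str:
    """The 'bend the model' style: stuff the context window with
    corporate conventions the model must follow (Agents.md-style rules)."""
    rule_block = "\n".join(f"- {r}" for r in rules)
    return (
        "You MUST follow these rules exactly:\n"
        f"{rule_block}\n"
        "Respond ONLY in JSON.\n\n"
        f"Task: {task}"
    )

def go_with_the_grain_prompt(task: str) -> str:
    """The 'model weight first' style: a bare task, trusting whatever
    conventions are already baked into the weights."""
    return f"Task: {task}"

# Hypothetical corporate rules of the kind an Agents.md file might carry.
rules = [
    "Use our in-house ORM, never raw SQL",
    "All functions must have Javadoc-style comments",
]
stuffed = bend_the_model_prompt("add a login endpoint", rules)
bare = go_with_the_grain_prompt("add a login endpoint")
```

The point of the sketch is just the overhead: every rule in the first prompt is a token spent fighting the weights, while the second prompt spends nothing and inherits the model's own defaults.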
This brought to mind a great interview with Brett Taylor on Latent Space some months ago, where he talked about the AI architect and how the role of software engineers will increasingly be less about writing the code and more about guiding the outcomes.