what is the point of libraries now that you can just generate them?
September 3, 2025

Instead of relying on a third-party library maintained by a developer in Nebraska, we code-generate the libraries/dependencies ourselves unless the dependency has network effects or is critical infrastructure. For example, you wouldn’t want to code-generate your crypto – trust me, I have, and the results are comical. Well, it works, but I wouldn’t trust it because I’m not a cryptographer. However, I’m sure a cryptographer with AI capabilities could generate something truly remarkable.

Projects like FFmpeg, Kubernetes, React, or PyTorch are good examples of something with network effects – something that I wouldn’t code-generate because it makes no sense to do so.
However, I want you to pause and consider the following: If something is common enough to require a trustworthy NPM package, then it is also well-represented in the training set, and you can generate it yourself.
Source: what is the point of libraries now that you can just generate them?
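To make that argument concrete, consider the kind of tiny utility that once justified pulling in an npm package. The sketch below is my own illustrative example (not from the essay): a debounce helper of the sort a model can now emit on demand instead of being installed as a dependency.

```typescript
// A generated stand-in for a tiny npm-style utility (illustrative sketch).
// debounce delays calling `fn` until `waitMs` ms have passed without
// another invocation, so bursts of events collapse into one call.
export function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: log at most once per 200 ms burst of resize events.
// window.addEventListener("resize", debounce(() => console.log("resized"), 200));
```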
Geoffrey Huntley, who has been thinking deeply about what large language models mean for the practice of software engineering, has recently published this essay exploring their impact on open source.
He speculates that our reliance on open source software, at least directly, may well diminish, because large language models have been trained on all of it.
But I wonder whether there is some very low level of abstraction, almost akin to an operating system or the APIs of the browser, at which we will continue to rely on open source projects.
Or will everything above the barest metal (whether that be DOM APIs, operating system APIs, and so on) become obsolete, simply because large language models can address those APIs directly, without the libraries and frameworks that we as humans have come to rely on?
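As a thought experiment, here is what writing against that barest metal might look like: a small interactive widget built straight on the DOM APIs, with no framework in between. This is my own illustrative sketch, not something from the essay.

```typescript
// A counter rendered directly against the DOM API, no React or other
// framework in between (illustrative sketch of working at "the barest metal").
function mountCounter(root: HTMLElement): void {
  let count = 0;
  const button = document.createElement("button");
  const label = document.createElement("span");

  const render = () => {
    label.textContent = ` clicks: ${count}`;
  };

  button.textContent = "Click me";
  button.addEventListener("click", () => {
    count += 1;
    render();
  });

  root.append(button, label);
  render();
}

// Usage in a browser page with a <main> element:
// mountCounter(document.querySelector("main")!);
```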