My Vision for AI and the Web

September 16, 2025

AI will not all run locally yet. Large models still need server-side power. But smaller, specialized models can run on your device today, privately and securely. That is the balance I believe in: local AI for lightweight, privacy-first features, and server AI only when the workload is too heavy. On top of that, privacy on the server side is improving quickly thanks to technologies like GPU enclaves, for example Nvidia’s Confidential Compute. As devices get stronger and these safeguards mature, more intelligence can shift back to the user’s side with stronger guarantees at every step.

Source: My Vision for AI and the Web | Tarek Ziadé

Many people are far from optimistic about the impact of large language models on the web.
Firstly, for better or worse, the business model of the web for the last two decades or more has been advertising, funded by traffic driven to publishers by search.

But large language models are increasingly replacing search engines, and even Google now provides answers directly on its results pages. It seems this traffic, the lifeblood of independent content producers, may well be drying up.

Meanwhile, what is the role of the browser in all this?

Chrome and Firefox have been building some of the functionality we get from large language model providers like OpenAI and Anthropic directly into the browser. These features can help address some of the concerns folks have, in particular around privacy and, to some extent, energy consumption.

Here Tarek Ziadé, who works at Mozilla, explores some ideas about the role browsers might play in this era of large language models.