AI open models have benefits. So why aren’t they more widely used? | MIT Sloan
February 2, 2026

A new paper co-authored by Frank Nagle, a research scientist at the MIT Initiative on the Digital Economy, found that users largely opt for closed, proprietary AI inference models, namely those from OpenAI, Anthropic, and Google. Those models account for nearly 80% of all AI tokens that are processed on OpenRouter, the leading AI inference platform. In comparison, less-expensive open models from the likes of Meta, DeepSeek, and Mistral account for only 20% of AI tokens processed. (A token is a unit of input or output to an AI model, roughly equivalent to one word in a prompt to an AI chatbot.)
Open models achieve about 90% of the performance of closed models when they are released, but they can quickly close that gap — and the price of running inference is 87% less on open models. Nagle and co-author Daniel Yue at the Georgia Institute of Technology found that optimal reallocation of demand from closed to open models could cut average overall spending by more than 70%, saving the global AI economy about $25 billion annually.
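To get a feel for the magnitude of those savings, here is a minimal back-of-the-envelope sketch. The 80/20 token split and the 87% price discount come from the article above; the token volume, the $10-per-million closed-model price, and the `blended_cost` helper are hypothetical placeholders, not figures from the paper.

```python
# Illustrative sketch only: the token volume and closed-model price below are
# hypothetical; the 80/20 split and 87% discount are the article's figures.

CLOSED_SHARE = 0.80    # ~80% of OpenRouter tokens run on closed models
OPEN_DISCOUNT = 0.87   # open-model inference is ~87% cheaper

def blended_cost(total_tokens, closed_price_per_mtok, closed_share=CLOSED_SHARE):
    """Total spend when `closed_share` of tokens run on closed models
    and the remainder on open models priced at the 87% discount."""
    open_price = closed_price_per_mtok * (1 - OPEN_DISCOUNT)
    closed_spend = total_tokens * closed_share * closed_price_per_mtok / 1e6
    open_spend = total_tokens * (1 - closed_share) * open_price / 1e6
    return closed_spend + open_spend

# Hypothetical workload: 1 billion tokens at $10 per million closed-model tokens.
current = blended_cost(1e9, 10.0)                    # today's 80/20 split
shifted = blended_cost(1e9, 10.0, closed_share=0.2)  # flip the split to 20/80
print(f"current ${current:,.0f}, shifted ${shifted:,.0f}, "
      f"savings {1 - shifted / current:.0%}")
```

Even this crude flip of the split cuts spend by well over half; the paper's >70% figure comes from a fuller reallocation model, not this toy calculation.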
I’ll admit to being guilty here too. I use multiple closed models extensively, but very little by way of open ones. It’s not a particularly thought-through choice; it’s simply what I’ve gotten used to in terms of API setups, workflows, and muscle memory. There are no network effects here, yet the big, expensive closed models keep getting the mind share and the user share.
There’s a sense that the Chinese frontier labs in particular, which often release open models, are more than nipping at the heels of the American frontier labs. So perhaps this is something we’ll see change over the coming months.