elements | AI Focus
July 28, 2025

As much as I struggle with on-device processing and the quality of its output compared to server models, I am excited by some of the APIs that are being built into browsers that are backed by LLMs and other AI inference models.
For example, the prompt API, along with a multi-modal version that can take any arbitrary combination of text, image, and audio and run prompts against them. These APIs are neat but not yet web-exposed and many developers struggle to know what to do with a generic prompt. It’s not a solution that is natural to many people yet.
Source: elements | AI Focus
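As a rough illustration of what prompting against a built-in browser model might look like, here is a hedged sketch. It assumes a `LanguageModel`-style global in the shape of Chrome's experimental Prompt API (available only behind flags or origin trials, and the surface has changed across releases, so treat the names as assumptions). The helper takes the model object as a parameter so the calling code, not the sketch, decides where it comes from.

```javascript
// Sketch only: assumes a Prompt-API-like object with an async create()
// that returns a session exposing an async prompt(). Names and shape
// follow recent explainers and may differ in any given browser build.
async function summarize(model, text) {
  // Create a fresh session against the on-device model.
  const session = await model.create();
  // Run a plain-text prompt and wait for the full response string.
  const answer = await session.prompt(`Summarize in one sentence: ${text}`);
  // Release the session's resources if the implementation supports it.
  session.destroy?.();
  return answer;
}

// In a supporting browser, usage might look like:
// if ('LanguageModel' in self) {
//   const summary = await summarize(LanguageModel, articleText);
// }
```

Feature detection matters here: because these APIs are not yet web-exposed everywhere, any real page would need a fallback (or a server model) when the global is absent.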
We’ve already covered on-device AI inference quite a bit at our events, and we share Paul Kinlan’s sense that this is going to be an increasingly important technology for developers to take advantage of (and one which helps address several of the most significant concerns people have about LLMs, including energy use and privacy).
Paul shares some examples of the kinds of uses this might be put to, and inspiration for you to explore further.