Screen readers do not need to be saved by AI
September 25, 2025

Imagine listening to your favourite podcast. You rewind it to go over something you missed, but each time you replay it, it’s somehow different. This sounds frustrating, right? But it’s likely this is what would happen if we just stuffed large language models into screen readers, in a lazy attempt to avoid having to publish accessible content.
Source: Screen readers do not need to be saved by AI – craigabbott.co.uk
One of the use cases we often hear about for AI is accessibility. After all, a lot of the images on the web do not even have the most basic alt text associated with them.
Now, as we have been quite open about, we use large language models extensively for this purpose: not only for the images associated with stories like this one, but in particular for the feature where we take the slides from presentations and turn them into structured, semantic HTML.
This is something we initially did by hand. We would transcribe all the text from the images, write descriptions of the images, graphs, charts, and other visual content, and then turn it all into structured HTML.
We slowly iterated on this, using large language models first for part of the process and then for pretty much all of it.
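For readers curious what that kind of pipeline looks like, here is a minimal sketch, assuming an OpenAI-compatible vision model; the model name, prompt wording, and helper function are illustrative, not a description of our actual tooling.

```python
# A minimal sketch of a slide-to-HTML pipeline like the one described above.
# The model choice and prompt are assumptions for illustration only.
import base64
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Transcribe all text in this presentation slide, describe any images, "
    "graphs, or charts, and return the result as semantic HTML "
    "(headings, lists, tables) with alt text for visual content."
)

def slide_to_html(image_path: str) -> str:
    """Convert one slide image into structured, accessible HTML."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

The crucial difference from the screen-reader scenario is that output like this is generated once, and a human reviews and edits it before anything is published.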
But it’s one thing when that content is fixed and can, as in our case, be edited by humans.
When it comes to screen readers generating this kind of description on the fly, it is a very different situation, as Craig Abbott discusses here.