An open letter to mobile vendors
Accessibility starts at the shutter, not the CMS.
I never thought I would have to write the words, ‘I have a manifesto’. It feels a little bold, perhaps even a touch presumptuous. But sometimes you stumble upon a gap in the world so wide, and yet so easily bridged, that you realise you have to speak up.
The digital world has a ‘last-mile’ problem that we’ve simply learned to live with.
According to the 2026 WebAIM Million report, a staggering 95.9% of the world’s top one million homepages fail basic accessibility standards. For seven years running, one of the primary reasons for this failure has remained the same: missing alternative text for images.
We are currently leaving millions of users in the dark. Not because we do not care, but because our current workflow is broken. We rely on overstretched content creators and busy small business owners to manually add ALT text at the very end of a long publishing process. Because this creates a high degree of friction, the requirement is typically ignored or overlooked.
I believe we can fix this by moving accessibility from the end of the journey to the very beginning. This is the #AltAtSource campaign.
When you take a photo on your smartphone, your device already sees what is in the frame. Apple, Google, and Samsung use sophisticated on-device AI to tag your photos so you can search your gallery for ‘dog’ or ‘birthday cake’.
However, that intelligence is siloed. When you upload that photo to the web, the description stays behind on your phone. The person publishing the image is then forced to manually recreate that description — effectively doing the work a second time.
The #AltAtSource campaign proposes a simple, three-step shift in the digital supply chain:

1. Generate: the on-device AI your phone already runs drafts a description at the moment the photo is captured.
2. Embed: that description travels with the file, stored in the image’s own metadata using the IPTC Alt Text (Accessibility) standard.
3. Surface: when the image is uploaded, the CMS or authoring tool reads the embedded description and offers it as the default ALT text.

Some may argue that AI-generated descriptions are not yet perfect. However, even an AI description that is 90% accurate is a vast improvement over a screen reader announcing a cryptic file name like ‘DSC0021.jpg’. Every single time.
Furthermore, relying on every individual human author to know exactly how to write perfect ALT text is a losing battle. Human-generated descriptions are prone to enormous variation, errors, and fatigue. By using AI as the starting point, we provide a consistent, baseline level of accessibility that a human can then quickly moderate or validate before it goes live. AI is a tool that can finally help us achieve digital inclusion at scale.
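To make the workflow concrete, here is a minimal sketch of how an upload handler could treat an embedded, AI-generated description as the baseline a human then validates. Everything here is an assumption for illustration: the helper name `choose_alt_text`, the filename heuristic, and the idea of a `needs_human_review` flag are not any real platform’s API.

```python
import re

# Heuristic for camera-generated names a screen reader would
# otherwise announce verbatim (DSC0021, IMG_1234, and so on).
FILENAME_LIKE = re.compile(r"^(dsc|img|image|photo)[\s_-]?\d+", re.IGNORECASE)

def choose_alt_text(embedded_alt, filename):
    """Return (alt_text, needs_human_review).

    Prefer a description embedded at the source (e.g. written into the
    image's metadata by the capture device) over the file name, which a
    screen reader would read out as something like 'DSC0021.jpg'.
    """
    if embedded_alt and embedded_alt.strip() and not FILENAME_LIKE.match(embedded_alt.strip()):
        # An AI-generated baseline exists: offer it for a quick human check.
        return embedded_alt.strip(), False
    # Nothing usable was embedded: hold publication until a human writes one.
    return "", True

# A photo described on-device versus a bare upload:
print(choose_alt_text("A golden retriever catching a frisbee", "DSC0021.jpg"))
print(choose_alt_text(None, "DSC0021.jpg"))
```

The point of the flag is the moderation step argued for above: the embedded description is a consistent starting point for a human to approve or edit, not a final answer published sight unseen.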
This isn’t just about ‘best practice’ anymore; it’s about the law. New statutory requirements — including ADA Title II in the US, the European Accessibility Act (EAA), and India’s IS 17802 — now mandate that digital products be usable by everyone.
Currently, only professionals with specialised third-party software have the tools to embed this data. I am pushing for this to become a default hardware feature for everyone. If this happens automatically at the source, we can fundamentally improve the inclusivity of the web.
I am just one person with a big idea, and I’ll admit, I often feel out of my depth navigating the complexities of global metadata standards. But I’ve seen that the technology to solve this already exists — we just haven’t connected the dots yet.
Accessibility is often an afterthought, so we need to ensure it is baked in from the moment we press the shutter button or download an image from a library. This one change could have a major impact.
I will be documenting this journey on my website and via LinkedIn. If you believe the web should be accessible by design, I need your help to close the loop:
Let’s make #AltAtSource the new standard for a web that works for everyone.
Ensuring the world’s visual assets are born with a voice.
Bridging the gap between the image file and the authoring UI.
Turning the world’s visual town square into an inclusive one.
Article by Simon Leadbetter
The Accessibility Guy at Kindera