
#AltAtSource —
A campaign to fix the biggest gap in accessibility

A split-screen image. The left side shows a clear, vibrant photo of a flower market. The right side shows the same scene but heavily pixelated and obscured in grayscale. Centred over the split is a yellow banner with the hashtag #AltAtSource.

I never thought I would have to write the words, ‘I have a manifesto’. It feels a little bold, perhaps even a touch presumptuous. But sometimes you stumble upon a gap in the world so wide, and yet so easily bridged, that you realise you have to speak up.

The digital world has a ‘last-mile’ problem that we’ve simply learned to live with.

A disconnect we can no longer ignore

According to the 2026 WebAIM Million report, a staggering 95.9% of the world’s top one million homepages fail basic accessibility standards. For seven years running, one of the primary reasons for this failure has remained the same: missing alternative text for images.

We are leaving millions of users in the dark. Not because we do not care, but because our current workflow is broken. We rely on overstretched content creators and busy small business owners to manually add ALT text at the very end of a long publishing process. Because that final step adds so much friction, it is routinely skipped or overlooked.

I believe we can fix this by moving accessibility from the end of the journey to the very beginning. This is the #AltAtSource campaign.

Why our current tools are failing us

When you take a photo on your smartphone, your device already sees what is in the frame. Apple, Google, and Samsung use sophisticated on-device AI to tag your photos so you can search your gallery for ‘dog’ or ‘birthday cake’.

However, that intelligence is siloed. When you upload that photo to the web, the description stays behind on your phone. The person publishing the image is then forced to manually recreate that description — effectively doing the work a second time.

A simpler way to connect the dots

The #AltAtSource campaign proposes a simple, three-step shift in the digital supply chain:

  1. Mobile vendors: Mobile operating systems should use their existing AI to generate a draft description at the moment of capture. This should be a native feature where the user can verify or edit the text, which is then embedded directly into the image file’s metadata using the IPTC Photo Metadata Standard’s Alt Text (Accessibility) property.
  2. Image libraries: Archives like Alamy and Getty should expand their metadata capabilities (if they haven’t already done so). While many of these platforms are excellent at maintaining image captions, a caption is often an editorial credit rather than a functional description for a screen reader. These hubs should ensure that dedicated, descriptive ALT text is baked into every file they deliver.
  3. Publishing platforms: It is my current understanding that the majority of CMS platforms do not read embedded metadata natively. I want that support to become a universal standard. Instead of a manual chore, the ALT field should auto-populate the moment an image is uploaded, pulling directly from the file’s own data (see the sketch after this list).
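
To make step 3 concrete, here is a minimal sketch of the kind of upload hook a CMS could run. It assumes a Python back end with ExifTool available on the server, and that the uploaded file carries the IPTC ‘Alt Text (Accessibility)’ property in its XMP (ExifTool exposes this as XMP-iptcCore:AltTextAccessibility in recent versions); the function names are illustrative, not any particular platform’s API.

```python
import json
import subprocess


def embedded_alt_text(image_path: str) -> str | None:
    """Return the IPTC accessibility alt text embedded in the file, if any."""
    # ExifTool exposes the IPTC Photo Metadata 'Alt Text (Accessibility)' property
    # via the XMP-iptcCore group (tag name assumed; check your ExifTool version).
    result = subprocess.run(
        ["exiftool", "-j", "-XMP-iptcCore:AltTextAccessibility", image_path],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)[0]         # exiftool -j returns one dict per file
    return data.get("AltTextAccessibility")     # None if nothing is embedded


def prefill_alt_field(image_path: str) -> str:
    """Value a CMS could drop into the alt-text input when an image is uploaded."""
    return embedded_alt_text(image_path) or ""  # empty string keeps the author prompted


if __name__ == "__main__":
    print(prefill_alt_field("flower-market.jpg"))
```

The important design choice is the fallback: if nothing is embedded, the field stays empty and the author is still prompted, so the automation never silently publishes an image without a description.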

The case for imperfect AI

Some may argue that AI-generated descriptions are not yet perfect. However, even an AI description that is 90% accurate is a vast improvement over a screen reader announcing a cryptic file name like ‘DSC0021.jpg’. Every single time.

Furthermore, relying on every individual human author to know exactly how to write perfect ALT text is a losing battle. Human-generated descriptions are prone to enormous variation, errors, and fatigue. By using AI as the starting point, we provide a consistent, baseline level of accessibility that a human can then quickly moderate or validate before it goes live. AI is a tool that can finally help us achieve digital inclusion at scale.

A matter of global compliance

This isn’t just about ‘best practice’ anymore; it’s about the law. New statutory requirements — including ADA Title II in the US, the European Accessibility Act (EAA), and India’s IS 17802 — now mandate that digital products be usable by everyone.

Currently, only professionals with specialised third-party software have the tools to embed this data. I am pushing for this to become a default hardware feature for everyone. If this happens automatically at the source, we can fundamentally improve the inclusivity of the web.
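
For anyone curious what that embedding looks like today, here is a rough sketch of the write side under the same assumptions as the earlier example (ExifTool on the command line; the tag name may vary with your version). At the moment this kind of step tends to live in specialist tooling; the point of the campaign is that a phone camera app or image library should do it for you, with the photographer simply confirming the wording.

```python
import subprocess


def embed_alt_text(image_path: str, description: str) -> None:
    """Write a human-verified description into the image's XMP so it travels with the file."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcCore:AltTextAccessibility={description}",
            "-overwrite_original",  # avoid leaving *_original backup copies behind
            image_path,
        ],
        check=True,
    )


embed_alt_text(
    "flower-market.jpg",
    "A busy flower market stall with buckets of tulips and sunflowers in the morning sun.",
)
```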

Join the movement

I am just one person with a big idea, and I’ll admit, I often feel out of my depth navigating the complexities of global metadata standards. But I’ve seen that the technology to solve this already exists — we just haven’t connected the dots yet.

Accessibility is often an afterthought, so we need to ensure it is baked in from the moment we press the shutter button or download an image from a library. This one change could have a major impact.

I will be documenting this journey on my website and via LinkedIn. If you believe the web should be accessible by design, I need your help to close the loop:

  • Follow the journey: I post updates and progress on LinkedIn at https://www.linkedin.com/in/simonpleadbetter.
  • Spread the word: Use the hashtag #AltAtSource in your own posts to highlight the need for metadata standards.
  • Take action: Contact image libraries, mobile OS vendors, CMS platforms, social media platforms and any other tech providers you use. Ask them when they plan to support embedded accessibility metadata.

Let’s make #AltAtSource the new standard for a web that works for everyone.

Related information

  • An open letter to mobile vendors: Accessibility starts at the shutter, not the CMS.
  • An open letter to image libraries: Ensuring the world’s visual assets are born with a voice.
  • An open letter to CMS vendors: Bridging the gap between the image file and the authoring UI.
  • An open letter to the social media industry: Turning the world’s visual town square into an inclusive one.

Article by Simon Leadbetter

The Accessibility Guy at Kindera