
An open letter to mobile vendors

To the Product Teams at Apple, Google, and Samsung,

Smartphones have become the definitive authoring tools for the modern web. From independent retailers in Mumbai updating Shopify stores to public sector employees in London or Washington D.C. sharing vital information, the lifecycle of a digital image often begins within your native camera and photo applications. Yet there is a significant disconnect in this digital supply chain: whilst your platforms use advanced on-device AI to generate descriptive tags for internal search and organisation, that intelligence remains siloed. It is never translated into standardised image metadata that external platforms can read.

At present, accessibility data (ALT text) is almost always added at the final stage of the workflow, typically within a CMS such as WordPress or Shopify. This process relies on individual users manually replicating work that your AI has already performed. Because this creates so much friction, the requirement to add ALT text is routinely ignored or overlooked, and the vast majority of web images remain inaccessible to users with visual impairments. This is no longer merely a matter of user experience; it is an issue of global compliance. Digital accessibility is becoming a statutory obligation across your largest markets: the enforcement of ADA Title II in the United States, the European Accessibility Act (EAA) in the EU, and the RPWD Act alongside the IS 17802 standard in India all mandate that digital products and services be usable by persons with disabilities. The need for a streamlined solution has never been more pressing.

By trapping accessibility data within the device, the current workflow forces organisations globally to choose between manual inefficiency and the risk of non-compliance. Today, only professionals with access to specialised third-party software can embed this data into a file’s XMP or IPTC headers. We propose that this capability be democratised and made available to everyone at the point of capture. To bridge the gap, iOS and Android should implement a native ‘Alt at Source’ mechanism that writes AI-generated descriptions directly to the file using the industry-standard IPTC Alt Text (Accessibility) property (Iptc4xmpExt:AltTextAccessibility) and the XMP dc:description field.
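To make the proposal concrete, the sketch below shows roughly what an ‘Alt at Source’ write could produce: a single XMP packet carrying the description in both of the standard fields named above. The function name and example text are illustrative only, not any vendor’s actual API; the namespace URIs are those defined by the IPTC Photo Metadata Standard and Dublin Core.

```python
from xml.sax.saxutils import escape


def build_xmp_packet(alt_text: str) -> str:
    """Return an XMP packet embedding alt_text in both standard fields.

    Illustrative sketch only: a real implementation would merge this into
    the file's existing XMP rather than emitting a fresh packet.
    """
    text = escape(alt_text)
    return (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        '<rdf:Description '
        'xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/" '
        'xmlns:dc="http://purl.org/dc/elements/1.1/">'
        # IPTC Alt Text (Accessibility) property
        '<Iptc4xmpExt:AltTextAccessibility><rdf:Alt>'
        f'<rdf:li xml:lang="x-default">{text}</rdf:li>'
        '</rdf:Alt></Iptc4xmpExt:AltTextAccessibility>'
        # Dublin Core description, for broader tool compatibility
        '<dc:description><rdf:Alt>'
        f'<rdf:li xml:lang="x-default">{text}</rdf:li>'
        '</rdf:Alt></dc:description>'
        '</rdf:Description></rdf:RDF></x:xmpmeta>'
    )


packet = build_xmp_packet("A golden retriever catching a frisbee in a park")
print(packet)
```

Writing the description to both fields hedges against downstream tools that only read one of the two schemas.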

By providing a straightforward ‘Confirm Description’ option within the photo info panel, you would allow every user to verify or refine the AI’s text before it is embedded. If this happens automatically or with minimal user input at the source, we can dramatically improve the accessibility of the web. Ensuring this metadata remains attached to the file during transit would allow web platforms to auto-populate ALT fields instantly upon upload.

I am writing this not as a representative of a large corporation, but as an individual advocate who sees a significant gap in our digital infrastructure. I believe that by simply connecting the dots of existing technology, we can remove the friction that currently prevents a truly inclusive web. One person with a clear idea can highlight a path, but it requires the scale of your platforms to turn that path into a standard for everyone. I look forward to discussing how your metadata schemas can evolve to support a more accessible digital landscape.

Kind regards,

Simon Leadbetter, Founder at Kindera | #AltAtSource

