Automated exclusion: The crisis of inaccessible AI for persons with disabilities

In the global rush to regulate Artificial Intelligence, a dangerous consensus seems to have formed: that the real drama of AI lies in the future – in existential risks, deepfakes, or algorithmic bias. But for persons with disabilities, the crisis of AI is already here, and it is older than the technology itself. It is the story of being told a system is efficient and modern, only to find the door locked from the inside. We are witnessing the birth of automated exclusion, in which the biases of the past are hard-coded into the digital infrastructure of the future.

The true frontline of digital rights is not found in high-level white papers, but in the mundane brutality of the ‘unlabeled button’, the visual CAPTCHA, and the broken screen-reader workflow. Before AI can empower, the surrounding digital environment must allow us to enter, navigate, and verify. When a government service introduces AI-assisted identity verification that relies solely on visual cues, security becomes a forced transfer of control: a blind user must hand the process over to a sighted assistant. This is not a technical glitch; it is a direct assault on privacy and equal citizenship, forcing us into a state of state-mandated dependence for the most intimate of transactions.

My journey through the CADE Capacity Development Programme, as I transition from internet governance into AI Policy and Practice for CSOs, has only sharpened this conviction. AI governance cannot be reduced to the behaviour of a model; it is a question of whether the systems through which AI is taught, accessed, and deployed are themselves accessible. As I have reflected within the IGF Dynamic Coalition on Accessibility and Disability (DCAD), accessibility is not charity – it is legitimacy. If the participatory layer of a digital system is inaccessible, then that governance is unequal by design. Whether it is a CADE learning module or a national policy portal, when the main content is readable but the participatory layer – the annotations, the menus, the engagement tools – is fixed and inaccessible, the message is clear: our presence is tolerated, but our participation is not.

This crisis of exclusion is increasingly hidden inside ordinary product decisions. Consider the push for generative AI chatbots in advocacy work. A chatbot may be fluent and helpful, but if its outputs are a structural nightmare for assistive technology, or if it fails to clearly signal its sources, it creates a black box of epistemic exclusion. For those of us in the Global South, where datasets are often narrow and biased, the inability to independently interrogate or verify an AI’s output is a denial of agency. Trustworthiness in AI cannot be a marketing slogan; it must be a functional reality where a user can navigate, question, and contest the machine’s output with dignity and independence.

This is why lived experience must be treated as a central governance lens, not a peripheral footnote. Disabled users are the canaries in the coal mine for digital rights; we notice where policy abstractions fail long before the regulators do. We know what it means when a platform claims a feature is “available” while a timing limit or a broken focus order turns it into a barricade. These are not side issues; they are the mechanics of power in the digital age. Exclusion is now being exercised through procurement choices and default settings that quietly filter out those who are supposedly being included.

The global community must move beyond the charity model of inclusive design. Accessibility is a normative necessity and the ultimate democratic test for AI governance. If a regulatory framework cannot guarantee that the most marginalised can independently navigate the systems it governs, that framework lacks the moral authority to lead.

Lived experience does not weaken digital rights analysis; it provides the friction necessary to keep it grounded in reality. It reminds us that rights are not realised when they are declared in principle, but when systems can actually be used with autonomy. If disability remains an afterthought, AI will not be a breakthrough – it will be a sophisticated tool for reproducing exclusion at scale. The legitimacy of our digital future depends on whether we treat accessibility as a technical add-on or as the fundamental prerequisite for a just society.

About the author: Dr Muhammad Shabbir is a researcher, digital rights advocate, and leading voice at the intersection of disability rights and technology policy. Holding a PhD in International Relations, he also serves as Coordinator of the Internet Governance Forum Dynamic Coalition on Accessibility and Disability (IGF-DCAD) and bridges the gap between high-level policy and the lived realities of the disabled community. A blind professional himself, he is currently a participant in the CADE Capacity Development Programme, focusing on AI Policy and Practice for CSOs.

The views expressed in this blog are those of the author and do not necessarily reflect the views of the European Union.