AI is locking people out. At scale.
An important and sharply argued framing of AI-generated accessibility failures as a civil-rights problem, not just a quality-control issue.
This is not a minor bug trend. It is a systematic civil-rights failure that has now spread through software as a whole, via the lightning-fast adoption of AI systems trained on more than 20 years of code and content that embody institutional barriers.
The numbers here are bad enough on their own, but what makes this especially troubling is where these systems are being deployed: education, healthcare, finance, employment, and other places people can’t simply opt out of. That’s why I appreciate how direct this project is about accountability. If we automate interface generation without putting accessibility at the center, we’re automating exclusion.