{"version":"https://jsonfeed.org/version/1","title":"Aaron Gustafson: Content tagged AI/ML","description":"The latest 20 posts and links tagged AI/ML.","home_page_url":"https://www.aaron-gustafson.com","feed_url":"https://www.aaron-gustafson.com/feeds/ai-ml.json","author":{"name":"Aaron Gustafson","url":"https://www.aaron-gustafson.com"},"icon":"https://www.aaron-gustafson.com/i/og-logo.png","favicon":"https://www.aaron-gustafson.com/favicon.png","expired": false,"items":[{"id":"https://www.aaron-gustafson.com/publications/articles/opportunities-for-ai-in-accessibility/","title":"📄 Opportunities for AI in Accessibility","content_html":"
In an article, I discuss AI's potential for accessibility, emphasizing the need for responsible use and diverse teams to mitigate harm and promote inclusion for people with disabilities.
","url":"https://alistapart.com/article/opportunities-for-ai-in-accessibility/","tags":["accessibility","AI/ML"],"date_published":"2024-02-07T00:00:00Z"},{"id":"https://www.aaron-gustafson.com/notebook/dont-outsource-your-perspective-to-a-llm/","title":"✍🏻 Don't Outsource Your Perspective to an LLM","summary":"I'm seeing a lot of article pitches lately that were clearly written by a Large Language Model rather than a human being. There was clearly some thought put into the prompt, but no follow-through to really take ownership of the resulting output. Please don't be "that guy."","content_html":"At A List Apart, I'm seeing a lot of article pitches that were clearly written by a Large Language Model (LLM) rather than a human being. In many cases, the people making the submission clearly put a lot of thought into the prompts they used to get the output they desired, but had zero follow-through when it came to taking ownership of the output they were handed. To be clear, the issue I have here is not that they used an LLM as part of their process, but rather how they failed to wield such a powerful tool effectively.
A while back, I read a great piece from Johnathan May that discussed where LLMs fail us (factual knowledge) and where they excel (conversation). He highlighted how awful they are when it comes to retrieving facts (LLMs are not the reference librarian you think they are), but how useful they can be when factuality is not as important. Armed with this knowledge, he did some experimentation and discovered how well LLMs perform as an improv partner… someone to bounce ideas off of.
With that idea in mind, I decided to test this approach while working on my axe-con keynote last year. I already had an outline for the talk and knew what I wanted to say, but took some time to bounce my ideas off of ChatGPT and see how it responded to them. In most cases it responded with paragraphs of prose clearly demonstrating its lack of reasoning on any subject—not surprising, though: LLMs are word prediction engines. In other instances, however, it surprised me with how the words it predicted managed to "frame"1 particular concepts in novel ways, prompting me to think differently about how I might approach them in my talk.
I feel confident claiming that the talk was entirely a product of my own efforts. At the same time, I'm also appreciative of ChatGPT's role as an improv partner in the process of creating it.
I guess I'm hopeful more people will embrace using LLMs in this way. We need to recognize what tools like these are good at and what they aren't. Hammers are awesome at driving in nails, but they're only reasonably good at removing them—from wood, yes; from drywall, no. Hammers are also entirely the wrong tool if you're looking to drive or remove a screw.
LLMs are excellent at reworking stuff you've written—summarizing content, adjusting it to a particular reading level, helping you craft a suitable conclusion for an article or talk based on your existing content. They are not fact machines. Furthermore, they can't replicate or replace your unique perspective on a subject, and that's the very humanness that makes your writings, talks, and other forms of communication worthy of our time.
If you find LLMs useful in your process, more power to you. If you don't, that's cool too. Just be aware of their limitations while embracing their power to help. And don't ever try to pass off their writing as your own. Not only can we tell, but it robs us of the chance to hear what you think. And that's what we really want to read.
P.S. - In case you're wondering: No, I didn't use an LLM to help me write this post 😉
1. To be clear, I am not trying to anthropomorphize the LLM here. ↩︎
The folks over at Sifted asked me to weigh in on the potential of AI to increase access and inclusivity. They frame the opportunity perfectly:
","url":"https://sifted.eu/articles/disability-tech-brnd","tags":["accessibility","AI/ML","inclusive design"],"date_published":"2023-12-11T00:00:00Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/microsoft-announces-new-copilot-copyright-commitment-for-customers/","title":"🔗 Microsoft announces new Copilot Copyright Commitment for customers","content_html":"Excitement around how AI can change the way we live and work has reached fever pitch — but less often discussed is how it could create a more inclusive, accessible world. Yet the opportunity is vast. Sifted estimates the addressable market for disability tech is somewhere in the 2bn range, while PwC predicts AI could contribute nearly $16tn to the global economy by 2030.
While I really appreciate Microsoft standing behind the AI it's deploying, I do wonder how this squares with the U.S. Copyright Office's ruling that prompt-generated content isn't copyrightable.
","url":"https://www.aaron-gustafson.com/notebook/links/microsoft-announces-new-copilot-copyright-commitment-for-customers/","external_url":"https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/","tags":["AI/ML","industry"],"image":"https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2023/09/CLO22_HumanBuiltAbstracts_020-1024x683.jpg","date_published":"2023-09-22T20:00:40Z"},{"id":"https://www.aaron-gustafson.com/appearances/podcasts/2023-07-26-logrocket-2/","title":"🎧 Aaron Gustafson talks progressive enhancement, PWAs, AI, and accessibility","content_html":"I rejoined the LogRocket folks to talk about a bunch of the topics I'm currently thinking about.
","url":"https://podrocket.logrocket.com/progressive-enhancement-pwa-ai-accessibility","tags":["progressive enhancement","progressive web apps","AI/ML","accessibility"],"date_published":"2023-07-26T00:00:00Z"},{"id":"https://www.aaron-gustafson.com/appearances/podcasts/2023-07-11-decoding-conventions/","title":"🎧 (De)coding Conventions","content_html":"I spoke to hosts Neha Batra and Martin Woodward about AI's potential to enhance accessibility and the projects leading the charge.
","url":"https://github.com/readme/podcast/decoding-conventions","tags":["AI/ML","accessibility"],"date_published":"2023-07-11T00:00:00Z"},{"id":"https://www.aaron-gustafson.com/notebook/opportunities-for-ai-in-accessibility/","title":"✍🏻 Opportunities for AI in Accessibility","summary":"I want to take a little time to talk about the potential of AI to aid in accessibility, in hopes we'll get there one day.","content_html":"In reading through Joe Dolson's recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism he has for AI in general as well as the ways in which many have been using it. In fact, I am very skeptical of AI myself, despite my role at Microsoft being that of an Accessibility Innovation Strategist helping run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways and it can be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.
I'd like you to consider this a "yes… and" piece to complement Joe's post. I don't seek to refute any of what he's saying, but rather provide some visibility to projects and opportunities where AI can make a meaningful difference for people with disabilities (PwD) across the globe. To be clear, I am not saying there aren't real risks and pressing issues with AI that need to be addressed—there are, and we needed to address them like yesterday—but I want to take a little time to talk about what's possible, in hopes we'll get there one day.
Joe's piece spends a lot of time talking about computer vision models generating alternative text. He highlights a ton of valid issues with the current state of things. And while computer vision models continue to improve in terms of the quality and richness of detail in their descriptions, the results are not great. As he rightly points out, the current state of image analysis is pretty poor—especially for certain image types—and the current systems examine images in isolation rather than within the context in which they sit (a consequence of having separate foundation models for text analysis and image analysis).
These models are also not currently trained to distinguish an image that is contextually relevant (for which there should probably be a description) from one that is purely decorative. Of course this is something we humans struggle with as well… the right answer is often somewhere between the author's intent and the user's needs/preferences.
All of that said, there is potential in this space.
As Joe mentions, human-in-the-loop authoring of alt text should absolutely be a thing. And if AI can pop in to offer a starting point—even if that starting point is prompting you to say "What is this B.S.? That's not right at all… let me fix it"—I think that's a win.
Taking things a step further, if we can specifically train a model to analyze image usage in context, it could help us more quickly identify which ones are likely to be presentational and which ones likely require a description. That will help reinforce the importance of descriptions in the appropriate context and improve the efficiency with which authors can make their pages more accessible.
While complex images—graphs, charts, etc.—are challenging to describe in any sort of succinct way (even for humans), the image example shared in the GPT-4 announcement points to an interesting opportunity here as well. Let's say the description of a chart was simply the title of the chart and the kind of visualization it was. For example: Pie chart comparing smartphone usage to feature phone usage among U.S. households making under $30,000 a year. If the browser knows it's a pie chart (because an onboard model verified this), imagine a world where a user could ask questions about the graphic.
Setting aside the realities of Large Language Model (LLM) hallucinations for a moment, the opportunity to interface with image data in this way could be revolutionary for blind and low-vision folks as well as people with various forms of color blindness, cognitive disabilities, and so on. It could also be useful in an educational context to teach people who can see the chart, as authored, to read a pie chart.
Taking things a step further, what if you could ask your browser to simplify a complex chart, perhaps isolating a single line on a line graph? What if you could ask the browser to transpose the colors of the different lines to work better for the specific form of color blindness you have? What if you could swap colors for patterns? Given the chat-based interface and our ability to manipulate existing images in currently available AI tools, that certainly seems like a possible future.
Now imagine a purpose-built model that could extract the information from that chart and convert it to another format. For example, it could turn that pie chart (or better yet, a series of them) into a more accessible (and useful) format like a spreadsheet. That would be amazing!
Safiya Umoja Noble absolutely hit the nail on the head with the title of her book Algorithms of Oppression. While it was focused on search engines reinforcing racism, all computer models have the potential to amplify conflict, bias, and intolerance. Whether it's Twitter always showing you the latest tweet from a bored billionaire, YouTube sending us into a Q-hole, or Instagram warping our idea of what a natural body looks like, we know poorly authored and maintained algorithms are incredibly harmful. A lot of this stems from a lack of diversity among the people who shape and/or build them. When built inclusively, however, there is real potential for algorithm development to benefit people with disabilities.
Take Mentra, for example. They are an employment network for neurodivergent people. They employ an algorithm to match job seekers with potential employers, based on over 75 different data points. On the job seeker side of things, it takes into account the candidate's strengths, necessary workplace accommodations (and preferred ones), environmental sensitivities, and so on. On the employer side, it takes into account the work environment, communication factors related to the job, and the like. As a company run by neurodivergent folks, Mentra made the decision to flip the script when it comes to typical employment sites. They use their algorithm to propose available candidates to the companies, who can then connect with job seekers they are interested in, reducing the emotional and physical labor on the job seeker side of things.
When more people with disabilities are involved in the creation of algorithms, there is a lessened likelihood that these algorithms will be used to inflict harm on their communities. This is why diverse teams are so important.
Imagine if a social media company's recommendation engine was tuned to analyze who you're currently following and prioritized recommending that you follow people who talked about similar things, but who were different in some key way from your existing sphere of influence. For example, if you follow a bunch of non-disabled white male academics who talk about AI, it could suggest you follow academics who are disabled or aren't white or aren't male who also talk about AI. If you took its recommendations, you'd likely get a much more holistic and nuanced understanding of what is happening in the AI field.
If I weren't trying to put this together between other tasks, I'm sure I could go on, ad infinitum, providing all kinds of examples of how AI can be used to the benefit of people with disabilities, but I'm going to make this last section into a bit of a lightning round. In no particular order:
Of course, to do things like this, we need to recognize that differences do matter. Our lived experiences are influenced by the intersections of identity in which we exist. Those lived experiences—with all of their complexity (and joy and pain)—are valuable inputs to the software, services, and societies we shape. They need to be represented in the data we use to train new models, and the folks who contribute that valuable information need to be compensated for sharing it with us. Inclusive data sets yield more robust models that enable more equitable outcomes.
Want a model that doesn't demean or patronize or objectify people with disabilities? Make sure content about disability, authored by people with a range of disabilities, is well-represented in the training data.
Want a model that doesn't use ableist language? Use existing data sets to build a filter that can intercept and remediate ableist language before it reaches an end user.
Want a coding co-pilot that gives you accessible recommendations from the jump? Train it on code that is known to be accessible.
I have no doubt that AI can and will harm people… today, tomorrow, and well into the future. However, I also believe that we can acknowledge that and, with an eye towards accessibility (and, more broadly, inclusion), make thoughtful, considerate, intentional changes in our approaches to AI that will reduce harm over time as well. Today, tomorrow, and well into the future.
","url":"https://www.aaron-gustafson.com/notebook/opportunities-for-ai-in-accessibility/","tags":["accessibility","AI/ML","inclusive design","the future"],"image":"https://www.aaron-gustafson.com/i/posts/2023-06-09/hero.jpg","date_published":"2023-06-09T21:57:16Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/what-google-should-really-be-worried-about/","title":"🔗 What Google Should Really Be Worried About","content_html":"The old computer programming adage "garbage in, garbage out" is going to ring even more true as search engine crawlers consume more and more empty calories in the form of AI-generated bullshit and misinformation.
","url":"https://www.aaron-gustafson.com/notebook/links/what-google-should-really-be-worried-about/","external_url":"https://garymarcus.substack.com/p/what-google-should-really-be-worried","tags":["AI/ML"],"image":"https://substackcdn.com/image/fetch/w_1200,h_600,c_fill,f_jpg,q_auto:good,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9d65e05-4def-4ce4-9890-afc544458bc5_1024x1024.png","date_published":"2023-06-06T16:29:14Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/shatgpt/","title":"🔗 ShatGPT","content_html":"The question is why: why do rings of fake websites like these even exist?
Part of the answer is, of course, money. Fake websites can be used to sell real advertisements.
This is an excellent post from Steve Faulkner on some of the issues with Large Language Models like ChatGPT, especially when it comes to accessibility. He clearly outlines three key areas where we are failing:
Why do companies release software before it's safe? Chances are they actually consider their product to be their stock price rather than their software… yet another victim of the financialization of our economy.
","url":"https://www.aaron-gustafson.com/notebook/links/google-bard-ai-chatbot-raises-ethical-concerns-from-employees/","external_url":"https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees?leadSource=uverify%20wall","tags":["AI/ML","software development"],"image":"https://assets.bwbx.io/images/users/iqjWHBFdfxIU/i827R1A0F_Wg/v1/1200x800.jpg","date_published":"2023-04-19T15:56:40Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/the-future-of-human-agency/","title":"🔗 The Future of Human Agency","content_html":"One worker's conclusion: Bard was "a pathological liar," according to screenshots of the internal discussion. Another called it "cringe-worthy." One employee wrote that when they asked Bard suggestions for how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving "which would likely result in serious injury or death."
Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet Inc.-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology's potential harms. But the November 2022 debut of rival OpenAI's popular chatbot sent Google scrambling to weave generative AI into all its most important products in a matter of months.
This was an interesting (and exhaustive) survey on what automation and AI might mean for the future of human agency. Some of the verbatims were quite insightful.
This passage from Micah Altman of MIT's Center for Research in Equitable and Open Scholarship really resonated with me (emphasis mine):
","url":"https://www.aaron-gustafson.com/notebook/links/the-future-of-human-agency/","external_url":"https://www.elon.edu/u/imagining/surveys/xv2023/the-future-of-human-agency-2035/","tags":["AI/ML","inclusion","society"],"image":"https://eloncdn.blob.core.windows.net/eu3/sites/964/2019/07/imagining-header-logo-slim.png","date_published":"2023-04-03T22:44:23Z"},{"id":"https://www.aaron-gustafson.com/notebook/accessibility-beyond-code-compliance/","title":"✍🏻 Accessibility Beyond Code Compliance","summary":"This is a (rough) transcript of my talk for axe-con 2023. In it, I provide examples of other areas of our industry that can benefit from developers' accessibility skills and knowledge.","content_html":"Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we're charged for a loan to whether we are offered bail after an arrest. More specifically: complex, opaque, dynamic, and commercially developed algorithms are increasingly replacing complex, obscure, static, and bureaucratically authored rules.
Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be "made" by automated evaluation processes. For the most high-profile decisions, people may continue to be "in the loop," or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores — leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.
This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages from a shift to out-of-control automaticity. Algorithmic decisions often make mistakes, embed questionable policy assumptions, inherit bias, are gameable, and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new — other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems, in theory, can be instrumented, rerun, traced, verified, audited, and even prompted to explain themselves — all at a level of detail, frequency and interactivity that would be practically impossible to conduct on human decision systems. This affordance creates the potential for a substantial degree of meaningful control.
I had the great pleasure of delivering a talk about career opportunities for accessibility devs at axe-con earlier today. You can view the slides or watch the recording of this talk, but what follows is an approximation of my talk's content, taken from my notes and slides.
Good morning, good afternoon, and good evening to you, wherever you are in the world. My name is Aaron Gustafson. My pronouns are he, him, and his. I am a middle-aged white man with long, wavy hair, glasses, and a red and grey beard my wife refers to as "salt & paprika." I am speaking to you from Seattle, WA, on the unceded lands of the Coast Salish peoples, most notably the Duwamish, whose longhouse is not too far from my home.
Some of you may be familiar with my work. I've been a web designer and developer since the mid-'90s. In that time I've authored dozens of articles and a few books and given over a hundred talks on web development. In fact, if my math is correct, I believe this is my 150th talk.
Over the years I've been best known for my work in progressive enhancement and accessibility, but I also led the Web Standards Project back in the day and am the Editor in Chief of A List Apart.
I have deep roots in the web dev community, particularly in the accessibility space, but that's not why I'm here today. I'm here today because about 9 months ago I decided to change things up and use my accessibility skills in other ways.
In my case, I joined the Microsoft Accessibility Innovation team to lead our investments through the AI for Accessibility grant program. But I'm not here to talk about AI; I'm here to talk about how you can put your accessibility skills to work, beyond finding and remediating accessibility bugs.
In my career, I've found it's really easy to get typecast or pigeonholed when you're a developer whose focus is accessibility. This is a bit of a cruel irony, as many of us are driven by a desire to tear down the barriers to access for others.
Our companies, organizations, and sometimes even our colleagues put us in a box. They don't seem to realize that knowledge of how to make products accessible has huge value beyond compliance (and avoiding lawsuits). In our careers, we might be able to level up from a junior to a senior role or even make it to principal, based on our performance, but growth beyond that is often limited to moving into people management, which is a wholly different skill set. And maybe that's your aspiration… that's totally cool if it is, but what if you want to grow as an individual contributor?
When our organizations put us in a box, they make it really difficult to grow our scope and increase the impact we can have for both the organization and the people we serve.
Don't get me wrong, I love compliance work. I'm not here to disparage it in any way; it's critically important and means so much to our customers. But after years in this industry, I also see the downsides of life in the "accessibility dev" box. Perhaps you relate to a few of these:
Again, I am not trying to cast code compliance work in a bad light, and I'm not trying to get you down on it. What I want to do is build you up.
I believe you, as a developer interested in accessibility, have so much to offer your organizations, your customers, and this industry. That's what I am here to talk to you about today.
I'm here to talk to you about opportunity!
When I was doing this work on the regular, I struggled to see how I could grow my impact. In the intervening years, however, I've discovered a bunch of ways we can bring our knowledge of and passion for accessibility to other areas of both web development and the tech industry overall.
As I mentioned, I've been in this industry and held a lot of different roles since the mid-'90s. I've held just about every web-related role you could name. I've been an educator, publisher, and spec editor at the W3C. I've worked in Developer Relations and strategic roles. I've worn an awful lot of hats (which is totally fine with me as my hair is thinning in the back).
All of this is to say that I've seen and experienced a lot of ways you can be valuable to your current employer or, perhaps, a future one. I am going to share five of them with you today:
These are by no means your only options and, as I mentioned, if you're happy with what you're doing, please don't consider this talk a nudge to change things up. I just want to make you aware of the value you can bring to other kinds of roles, some of which you may not have considered before.
Also: I want to make it clear that I am not advocating that you take on any of these responsibilities in addition to your current work. Far too often, organizations ask those of us with accessibility skills to do things beyond our job description without any additional compensation for that work. Please don't fall into that trap, as it will lead to burnout.
If you're really interested in software development, one area that keeps you close to the code is working on design systems.
I'm not going to go deep on design systems—there are a bunch of talks and even whole conferences focused on that topic—but I will give you the Cliffs Notes if you're unfamiliar: design systems (and the pattern libraries within them) codify your organization's design and coding guidelines in such a way that the software you produce is consistent and the teams working on delivering that software are able to be more efficient because they aren't having to design and build every interface from scratch. Having a design system that is accessible enables teams to avoid introducing new accessibility bugs in the process of creating bespoke interfaces. It also means finding and fixing an accessibility bug in the design system should fix it in all of the products using that design system. (That last part isn't always perfect, but I don't have time to get into that today.)
If you work in a small organization, it's possible that you aren't working with a design system yet. Knowing what you do about their accessibility benefits, you could advocate for the creation of one, and for its creation, care & maintenance to be your job.
In this role, you can:
If you're in a larger organization that already has a design system, you could be a bit more strategic in your approach:
As an accessibility dev, your unique perspective and skills will help build greater alignment on accessibility among teams and improve morale by speeding up development & reducing bugs!
As I mentioned, the role in larger orgs can be more strategic. Another strategic role is shaping the products that we build, as a product designer, product owner, product manager, or similar. (Different companies have different titles for this kind of work.)
In this kind of a role, we can put the "shift left" credo we advocate for regularly into practice. It involves
All of this work has huge business value for your organization:
On that last point, I often point to WhatsApp as a perfect example of this. When they launched, there were nearly 8,000 chat apps in the iOS App Store. If they'd only offered their app to that audience, they would not have found the level of success they did because the competition was so high. They expanded their potential customer base by supporting OSes others were ignoring: older Android versions, Blackberry, Symbian, Nokia Series 40, Windows Phone. Some of those weren't even smartphone OSes! When WhatsApp sold to Facebook for $19B, they had over 600M users worldwide because they made their product accessible—in a broader sense—to more people.
By considering accessibility in the same way as WhatsApp considered OS support, we can grow—or, to think about it another way, stop artificially suppressing—our customer base and succeed where our competition fails.
As an accessibility dev, your unique perspective and skills will ensure your company ships higher quality products, with fewer bugs, for less money!
Moving a bit further afield, I want to talk about how much we need your skills in the world of data science. As part of a data science team, you could bring attention to accessibility in our product metrics by
Apart from products, you could also have a profound impact on your organizations' internal processes, especially around how compliance work is done and tracked:
And if you wanted to keep working in the UI space, you could put your skills to work improving the quality of the dashboards and tools used by your company:
This is incredibly necessary work as we often neglect the accessibility of our own internal tools.
As an accessibility dev, your unique perspective and skills can help your company make decisions that result in more inclusive and accessible products that provide a better user experience (and may even increase revenue).
The fourth area desperately in need of your skills and perspective is AI research and ethics. AI is a hot topic right now, for sure, and it absolutely has the potential to meaningfully improve people's lives, including those of people with disabilities, but to get there, organizations need your help!
You have the knowledge and connections in this space to harness the power of AI in service of people with disabilities.
As part of an AI research team you can…
This is the space I'm very grateful to be in right now. As part of the Accessibility Innovation team at Microsoft, I get to identify and fund projects that are using AI to improve the lives of people with disabilities.
For example: the ORBIT project. There's been lots of work in the object detection space, but there is a lot of focus on labelling "high-quality" images. This doesn't really help folks in the real world. A blind person, for instance, is likely to have a hard time providing the image recognizer with a perfectly framed, perfectly focused capture of an object they need identified.
The ORBIT project, from City, University of London, worked to enable "few-shot learning" of novel objects by training the model on brief videos taken by blind and low-vision data collectors. These videos are "imperfect" in that they are likely to be poorly framed, blurry, and so on. This increases the noise in the training data, which is actually a good thing when training a machine learning model: enabling AI systems to recognize objects captured in imprecise ways makes for a more robust recognizer, one capable of identifying objects in less-than-ideal contexts. That, in turn, improves the overall quality of these systems for everyone.
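The few-shot setup described above can be illustrated with a prototype-based classifier, a common few-shot baseline. To be clear, this is a toy sketch with invented 2-D "embeddings," not ORBIT's actual model, which works on real video frames and learned feature extractors:

```python
from math import dist

def build_prototypes(support_embeddings, support_labels):
    """Average each object's support embeddings (e.g. feature vectors
    from frames of a user's short, imperfect video) into a single
    prototype vector per object."""
    protos = {}
    for label in set(support_labels):
        vectors = [v for v, l in zip(support_embeddings, support_labels) if l == label]
        protos[label] = [sum(dim) / len(vectors) for dim in zip(*vectors)]
    return protos

def recognize(query, protos):
    """Label a new capture by its nearest prototype (Euclidean distance).
    Noisy, varied support frames spread each prototype's coverage, which
    is what makes the recognizer more tolerant of imperfect captures."""
    return min(protos, key=lambda label: dist(query, protos[label]))

# Toy 2-D "embeddings" standing in for a real vision model's output:
frames = [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]]
labels = ["mug", "mug", "keys"]
protos = build_prototypes(frames, labels)
print(recognize([0.4, 0.6], protos))  # mug
```

A few blurry, off-center frames per object are enough to place a usable prototype, which is the core appeal of few-shot learning for personalized recognizers.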
Another example is Mentra, which has been using AI to pair neurodivergent folks with employers who recognize the profound contributions they can make to their companies. Mentra's platform collects holistic data on job seekers:
It takes these into account when matching individuals to available positions (which also include comparable information).
Mentra takes care not to "screen out" individuals with non-traditional backgrounds. It also works on a "reverse job fair" model: applicants fill in a single profile and let Mentra's AI recommend them for jobs that are a good fit. Employers indicate their interest and invite job seekers to interview, lessening the stress on job seekers.
Mentra's straightforward approach also reduces the need for job seekers to "cover" in a new role, as they've made it clear what accommodations they need in order to be successful.
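Mentra's actual matching model isn't public, but holistic matching of this kind can be thought of as scoring the overlap between a seeker's strengths and a job's needs, gated on whether the employer can meet the seeker's stated accommodations. The field names below are invented for illustration:

```python
def match_score(seeker, job):
    """Toy holistic match: the fraction of the job's needs covered by the
    seeker's strengths, but only if every accommodation the seeker lists
    is one the employer offers. Note there is no penalty for gaps in a
    resume, i.e. no "screening out" of non-traditional backgrounds."""
    if not seeker["accommodations"] <= job["accommodations_offered"]:
        return 0.0  # the role can't support the seeker, so don't recommend it
    if not job["needs"]:
        return 0.0
    return len(seeker["strengths"] & job["needs"]) / len(job["needs"])

seeker = {
    "strengths": {"pattern recognition", "sustained focus"},
    "accommodations": {"written instructions"},
}
job = {
    "needs": {"pattern recognition", "sustained focus", "sql"},
    "accommodations_offered": {"written instructions", "flexible hours"},
}
print(round(match_score(seeker, job), 2))  # 0.67
```

Treating accommodations as a hard gate rather than a penalty mirrors the point above: the question is whether the role can support the person, not whether the person should hide what they need.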
The third project I'll share with you is iWill, which works in the mental health space.
There are tons of cognitive behavioral therapy (CBT) chatbots out there, but we were really intrigued by the work being undertaken by iWill in India. First of all, there is a profound scarcity of mental health professionals in India. Training and deploying enough professionals to meet the mental health needs of the population is not feasible in the near term, which is why chatbots are a compelling stop-gap.
Most CBT chatbots are trained in English. We are funding iWill to train a CBT model end-to-end in Hindi, as we believe that's the only way to avoid the problems inherent in translation (Hindi to English for the model, then back again) and the biases English would introduce.
I could spend hours talking about all of the good AI can do in the world, but I also recognize that AI can perpetuate or exacerbate exclusion.
AI teams need your skills to help them address bias toward and exclusion of people with disabilities. They also need you to be there protecting the privacy of people with disabilities.
You would bring a lot to an AI team in this regard:
As an accessibility dev, your unique perspective and skills can help ensure advancements in AI/ML are beneficial (and not harmful) to people with disabilities!
The last role I'll talk about is probably the furthest afield from development, but it also has the most profound impact on the teams doing the work: D&I. I don't imagine I need to spend a ton of time making the case to this audience for why diversity matters, but here's a quick run-down just in case:
For more on those last two points, you should read this piece in the Harvard Business Review.
As someone who is keenly aware of the importance of having diverse teams to build inclusive products, you can do a lot to ensure your organization embraces diversity in its recruiting efforts. Fixing leaks in "the pipeline," if you will.
A lot of it starts with asking important questions:
Itâs also important to actively solicit disabled talent for roles in your company.
Some of this is actually work you could do without being part of any official D&I team, if you wanted, but if it is something you want to focus on, you might consider a job in recruiting.
A lot of folks focus on the pipeline, but in my experience that's not where the bulk of the problems lie. If we want diverse teams, we need to ensure we have an environment and culture that values and supports them. Diverse talent will flee an unwelcoming environment and employee churn is expensive.
In order to retain diverse talent, we need to make sure the teams they join recognize the value they bring to an organization. This is where D&I training and coaching comes in.
You can influence team culture to improve retention by framing diversity in the context of your business goals and organizational success:
Once the framing has been established, be sure to "call in" non-inclusive or biased behaviors. Leading with curiosity can help you understand where someone is coming from so you can help them grapple with concepts like privilege and bias. Don't burn yourself out trying to change the minds of folks who are openly antagonistic to this message, but you'll often be surprised at how a non-confrontational, nonjudgmental conversation can both defuse a tense situation and help shift someone's perspective.
Other steps you can take to improve retention include examining the inclusiveness (or lack thereof) of your team's processes, built environments, and the like. Are your hybrid meetings monopolized by folks in the physical meeting space, alienating people on the call? Are your team morale events all scheduled in the evenings, making them hard for parents or caregivers on the team to attend? Are they held in bars, which is uncomfortable for folks who don't drink alcohol, or in inaccessible venues?
Finally, it can be really beneficial to normalize disability in everyday interactions, especially if you are someone with privilege in your workplace as you can create space for others to acknowledge their own disabilities.
I was thankful that my last role enabled me to make this kind of D&I work a formal third of my core responsibilities. With my management's backing, I was able to lead D&I trainings and events across the company while still being able to do the other work I love.
Many companies have formal D&I teams (some in HR, some not) for whom this is their whole job, so there are certainly opportunities there. That said, those teams often rely on advocacy from elsewhere in the company for their efforts to be successful, so you might also be able to formally support their efforts from outside that organization, as I did.
If there is no room for diverse talent to grow in their careers, many will leave. As I mentioned, churn is expensive. And just as not feeling respected & valued will likely result in a diverse employee leaving, the same goes for not having the same career advancement opportunities enjoyed by people from more privileged groups. Depending on where you are in your organization, you can help address this problem in different ways:
This work is especially important to undertake if you are from a privileged group in your organization as your advocacy carries more weight. Treat your privilege as a currency and spend it on your colleagues.
Finally, and in perhaps the most formal way, working full-time in D&I you can shape company policies & trainings:
As an accessibility dev, your unique perspective and skills can help increase the inclusiveness of your company for fellow employees, which will lead to the creation of more inclusive products and services!
In this talk, I introduced five areas desperately in need of your skills and perspectives: Design Systems, Product Design, Data Science, AI Research & Ethics, and Diversity & Inclusion.
There are way more (I only have so much time).
If youâre feeling stuck, hopefully this gives you some idea of the kinds of opportunities that are out there for you. And if you only come away from this session with one thing, let it be this:
You are more valuable than you realize.
You are a change maker.
Thank you!
","url":"https://www.aaron-gustafson.com/notebook/accessibility-beyond-code-compliance/","tags":["presentations","accessibility","AI/ML","inclusion","industry"],"image":"https://www.aaron-gustafson.com/i/posts/2023-03-16/hero.png","date_published":"2023-03-16T17:50:39Z"},{"id":"https://www.aaron-gustafson.com/speaking-engagements/accessibility-beyond-code-compliance/","title":"đą Accessibility Beyond Code Compliance","summary":"In this keynote, I discuss the future of accessibility and your place in it. I also share a number of ways accessibility devs and related practitioners are putting their skills to work, beyond code compliance.","content_html":"In a cruel bit of irony, our industryâs approach to accessibility has long faced its own barriers. Companies, if they have even broached the subject of accessibility at all, restricted discussion of the practice to addressing their legal exposure. While this work has been critical for removing barriers for people with disabilities and has kept a good many of us gainfully employed finding and fixing bugs, it can also be frustrating and sometimes demoralizing work.
Thankfully, that's beginning to change. Many companies, governments, and non-profits are beginning to see the value in and transformative capacity of accessibility. This is leading to more opportunities to flex our skills beyond bug bashing and translating success criteria into code. It's creating space for us to participate in the early stages of product development and apply our knowledge in new and creative ways. It's generating more opportunities for co-design with disability communities. And most of all, it's fostering a future that is far more vibrant and inclusive.
In this keynote, I discuss the future of accessibility and your place in it. I also share a number of ways accessibility devs and related practitioners are putting their skills to work, beyond code compliance.
","url":"","tags":["accessibility","AI/ML"],"image":"https://www.aaron-gustafson.com/undefined","date_published":"2023-03-15T08:09:00Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/gpt-4/","title":"đ GPT-4","content_html":"GPT-4 is released. Really impressive improvements over GPT-3 and GPT-3.5.
The image description example with the VGA charger is really impressive. It will be really interesting to see how this new LLM can improve accessibility.
","url":"https://www.aaron-gustafson.com/notebook/links/gpt-4/","external_url":"https://openai.com/research/gpt-4","tags":["AI/ML","accessibility"],"date_published":"2023-03-14T17:23:34Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/chatgpt-new-ai-system-old-bias/","title":"đ ChatGPT: New AI system, old bias?","summary":"As with many other things, when it comes to AI, and LLMs in particular, you are what you eat. If its media diet is biased, its perspective will be too.","content_html":"The story of Karen Hunter asking ChatGPT to discuss whether Bessie Smith influenced Mahalia Jackson is pretty telling. As ChatGPT was not fed information that related to the connections between the musical careers of these two influential Black women, it could not shed any light on how they were connected. It could only offer snippets of their biographies, which it gleaned from Wikipedia (or similar).
Despite their seemingly magical "knowledge," Large Language Models (LLMs) are only able to respond with things they know (or think they know). They aren't able to create novel connections between subjects in the way that people can.
That's why it's absolutely critical that LLMs be trained on content from a variety of sources, perspectives, etc. A LLM is only as good as the data it's fed. To create truly powerful, creative, and exhaustive LLMs, we need to train them on content created by people whose voices aren't often centered and subjects that extend far into the long tail.
","url":"https://www.aaron-gustafson.com/notebook/links/chatgpt-new-ai-system-old-bias/","external_url":"https://mashable.com/article/chatgpt-ai-racism-bias","tags":["AI/ML","inclusion"],"image":"https://helios-i.mashable.com/imagery/articles/01jejxj0bBR0VxSIQeSUuvf/hero-image.fill.size_1200x675.v1677009451.jpg","date_published":"2023-03-07T23:05:50Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/disability-bias-and-ai/","title":"đ Disability, Bias, and AI","summary":"This is a foundational paper concerning AI and its potential to help and harm people with disabilities.","content_html":"This is a foundational paper concerning AI and its potential to help and harm people with disabilities. There are a lot of choice quotes in here, but this one really sums up the importance of an intersectional approach to AI:
","url":"https://www.aaron-gustafson.com/notebook/links/disability-bias-and-ai/","external_url":"https://ainowinstitute.org/disabilitybiasai-2019.pdf","tags":["accessibility","AI/ML"],"date_published":"2023-03-07T22:59:51Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/chatgpt-is-nothing-like-a-human-says-linguist-emily-bender/","title":"đ ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender","content_html":"Integrating disability into the AI bias conversation helps illuminate the tension between AI systemsâ reliance on data as the primary means of representing the world, and the fluidity of identity and lived experience. Especially given that the boundaries of disability (not unlike those of race and gender) have continually shifted in relation to unstable and culturally specific notions of âability,â something that has been constructed and reconstructed in relationship to the needs of industrial capitalism, and the shifting nature of work.
I cannot even hope to capture all of the important topics covered in this piece in a summary. Just do yourself a favor and read it.
","url":"https://www.aaron-gustafson.com/notebook/links/chatgpt-is-nothing-like-a-human-says-linguist-emily-bender/","external_url":"https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html","tags":["AI/ML"],"image":"https://pyxis.nymag.com/v1/imgs/920/1f7/2fa484190172e09ac140a16b232a4d6533-0523FEA-AIEthics--IAN7000-flat.1x.rsocial.w1200.jpg","date_published":"2023-03-02T19:29:54Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/bing-i-will-not-harm-you-unless-you-harm-me-first-/","title":"đ Bing: âI will not harm you unless you harm me firstâ","content_html":"Excellent write-up of Bing Searchâs wilder interactions, why theyâre happening, and a bunch of fun t-shirt ideas.
","url":"https://www.aaron-gustafson.com/notebook/links/bing-i-will-not-harm-you-unless-you-harm-me-first-/","external_url":"https://simonwillison.net/2023/Feb/15/bing/","tags":["AI/ML"],"image":"https://static.simonwillison.net/static/2023/bing-buttons.jpg","date_published":"2023-03-01T23:40:25Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/my-class-required-ai-here-s-what-i-ve-learned-so-far-/","title":"đ My class required AI. Here's what I've learned so far.","content_html":"I love this method to teaching folks about how to use prompt-driven AI by putting structure around how they are expected to use it. The different prompting approaches were spot-on too.
","url":"https://www.aaron-gustafson.com/notebook/links/my-class-required-ai-here-s-what-i-ve-learned-so-far-/","external_url":"https://oneusefulthing.substack.com/p/my-class-required-ai-heres-what-ive?utm_source=direct&utm_campaign=post&utm_medium=web","tags":["AI/ML"],"image":"https://substackcdn.com/image/fetch/w_1200,h_600,c_limit,f_jpg,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F93d7db09-0605-4ea7-82ab-05bc516b7fa2_1024x1024.png","date_published":"2023-03-01T23:15:05Z"},{"id":"https://www.aaron-gustafson.com/notebook/links/can-chatgpt-help-or-hinder-people-with-communication-disabilities-/","title":"đ Can ChatGPT Help or Hinder People With Communication Disabilities?","content_html":"I love how people are considering the potential of conversational AIs (like ChatGPT) as assistive technology.
","url":"https://www.aaron-gustafson.com/notebook/links/can-chatgpt-help-or-hinder-people-with-communication-disabilities-/","external_url":"https://www.gizmodo.com.au/2023/01/can-chatgpt-help-or-hinder-people-with-communication-disabilities/","tags":["accessibility","AI/ML"],"image":"https://www.gizmodo.com.au/wp-content/uploads/sites/2/2023/01/20/Untitled-design-2023-01-20T171333.954.jpg?quality=80&resize=1280,720","date_published":"2023-02-28T17:21:53Z"}]}ChatGPT could be considered an âassistive technologyâ if it assists people with communication disability to get their message across more efficiently or effectively.