Dispatches From The Internets

Creating a more accessible web with ARIA Notify

I just saw this very exciting announcement on the Edge Dev Blog:

ARIA Notify is an ergonomic and predictable way to tell assistive technologies (ATs), such as screen readers, exactly what to announce to users and when.

In its simplest form, developers can call the ariaNotify() method with the text to be announced to the user.

Here’s what it looks like:

// Dispatch a normal priority notification
document.ariaNotify(
  "Background task completed",
  { "priority": "normal" }
);

I’m particularly excited by this because of how much it simplifies the update process for engineers. Previously, they needed to manage updates to an aria-live DOM node with the appropriate announcement level and hope for the best. That approach was plagued with issues, ranging from lag — DOM manipulation takes time to propagate to assistive technologies — to confusion over whether “polite” or “assertive” was the right choice.
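For contrast, here’s a rough sketch of that legacy aria-live pattern. The function names (`createLiveRegion`, `announce`) are my own for illustration, not from any library, and the hiding technique is one common variant:

```javascript
// Legacy approach: maintain a visually hidden live region in the DOM
// and swap its text content to trigger an announcement.
function createLiveRegion(doc, politeness = "polite") {
  const region = doc.createElement("div");
  region.setAttribute("aria-live", politeness); // "polite" or "assertive"?
  region.setAttribute("role", "status");
  // Visually hidden, but still exposed to assistive technologies
  region.style.position = "absolute";
  region.style.width = "1px";
  region.style.height = "1px";
  region.style.overflow = "hidden";
  region.style.clipPath = "inset(50%)";
  doc.body.appendChild(region);
  return region;
}

function announce(region, message) {
  // Clearing first nudges some screen readers to re-announce repeated text
  region.textContent = "";
  region.textContent = message;
}

// In the browser:
// const liveRegion = createLiveRegion(document);
// announce(liveRegion, "Background task completed");
```

Note all the places this can go sideways: the region has to exist (and have been seen by the AT) before you announce, repeated messages may be swallowed, and you’re forced to pick a politeness level up front. `ariaNotify()` sidesteps all of that.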

The ARIA Notify proposal is clear, concise, and far more likely to get used and — more importantly — used properly.

It’s currently in an Origin Trial. Please give your feedback so we can get this into every browser sooner rather than later.


Optimizing Your Codebase for AI Coding Agents

A cute red robot is adrift in a choppy, foggy sea. A lighthouse on a cliff in the distance is beaming light across the sea, creating a safe, clear path to a nearby shore. The robot is steering the boat toward shore, following the path through the fog created by the lighthouse. Vintage travel poster aesthetic.

I’ve been playing around a bit with GitHub Copilot as an autonomous agent to help with software development. The results have been mixed, but positive overall. I made an interesting discovery when I took the time to read through the agent’s reasoning over a particular task. I thought the task was straightforward, but I was wrong. Watching the agent work was like watching someone try to navigate an unfamiliar room, in complete darkness, with furniture and Lego bricks scattered everywhere.



Default Isn’t Design

When one approach becomes “how things are done,” we unconsciously defend it even when standards would give us a healthier, more interoperable ecosystem. Psychologists call this reflex System Justification. Naming it helps us steer toward a standards-first future without turning the discussion into a framework war.

This whole piece is an excellent discussion about how tools can become an identity and why that’s a bad thing.


Identifying Accessibility Data Gaps in CodeGen Models

A pop-art style illustration of a wide chasm. On the left side of the chasm stands a small, cute, red robot, gazing to the right, across the abyss. On the right side of the chasm is his destination: a finish line flag. The flag reads “Accessible.”

Late last year, I probed an LLM’s responses to HTML code generation prompts to assess its adherence to accessibility best practices. The results were unsurprisingly disappointing — roughly what I’d expect from a developer aware of accessibility but unsure how to implement it. The study highlighted key areas where training data needs improvement.


Designing for Distress: Understanding Users in Crisis

In a distressing moment, it’s like you’re rushing to the airport — you’re just looking for help right now. When you aren’t distressed, it’s like you’re on vacation. You can take your time, you’re more open to exploring.

In a recent study, the VA learned a lot from users navigating acute distress — and why typical UX patterns fail. This is highly recommended reading for anyone working in the design space.


Why I’m Betting Against AI Agents in 2025 (Despite Building Them)

I think it’s really key to understand what AI is good for and where it falls short. Not just in terms of results, but in terms of externalities as well.

To that end, this is a piece worth reading. To me, the golden nugget is this (when discussing who will succeed with AI agents):

[T]he winners will be teams building constrained, domain-specific tools that use AI for the hard parts while maintaining human control or strict boundaries over critical decisions. Think less “autonomous everything” and more “extremely capable assistants with clear boundaries.”


Why AI Won’t Destroy Us with Microsoft’s Brad Smith

In this episode, Trevor Noah and Brad Smith talk about a lot of things, but I think the most prescient is their discussion of information bubbles and organizing around labels. Trevor astutely observes how the source of information often colors how we receive that information and whether we consider it or reject it out of hand. In today’s media ecosystem, the system of “in groups” and “out groups” creates deep division and makes us more susceptible to misinformation.