Dispatches From The Internets

On CrowdStrike, dependencies, and building robust products on the web

I have no opinion on CrowdStrike as a company or service. I’ve never used their products. In fact, prior to the incident last week, I had only a passing familiarity with their name — likely from headlines in the tech press I’d scrolled past at some point. I now have a vague understanding of what they do, but only because of what I learned about the cause of the outage. Reflecting on it, I can’t help but think of the lesson it holds for web designers and developers.


Requirement Rules for Checkboxes

HTML checkboxes debuted as part of HTML 2.0 in 1995. The ability to mark an individual checkbox as required became part of the HTML5 spec published in 2014. A decade later, we can still only make checkboxes required one checkbox at a time; there’s no native way to require a certain number of selections from a group. To overcome this limitation, I created a jQuery plugin that let me indicate that a user should choose a specific number of items from within a checkbox group. Yesterday I turned that plugin into a web component: form-required-checkboxes.
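
If you’re curious how a component like this can be built, here’s a minimal sketch of the idea (not the actual form-required-checkboxes implementation; the tag and attribute names are placeholders) that uses setCustomValidity() so the browser’s built-in form validation blocks submission until enough boxes are checked:

<form>
  <required-checkbox-group min="2">
    <label><input type="checkbox" name="topics" value="html"> HTML</label>
    <label><input type="checkbox" name="topics" value="css"> CSS</label>
    <label><input type="checkbox" name="topics" value="js"> JavaScript</label>
  </required-checkbox-group>
  <button>Submit</button>
</form>

<script>
  // Sketch only: flag the first checkbox as invalid until the minimum is met.
  customElements.define('required-checkbox-group', class extends HTMLElement {
    connectedCallback() {
      this.addEventListener('change', () => this.#validate());
      this.#validate();
    }
    #validate() {
      const boxes = [...this.querySelectorAll('input[type="checkbox"]')];
      const min = Number(this.getAttribute('min') || 1);
      const checked = boxes.filter((box) => box.checked).length;
      boxes[0]?.setCustomValidity(
        checked >= min ? '' : `Please choose at least ${min} option(s).`
      );
    }
  });
</script>

The real component may take a different approach under the hood; the point is simply that custom validity messages let a group-level rule plug into the validation UI the browser already provides.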



An even faster Microsoft Edge

Progressive enhancement for the win! This post from the Edge team demonstrates that producing markup directly rather than relying on JavaScript to do it for you is faster — even in the browser UI!

In this project, we built an entirely new markup-first architecture that minimizes the size of our bundles of code, and the amount of JavaScript code that runs during the initialization path of the UI. This new internal UI architecture is more modular, and we now rely on a repository of web components that are tuned for performance on modern web engines.  We also came up with a set of web platform patterns that allow us to ship new browser features that stay within our markup-first architecture and that use optimal web platform capabilities.
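
The post doesn’t share Edge’s internal code, but the markup-first idea is something you can apply in your own components today. Here’s a generic illustration of my own (the toolbar-button element is hypothetical) using declarative shadow DOM, which lets a component’s internals arrive as HTML the parser can render immediately instead of being built with JavaScript during initialization:

<toolbar-button>
  <template shadowrootmode="open">
    <style>button { padding: 0.5em 1em; }</style>
    <button><slot></slot></button>
  </template>
  Refresh
</toolbar-button>

Because the shadow root is declared in the markup, the button renders before (and even without) any script running; JavaScript only has to show up later to add behavior.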


Link Rot and Digital Decay on Government, News and Other Webpages

A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.

Link rot, especially in government and legal contexts, is a tremendous problem, which is why we need services like the Internet Archive and Perma.cc. If you have the means, please consider supporting these and similar projects!


Why I Care Deeply About Web Accessibility And You Should Too

I agree with so much of this piece… especially the expansive view of accessibility that is inclusive of both the disability divide and the digital divide.

Great summary here:

[M]y passion for accessibility stems from experiencing accessibility barriers personally, observing their impact on others, and holding the conviction that technology should tear down divides - not erect new ones. I want to fulfill, and help you fulfill, the web’s promise of equal access and opportunity for everyone, regardless of circumstances. Digital accessibility should not be an accommodation but a fundamental right and prerequisite for technology to truly better humanity.



Accessibility Training at Microsoft

At Microsoft, we’ve invested a lot into accessibility upskilling across the company. And now we’ve made our Accessibility Fundamentals learning path freely available for anyone to take, either on MS Learn or within another learning environment via its SCORM (Sharable Content Object Reference Model) course package.


Better form UX with the CSS property field-sizing

Form fan that I am, I’m excited to have CSS that enables form fields (especially textarea elements) to grow to accommodate the content someone is entering.

I distinctly remember spending a good deal of time putting together a proof-of-concept for Twitter DMs to show how it could be done via JavaScript without killing performance, but this is far more elegant.
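
If you want to try it, the property is a one-liner. Support is still rolling out across browsers, so treat it as a progressive enhancement:

textarea,
input[type="text"] {
  field-sizing: content; /* grow (and shrink) to fit what's been typed */
  max-block-size: 8lh; /* optional cap so very long entries scroll instead */
}

Fields default to field-sizing: fixed, so browsers that don’t understand the property simply keep the behavior we have today.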


No Robots(.txt): How to Ask ChatGPT and Google Bard to Not Use Your Website for Training

The Electronic Frontier Foundation (EFF) has you covered if you’d like to opt out of having your site crawled for training tools like OpenAI’s ChatGPT and Google’s Gemini. Just add these rules to your robots.txt file:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

Building on this, you could disallow only specific directories (e.g., the one where you keep your images) rather than your entire site:

User-agent: GPTBot
Disallow: /i/

User-agent: Google-Extended
Disallow: /i/

I’ve decided, for now at least, to allow my text content to be crawled, but I may change my mind in the future.