Somehow my students are allergic to semantics and shit. And they’re not alone. If you look at 99% of all websites in the wild, everybody who worked on them seems to be allergic to semantics and shit. On most websites heading levels are just random numbers, loosely based on font-size. Form fields have no labels. Links and buttons are divs. It’s really pretty bad. So it’s not just my students, the whole industry doesn’t understand semantics and shit.
I feel this… deeply. And I 100% agree with where Vasilis is coming from here. I do take a bit of umbrage with the idea that heading levels don’t matter—they really do—but his point about getting folks excited about the stuff they get for free by paying attention to their markup is something I’ve been pushing for years as well.
If you’re interested in a related deep dive into HTML’s lack of dependencies, check out this piece I wrote for Smashing Magazine. If you’d like to dive deeper into forms, I have this talk you might like.
When it comes to sharing, there are myriad ways to do it. If you’re at all familiar with my work, it should come as no surprise that I always start with a universally-useable and accessible baseline and then progressively enhance things from there. Thankfully, every social media site I commonly use (with the exception of the Fediverse) makes this pretty easy by providing a form that accepts inbound content via the query string.1 You can try LinkedIn’s to see it in action.
Each service is a little different, but all function similarly. I support the following ones in this site:
| Site | Destination | URL Param | Optional Params |
|---|---|---|---|
| Twitter / X | https://twitter.com/ | url | |
| Hacker News | https://news.ycombinator.com/ | u | t = the title you want to share |
| Facebook | https://www.facebook.com/ | u | |
| LinkedIn | https://www.linkedin.com/cws/share | url | |
| Pinterest | https://pinterest.com/ | url | media = an image to share; description = the text you want to share |
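By way of illustration (this isn’t code from this site), here’s a sketch of how one of these share URLs might be composed by hand; it assumes Hacker News’ submitlink path along with the u and t params from the table above:

// A sketch of composing a share URL by hand, assuming Hacker News’
// submitlink endpoint and its "u" (URL) and "t" (title) params.
var shareUrl = "https://news.ycombinator.com/submitlink" +
  "?u=" + encodeURIComponent( window.location.href ) +
  "&t=" + encodeURIComponent( document.title );
// e.g. https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fexample.com%2F&t=My%20Page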
Using this information, I created a partial template for use on any page in this site (though I mainly use it on blog posts right now). Each link includes useful text content (e.g., “Share on ______”) and a local SVG of the service’s icon. Here’s a simplified overview of the markup I use:
<ul class="social-links social-links--share">
  <li class="social-links__item">
    <a href="{{ SHARE URL }}" class="social-link" rel="nofollow">
      <svg>{{ SERVICE ICON }}</svg>
      <b class="social-link__text">Share on {{ SERVICE NAME }}</b>
    </a>
  </li>
</ul>
You can check out the baseline experience on this very page by disabling JavaScript.
It’s worth noting that I have chosen not to enforce opening these links in a new tab. You can do that if you like, but on mobile devices I’d prefer the user just navigate to the share page directly. You may have a different preference, but if you decide to spawn a new tab, be sure your link text lets folks know that’s what will happen. I do include a rel="nofollow" on the link, however, to prevent search spiders from indexing the share forms.
If you test out these links, you’ll notice many of the target forms will pick up a ton of information from your page automatically. By and large, this info is grabbed from your page’s Open Graph data (stored in meta tags) or Linked Data (as JSON-LD). You can write that info to your page by hand or use a plugin to generate it for you automatically. There are a ton of options out there if you choose to go the latter route (which I’d recommend).
If you played around with any of the various share forms, you probably noticed that they are, by and large, designed as discrete interactions best-suited to a narrow window (e.g., mobile) or popup. To provide that experience, I’ve long relied on a little bit of JavaScript to launch them in a new, appropriately-sized window:
function popup( e ) {
  // find the link that was clicked (the target may be the SVG or text)
  var $link = e.target;
  while ( $link.nodeName.toLowerCase() != "a" ) {
    $link = $link.parentNode;
  }
  var popup = window.open( $link.href, 'share',
    'height=500,width=600,status=no,toolbar=no,popup'
  );
  try {
    // if focus() succeeds, the popup wasn’t blocked,
    // so cancel the link’s default navigation
    popup.focus();
    e.preventDefault();
  } catch (e) {}
}

var screen_width = "visualViewport" in window ?
      window.visualViewport.width : window.innerWidth,
    $links = document.querySelectorAll(
      '.social-links--share a'
    ),
    count = $links.length;

if ( screen_width > 600 ) {
  while ( count-- ) {
    $links[count].addEventListener('click', popup, false);
    $links[count].querySelector(
      '.social-link__text'
    ).innerHTML += " (in a popup)";
  }
}
The first chunk defines a new function called popup() that will act as the event listener. It takes the event (e) as an argument and then finds the associated link (bubbling up through the DOM as necessary in that while loop). Once it finds the link, the function opens a new popup window (using window.open()). Then, to check if the popup was blocked, it attempts (within the try…catch) to focus it. If the focus succeeds, which means the popup wasn’t blocked, the script prevents the link from navigating the user to the href (which is the default behavior, hence e.preventDefault()).
The second block defines a couple of variables we’ll need. First, it captures the current screen_width using either the window’s visualViewport (if available) or its innerWidth (which is more old school). Next it grabs the social links ($links) and counts them for looping purposes (count).
The final block is a conditional that checks to see if the screen_width is wider than 600px (an arbitrary width that just feels right… your mileage may vary). If the screen is wider than that threshold, it loops through the links,2 adds the click handler, and adds some text to the link label to let folks know it will open a popup.
And with that, the first layer of enhancement is complete: Users with JavaScript support who also happen to be using a wider browser window will get the popup share form if the popup is allowed. If the popup isn’t allowed, they’ll default to the baseline experience.
A few years back, browsers began participating in OS-level share activities. On one side, this allowed websites to share some data—URLs, text, files—to other apps on the device via navigator.share(). On the other side of the equation, Progressive Web Apps could advertise themselves—via the Manifest’s share_target member—as being able to receive content shared in this way.
Sharing a URL and text is really well supported. That said, it’s only been around a few years at this point and some browsers require an additional permission to use the API.3 For these reasons, it’s best to use the API as a progressive enhancement. Thankfully, it’s easy to test for support:
if ( "share" in navigator ) {
  // all good!
}
For my particular implementation, I’ve decided to swap out the individual links for a single button that, when clicked, will proffer the page’s details over to the OS’s share widget. Here’s the code I use to do that:
var $links = document.querySelector('.social-links--share'),
    $parent = $links.parentNode,
    $button = document.createElement('button'),
    title = document.querySelector('h1.p-name, title').innerText,
    $description = document.querySelector(
      'meta[name="og:description"], meta[name="description"]'
    ),
    text = $description ? $description.getAttribute('content')
                        : '',
    url = window.location.href;

$button.innerHTML = 'Share <svg>…</svg>';
$button.addEventListener('click', function(e){
  navigator.share({ title, text, url });
});

$parent.insertBefore($button, $links);
$links.remove();
The first block sets up my variables:

- the list (ul) of sharing links and its parent element;
- a new button element;
- the page’s title (pulled from the h1 or title element);
- the page’s description (from the meta description element); and
- the page’s URL.

The second block sets up the button by inserting the text “Share” and an SVG share icon and setting an event listener on it that will pass the collected info to navigator.share().
The third and final block swaps out the link list for the button.
The final step to putting this all together involves setting up the conditional that determines which enhancement is offered. To keep everything a bit cleaner, I’m also moving each of the enhancements into its own function:
!(function(window, document, navigator){

  function prepForPopup() {
    // popup code
  }
  function popup() {
    // popup event handler
  }
  function swapForShareAPI() {
    // share button code
  }

  if ( "share" in navigator ) {
    swapForShareAPI();
  } else {
    prepForPopup();
  }

})(this, this.document, this.navigator);
With this setup in place, I can provide the optimal experience in browsers that support the web share API and a pretty decent fallback experience to browsers that don’t. And if none of these enhancements can be applied, users can still share my content to the places I’ve identified… no cookies or third-party widgets required.
You can see (and play with) an isolated demo of this interface over on Codepen.
Interesting side-note: If you own a form like this on your site, it makes a great share target. ↩︎
Why a reverse while loop? Well, the order of execution doesn’t matter and decrementing while loops are faster in some instances. It’s a micro-optimization that boosts perf on older browsers and lower-end chipsets. ↩︎
Like many modern APIs, it also requires a secure connection (HTTPS). ↩︎
Your app should work in a read-only mode without JavaScript.
Lots of juicy stats to share in your team discussions!
I also love how succinctly he nails this section:
So, if progressive enhancement is no more expensive to create, future-proof, provides us with technical credit, and ensures that our users always receive the best possible experience under any conditions, why has it fallen by the wayside?
Because before, when you clicked on a link, the browser would go white for a moment.
JavaScript frameworks broke the browser to avoid that momentary loss of control. They then had to recreate everything that the browser had provided for free: routing, history, the back button, accessibility features, the ability for search engines to read the page, et cetera iterum ad infinitum. Coming up with solutions to these problems has been the fixation of the JavaScript community for years now, and we do have serviceable solutions for all of these — but all together, they create the incredibly complex ecosystem of modern-day JavaScript that so many JavaScript developers bemoan and lament.
All to avoid having a browser refresh for a moment.
In particular, I was impressed with how they held the line on the importance of robustness in tools like this (from the “Choose the right tools and technology” section):
Before their reassessment, the team needs to … allow users who have issues using services with Javascript or have Javascript disabled. The team must build services for all users and cannot depend on client-side Javascript.
Which yielded results in their reassessment:
The panel was impressed that:
- the team has worked around the limitations for progressive enhancement of the service, resulting from the use of a serverless Single Page Application (SPA) architecture, which is a historical technology choice inherited from NHS login
- the team has ensured the service now works for all applicants without requiring JavaScript to be enabled
- the team has used the third party Paycasso identity verification mobile application to automate many parts of the process for validating an identity document, presenting a significant improvement versus the current remote identity check process via video link
- the team has used the no JavaScript route through the service which re-uses the business logic for the SPA route despite the UI forms being separately maintained for both routes
The assessment also provides guidance for further improvements to be made. Love this!
Progressive enhancement doesn’t have to be more work
As of this year, I’ve officially been beating the drum of progressive enhancement for decades. With an “s.” And it’s still a philosophy that is foundational to building resilient, accessible projects on the web. Full stop.
Chris offers a great intro/reminder here. And when you want to dig in more, you should read my book.
The previous iteration of Tipr was built in my hotel room while I was on site doing some consulting for a certain Silicon Valley company. I was rocking a Palm Treo 650 at the time and that day a few of my colleagues had lined up to wait for the release of the very first iPhone. At the time, web apps were the only way to get an “app” on the iPhone as there was no SDK or even an App Store.
I did a lot of PHP development back in the day, so armed with all of the mobile web development best practices of the day, I set about building the site and making it speedy. Some of the notable features of Tipr included:
At the time, most of these approaches were very new. As an industry, we weren’t doing a whole lot to ensure peak performance on mobile because most people’s mobile browsers were pretty crappy. This was the heyday of Usablenet’s “mobile friendly” bolt-on and WAP. Then came Mobile Safari.
The Tipr site has remained largely untouched since I built it in the Summer of 2007. That October, I added a theme switcher that made the site pink for October (Breast Cancer Awareness Month). I added a free text message-based interface using the then-free TextMarks service and a Twitter bot as well. But as far as the web interface went, it remained largely untouched.
Here are a handful of things that have come to the web in the intervening years:
Phew, that’s a lot! While I haven’t made upgrades in all these areas, I did sprinkle in a few, mainly to make it a true PWA and boost performance.
Much of my work over the last few years has been in the world of static site generators (e.g., Jekyll, Eleventy). I’m quite enamored of Eleventy, having used it for a number of projects at this point. Since I know it really well, it made sense to use it for this project too. The installation steps are minimal and I already had a library of configuration options, plugins, and filters to roll with.
While in the process of migrating to Eleventy, I also took the opportunity to update the meta info to reflect current best practices.

I also swapped out the PHP logic that governed the pink color theme for a simple script in the head of every page. Since the color change is an enhancement, rather than a necessity, I didn’t feel like it was something I needed to manage another way.
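That script isn’t shown here, but a minimal sketch of the idea (the class name is my own assumption, not Tipr’s actual code) could be as simple as:

// A minimal sketch (class name assumed): enable the pink theme during
// October, Breast Cancer Awareness Month. Date.prototype.getMonth() is
// zero-indexed, so October is 9.
if ( new Date().getMonth() === 9 ) {
  document.documentElement.classList.add( "theme-pink" );
}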
The greatest challenge in moving Tipr over to a static site was setting up the tip calculation engine, which had been in PHP to ensure it would work even if JavaScript wasn’t available.
When I originally built Tipr, JavaScript on the back-end wasn’t a thing. That’s why the core tip calculation engine was built in PHP. At the time, even XHR was in its infancy, so the fact that I could use PHP to do the calculations for both the server-side—for when JavaScript wasn’t available—and client-side—when it was—was pretty amazing.
Today, JavaScript is ubiquitous across the whole stack, which made it the logical choice for building out the revised tip calculator. As with the original, I needed the calculation to work on the client side if it could—saving a round trip to the server—but to also have the ability to fall back to a traditional form submission if the client-side approach wasn’t feasible. That would be possible by having client-side JavaScript for the form itself, with the server-side piece handled by Netlify’s Edge Functions (integrated through Eleventy’s Edge plugin).
From an architectural standpoint, I really didn’t want to have my logic duplicated in each place, so I began to play around with ensconcing the calculation logic in a single JavaScript include that could be pulled into both the form page itself and a JavaScript module for the Edge Function to use.
You can view Tipr’s source on GitHub, but here’s a basic rundown of the relevant directories and files:
netlify
  edge-functions
    tipr.js
src
  _includes
    js
      tipr.js
  j
    process.njk
  index.html
netlify.toml
src/_includes/js/tipr.js
This file contains the central logic of the tip calculator. It’s written in vanilla JavaScript with the intent that it would be understandable by the widest possible assortment of browsers out there.
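The file itself isn’t reproduced in this post, but based on how the function gets used later (it takes a check amount and a tip percentage and returns check, tip, and total values) and the palindrome behavior described in the footnote below, a hypothetical sketch of its shape might be:

// A hypothetical sketch of the central logic (the real version lives in
// src/_includes/js/tipr.js): compute a baseline tip, then bump the total
// up to the nearest palindrome, returning check, tip & total as strings.
function process( check, percent ) {
  var checkAmount = parseFloat( check ),
      // work in whole cents to sidestep floating-point drift
      totalCents = Math.ceil( checkAmount * ( 100 + parseFloat( percent ) ) );

  function isPalindrome( cents ) {
    var digits = String( cents );
    return digits === digits.split( "" ).reverse().join( "" );
  }

  // nudge the total upward until its digits read the same both ways
  while ( ! isPalindrome( totalCents ) ) {
    totalCents++;
  }

  return {
    check: checkAmount.toFixed( 2 ),
    tip: ( totalCents / 100 - checkAmount ).toFixed( 2 ),
    total: ( totalCents / 100 ).toFixed( 2 )
  };
}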
src/index.html
The homepage of the site is also home to the tip calculation form. Below the form is an embedded script element containing the logic for interacting with the DOM for the client-side version of the tip calculator. I include the logic at the top of that script:
<script>
  {% include "js/tipr.js" %}
  // The rest of the JavaScript logic
</script>
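The rest of that logic isn’t reproduced in the post, but a sketch of its general shape (assuming the check and percent field names the Edge Function reads below) might be:

// A sketch (not Tipr’s actual code) of the client-side enhancement:
// intercept the submission and run the calculation locally, skipping
// the round trip to /process/.
var $form = document.getElementById( "calc" );
$form.addEventListener( "submit", function( e ){
  e.preventDefault();
  var result = process( $form.check.value, $form.percent.value );
  // …then write result.check, result.tip & result.total into the
  // results table in the DOM…
});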
src/j/process.njk
This file exists solely to export the JavaScript logic from the include in a way that can be consumed by the Edge Function. It renders a new JavaScript file called “process.js” and turns the central processing logic into a JavaScript module that Deno can use (Deno powers Netlify’s Edge Functions):
---
layout: false
permalink: /j/process.js
---
{% include "js/tipr.js" %}
export { process };
netlify/edge-functions/tipr.js
We define Edge Functions for use with Netlify in the netlify/edge-functions folder. To make use of the core JavaScript logic in the Edge Function, I can import it from the module created above before using it in the function itself:
import { process } from "./../../_site/j/process.js";

function setCookie( context, name, value ) {
  context.cookies.set({
    name,
    value,
    path: "/",
    httpOnly: true,
    secure: true,
    sameSite: "Lax",
  });
}

export default async ( request, context ) => {
  let url = new URL( request.url );

  // Save the results to cookies, then redirect to the results page
  if ( url.pathname === "/process/" && request.method === "POST" ) {
    if ( request.headers.get("content-type") === "application/x-www-form-urlencoded" ) {
      let body = await request.clone().formData();
      let postData = Object.fromEntries( body );
      let result = process( postData.check, postData.percent );

      setCookie( context, "check", result.check );
      setCookie( context, "tip", result.tip );
      setCookie( context, "total", result.total );

      return new Response( null, {
        status: 302,
        headers: {
          location: "/results/",
        }
      });
    }
  }

  return context.next();
};
What’s happening here is that when a request comes in to this Edge Function, the default export will be executed. Most of this code is directly lifted from Netlify’s Edge Functions demo site. I grab the form data, pass it into the process function, and then set browser cookies for each of the returned values before redirecting the request to the results page.
On that page, I use Eleventy’s Edge plugin to render the check, tip, and total amounts:
{% edge %}
{% set check = eleventy.edge.cookies.check %}
{% set tip = eleventy.edge.cookies.tip %}
{% set total = eleventy.edge.cookies.total %}
<tr id="check">
  <th scope="row">Check</th>
  <td>${{ check }}</td>
</tr>
<tr id="tip">
  <th scope="row">Tip</th>
  <td>${{ tip }}</td>
</tr>
<tr id="total">
  <th scope="row">Total</th>
  <td>${{ total }}</td>
</tr>
{% endedge %}
Side note: The cookies get reset using a separate Edge Function.
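That function isn’t shown in the post, but a hypothetical sketch of one way it could work (using Netlify’s context.cookies.delete(), with the structure assumed) is:

// A hypothetical sketch of a cookie-resetting Edge Function; the real
// one may differ. context.cookies.delete() expires the cookie on the
// outgoing response.
export default async ( request, context ) => {
  [ "check", "tip", "total" ].forEach( ( name ) => {
    context.cookies.delete( name );
  });
  return context.next();
};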
netlify.toml
To wire up the Edge Functions, we put a netlify.toml file in the root of the project. Configuration is pretty straightforward: you tell it the Edge Function you want to use and the path to associate it with. You can choose to associate it with a unique path or run the Edge Function on every path.

Here’s an excerpt from Tipr’s netlify.toml as it pertains to the Edge Function above:
[[edge_functions]]
function = "tipr"
path = "/process/"
This tells Netlify to route requests to /process/ through netlify/edge-functions/tipr.js. Then all that was left to do was wire up the form to use that endpoint as its action:
<form id="calc" method="post" action="/process/">
It took a fair bit of time to figure this all out, but I’m pretty excited by the possibilities of this approach for building more static isomorphic apps. Oh, and the new site… is fast.
Why a palindrome? Well, it makes it pretty easy to detect tip fraud because all restaurant totals will always be the same forwards & backwards. It’s a little easier than a checksum. ↩︎
Covered in this piece are:
Gotta love their guiding principles too:
- The 4 principles of web accessibility help the team to create accessible components and experiences.
- The 7 principles of universal design help the team when making design decisions.
- Progressive enhancement helps the team develop styles, components, patterns and websites that are resilient and accessible across a variety of devices.
🥰
Work to replace GOV.UK’s search infrastructure is due to begin early next year, with the initial phase of the project dedicated to identifying and evaluating products. Such assessments will be focused on “functional and non-functional requirements, based around user needs [and] include accessibility requirements… and progressive enhancement”.
Using progressive enhancement means your users will be able to do what they need to do if any part of the stack fails. Building your service using progressive enhancement will:
- make your service more resilient
- mean your service’s most basic functionality will work and meet the core needs of the user
- improve accessibility by encouraging best practices like writing semantic markup
- help users with device or connectivity limitations to use your service
🥰
The web is filled with uncertainties—browsers, devices, networks. You can’t possibly account for all of the possible variations. On the web, you have to relinquish some control.
…div when you could have just used a button.

It’s kinda the inverse of a piece I wrote for Smashing a few years back: Developing Dependency Awareness.