This is great news! Microsoft, Google, the W3C, and Samsung are all joining Mozilla in the maintenance and curation of MDN. Finally, we’ll have one always up-to-date source of docs on web standards!
The Best of the Internets
It’s apparently a week for historical reading. I wrote about the history of CSS Grid, specifically, and Jason Hoffman wrote about the history of CSS writ large.
Your knowledge and experience are valuable, no matter where you are in your career; you should share that knowledge with others. The web is what it is today because we shared our code and learned from each other. Be a part of that legacy. Brandon Gregory will show you the way.
Excellent new piece from Manuel. So much great stuff to digest!
Whereas in previous years it seemed like images were the culprit, it looks like video is becoming a major source of bloat now.
Excellent post from Henrik Joreteg on PWAs and why they (and the Web) matter. I wish I could have seen the talk this post is based on.
Beware the “weak signifier”:
When we compared average number of fixations and average amount of time people spent looking at each page, we found that:

* The **average amount of time** was significantly higher on the weak-signifier versions than the strong-signifier versions. On average participants spent **22% more time** (i.e., slower task performance) looking at the pages with weak signifiers.
* The **average number of fixations** was significantly higher on the weak-signifier versions than the strong-signifier versions. On average, people had **25% more fixations** on the pages with weak signifiers.
* (Both findings were significant by a paired t-test with sites as the random factor, p < 0.05.)

This means that, when looking at a design with weak signifiers, **users spent more time looking at the page, and they had to look at more elements on the page**. Since this experiment used targeted findability tasks, **more time and effort spent looking around the page are not good**. These findings don’t mean that users were more “engaged” with the pages. Instead, they suggest that participants struggled to locate the element they wanted, or weren’t confident when they first saw it.
Syb Wartna shares what he learned from refactoring an airplane seating chart using progressive enhancement.
This piece offers some really great ideas for progressively enhancing academic papers in the digital space. For example:
1. Start with embedding a lightweight static figure (a snapshot) of the key output of the code. This should represent whatever state the author deems fit to best convey the key finding/narrative contribution of the code in question. This will only serve as the minimum viable experience for skimming purposes (casual engagement), but also as a safe baseline for when the content is being accessed through less capable devices, as a printable/PDF-compatible output, and as a valid snapshot of what state the data was in when it was peer-reviewed (where applicable).
2. Allow the user to switch the static figure to an interactive output where supported, providing whatever level of UI is needed to appreciate the output in full.
3. Where appropriate, allow the user to dig behind the output of the interactive figure and directly look at the code behind it. You may at this stage allow minor edits to the algorithm and the ability to run it again in-situ to view the output.
4. If the user wants to engage further, for example intending to fork or modify the code, or do anything more complex, provide links to the most appropriate external resource where the user can benefit from a more appropriate environment or UI to do their work (e.g., the original GitHub and/or data repository, or an online IDE).
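The tiered enhancement described above boils down to a capability check: always start from the static snapshot, and only step up to richer experiences when the environment supports them. Here's a minimal, hypothetical sketch of that decision (all names here are mine, not from the original piece):

```typescript
// Hypothetical sketch of the tiered progressive-enhancement decision.
// The names (FigureTier, Capabilities, pickFigureTier) are illustrative.

type FigureTier = "static" | "interactive" | "editable";

interface Capabilities {
  scriptingEnabled: boolean; // can we render the interactive output at all?
  canRunCode: boolean;       // can we execute the figure's code in-situ?
}

// Choose the richest tier the current environment supports,
// falling back to the static snapshot as the safe baseline.
function pickFigureTier(caps: Capabilities): FigureTier {
  if (!caps.scriptingEnabled) return "static";   // step 1: snapshot only
  if (!caps.canRunCode) return "interactive";    // step 2: interactive figure
  return "editable";                             // step 3: editable, re-runnable code
}
```

Step 4 (forking or deeper work) is just a link out to an external environment, so it doesn't need a tier of its own — every tier can carry that link.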