The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.
If we’re writing server-side software in Python or Ruby or even PHP, one of two things is true:
- We control the server environment: operating system, language versions, packages, etc.; or
- We don’t control the server environment, but we have knowledge of it and can author our program accordingly so it will execute as anticipated.
In the more traditional installed software world, we can similarly control the environment by placing certain restrictions on what operating systems our code can run on and what the dependencies for its use may be in terms of hard drive space and RAM required. We provide that information up front and users can choose to use our software or use a competing product based on what will work for them.
On the Web, however, all bets are off. The Web is ubiquitous. The Web is messy. And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.
All we can do is author a compelling, adaptive experience, cross our fingers, and hope for the best.
The fact is that we can’t absolutely rely on the availability of any specific technology when it comes to delivering a Web experience. Instead, we must look at how we construct that experience and make smarter decisions about how we use specific technologies in order to take advantage of their benefits while simultaneously understanding that their availability is not guaranteed. This is why progressive enhancement is such a useful philosophy.
Daniel shared a few examples in his deck, but I couldn’t wait to take his tool and fire it up on a bunch of random browsers and devices that I have sitting around.
For this test, I decided to profile just jQuery 2.1.1, which weighs in at 88kb when minified. jQuery was selected for its popularity, not because it’s the worst offender. There are many libraries much worse (hey there, Angular, and your 120kb payload). The results below are based on the median times taken from 20 tests per browser/device combination.
The list of tested devices isn’t exhaustive by any means—I just took some of the ones I have sitting around to try and get a picture of how much parse and execution time would vary.
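For the curious, here’s roughly how a test like this can be constructed. This is a sketch of the general idea, not the exact harness behind the numbers below: it assumes the script text has been fetched ahead of time, so network transfer is excluded and only the browser’s parse/execute work is being timed.

```js
// Rough sketch of the measurement approach (assumed, not the actual harness).
// `source` holds the text of jquery.min.js, fetched ahead of time so network
// time is excluded from the measurement.
function timeInjection(source) {
  var start = Date.now(); // performance.now() isn't available on many older devices
  var script = document.createElement('script');
  script.text = source;             // inline scripts are parsed and executed synchronously
  document.head.appendChild(script);
  return Date.now() - start;
}

// One common trick for isolating parse time: wrap the payload in a function
// that is never called. The browser still has to parse it, but nothing runs.
// (Engines that lazily parse function bodies can muddy this, so treat the
// result as an approximation.)
function timeParseOnly(source) {
  return timeInjection('function __neverCalled(){\n' + source + '\n}');
}

// Execution time is then approximated as (total - parse-only), and the whole
// thing is repeated—20 runs per browser/device here—taking the median.
```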
Parse and execution times of minified jQuery 2.1.1

| Device | Browser | Median Parse | Median Execution | Median Total |
| --- | --- | --- | --- | --- |
| Blackberry 9650 | Default, BB6 | 171ms | 554ms | 725ms |
| UMX U670C | Android 2.3.6 Browser | 168ms | 484ms | 652ms |
| Galaxy S3 | Chrome 32 | 39ms | 297ms | 336ms |
| Galaxy S3 | UC 8.6 | 45ms | 215ms | 260ms |
| Galaxy S3 | Dolphin 10 | 2ms | 222ms | 224ms |
| Kindle Touch | Kindle 3.0+ | 63ms | 132ms | 195ms |
| Geeksphone Peak | Firefox 25 | 51ms | 109ms | 160ms |
| Kindle Fire | Silk 3.17 | 16ms | 139ms | 155ms |
| Lumia 520 | IE10 | 97ms | 56ms | 153ms |
| Nexus 4 | Chrome 36 | 13ms | 122ms | 135ms |
| Galaxy S3 | Android 4.1.1 Browser | 3ms | 125ms | 128ms |
| Kindle Paperwhite | Kindle 3.0+ | 43ms | 71ms | 114ms |
| Lumia 920 | IE10 | 70ms | 37ms | 107ms |
| Droid X | Android 2.3.4 Browser | 6ms | 96ms | 102ms |
| Nexus 5 | Chrome 37 | 11ms | 81ms | 92ms |
| iPod Touch | iOS 6 | 26ms | 37ms | 63ms |
| Nexus 5 | Firefox 32 | 20ms | 41ms | 61ms |
| Asus X202E | IE10 | 31ms | 14ms | 45ms |
| iPad Mini | iOS 6 | 16ms | 30ms | 46ms |
| MacBook Air (2014) | Chrome 37 | 5ms | 29ms | 34ms |
| MacBook Air (2014) | Opera 9.8 | 14ms | 5ms | 19ms |
| iPhone 5s | iOS 7 | 2ms | 16ms | 18ms |
| MacBook Air (2014) | Firefox 31 | 4ms | 10ms | 14ms |
| iPad (4th Gen) | iOS 7 | 1ms | 13ms | 14ms |
| iPhone 5s | Chrome 37 | 2ms | 8ms | 10ms |
| MacBook Air (2014) | Safari 7 | 1ms | 4ms | 5ms |
As you can see from the table above, even in this small sample the parsing and execution times varied dramatically from device to device and browser to browser. On powerful devices, like my MacBook Air (2014), parse and execution time was negligible. Powerful mobile devices like the iPhone 5s also fared very well.
But as soon as you moved away from the latest and greatest top-end devices, the ugly truth of JS parse and execution time started to rear its head.
On a Blackberry 9650 (running BB6), the combined time to parse and execute jQuery was a whopping 725ms. My UMX running Android 2.3.6 took 652ms. Before you laugh off this little device running the 2.3.6 browser, it’s worth mentioning I bought this a month ago, brand new. It’s a device actively being sold by a few prepaid networks.
Another interesting note was how significant an impact hardware has on the timing. The Lumia 520, despite running the same browser as the 920, had a median parse and execution time that was 46% slower than the 920’s. The Kindle Touch, despite running the same browser as the Paperwhite, was 71% slower than its more powerful replacement. How powerful the device was, not just the browser, had a large impact.
This is notable because we’re seeing companies such as Mozilla and Google targeting emerging markets with affordable, low-powered devices that otherwise run modern browsers. Those markets are going to dominate internet growth over the next few years, and affordability is a more necessary feature than a souped-up device.
In addition, as the cost of technology falls, we’re going to continue seeing an incredibly diverse set of connected devices. With endless new form factors being released (even the Android Wear watches quickly got a Chromium-based browser), the adage about not knowing where our sites will end up has never been more true.
The truly frightening thing about these parse and execution times is that this is for the latest version of jQuery, and only the latest version of jQuery. No older versions. No additional plugins or frameworks. According to the latest run of HTTP Archive, the median JS transfer size is 230kb, and this test covers just a fraction of that. I’m not even asking jQuery to actually do anything. Basically, I’m lobbing the browsers a softball here—these are best-case results.
So what’s a web developer to do?
Render on the server

If you’re using a client-side MVC framework, make sure you pre-render on the server. If you build a client-side MVC framework and you’re not ensuring those templates can easily be rendered on the server as well, you’re being irresponsible. That’s a bug. A bug that impacts performance, stability, and reach.
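To make that concrete, here’s a minimal sketch of the idea using Node and Express. The template function and route are hypothetical; the point is simply that the same template produces real HTML on the server before any client-side framework boots up.

```js
// A minimal sketch of server pre-rendering (illustrative names throughout).
var express = require('express');
var app = express();

// The same template function can run on the server and in the browser.
function itemTemplate(items) {
  return '<ul>' + items.map(function (item) {
    return '<li>' + item.name + '</li>';
  }).join('') + '</ul>';
}

app.get('/items', function (req, res) {
  var items = [{ name: 'First' }, { name: 'Second' }];
  // The server sends real, usable HTML up front; client-side code can later
  // take over the same markup and re-render with the same template.
  res.send('<!DOCTYPE html><html><body>' + itemTemplate(items) + '</body></html>');
});

app.listen(3000);
```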
There are certainly cases to be made for JS libraries, client-side MVC frameworks, and the like, but providing a quality, performant experience across a variety of devices and browsers requires that we take special care to ensure that the initial rendering is not reliant on them. Frameworks and libraries should be carefully considered additions, not the default.
When you consider the combination of weight, parse time and execution time, it becomes pretty clear that optimizing your JS and reducing your site’s reliance on it is one of the most impactful optimizations you can make.
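One practical way to act on that is to keep the core experience in server-rendered HTML and load heavier scripts conditionally. The snippet below uses the “cutting the mustard” feature test popularized by the BBC News team; the bundle path is hypothetical.

```js
// Core content is already in the HTML. Enhancements load only in browsers
// that pass a basic capability check ("cutting the mustard").
if ('querySelector' in document && 'addEventListener' in window) {
  var script = document.createElement('script');
  script.src = '/js/enhancements.js'; // hypothetical enhancement bundle
  script.async = true;
  document.head.appendChild(script);
}
// Browsers that fail the test never download, parse, or execute the extra
// JS—they just get the working server-rendered page.
```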
But first let’s take a trip back in time to 2003. In March of that year, Steven Champeon introduced a concept he called “progressive enhancement”. It caused a bit of an upheaval at the time because it challenged the dominant philosophy of graceful degradation. Just so we’re all on the same page, I’ll compare these two philosophies.
What’s graceful degradation?
Overall, graceful degradation is about risk avoidance. The problem was that it created a climate on the Web where we, as developers, were actively denying access to services (e.g., people’s bank accounts) because we deemed a particular browser (or browsers) too difficult to work with. Or, in many cases, we just didn’t have the time or budget (or both) to address the broadest number of browsers. It’s kind of hard to reconcile the challenge of cross-browser development in 2003 with what we are faced with today, as we were only really dealing with 2–3 browsers back then, but you need to remember that standards support was far worse at the time.

So what’s progressive enhancement?

In his talk, Steven upended the generally shared perspective that older browsers deserved a worse experience because they were less technically capable. He asked us to look beyond the browsers and the technologies in play and focus on the user experience, challenging us to design inclusive experiences that would work in the broadest of scenarios. He asked that we focus on the content and core tasks in a given interface and then enhance the experience when we could. We accomplish this by layering experiences on top of one another, hence “progressive enhancement”.

To give a simple example, consider a form field for entering your email address. If we were to mark it up like this

```html
<input type="email" name="email" id="email">
```

I automatically create layers of experience with no extra effort:

1. Browsers that don’t understand “email” as a valid input type will treat the “email” text as a typo in my HTML (like when you type “rdio” instead of “radio”… or maybe I’m the only one that does that). As a result, they will fall back to the default input type of “text”, which is usable in every browser that supports HTML2 and up.
2. Browsers that consider “email” a valid input type will treat the attribute as a signal that they should validate the field for proper email address formatting.

That means that there are between 5 and 13 potential experiences (given all of the different possible combinations of these layers) in this one single element… it’s kind of mind-boggling to think about, right? And the clincher here is that any of these experiences can be a good experience. Heck, for nearly 15 years of the Web, the plain-ol’ text input was all we had for collecting email addresses, and it worked just fine.
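Incidentally, if you ever need to know which of those layers a given browser will provide, the classic feature-detection idiom makes the fallback behavior easy to observe (a general technique, not something specific to this example):

```js
// Create an input and ask for type="email". Browsers that don't support it
// silently fall back to "text"—the exact layering described above.
var input = document.createElement('input');
input.setAttribute('type', 'email');
var supportsEmailType = (input.type === 'email');

console.log(supportsEmailType
  ? 'This browser will validate email formatting for us.'
  : 'This browser falls back to a plain text input.');
```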
Late last week, Josh Korr, a project manager at Viget, posted at length about what he sees as a fundamental flaw with the argument for progressive enhancement. In reading the post, it became clear to me that Josh really doesn’t have a good grasp on progressive enhancement or the reasons its proponents think it’s a good philosophy to follow. Despite claiming to be “an expert at spotting fuzzy rhetoric and teasing out what’s really being said”, Josh makes a lot of false assumptions and inferences. My response would not have fit in a comment, so here it is…
Before I dive in, it’s worth noting that Josh admits that he is not a developer. As such, he can’t really speak to the bits where the rubber really meets the road with respect to progressive enhancement. Instead, he focuses on the argument for it, which he sees as a purely moral one… and a flimsy one at that.
I’m also unsure as to how Josh would characterize me. I don’t think I fit his mold of PE “hard-liners”, but since I’ve written two books and countless articles on the subject and he quotes me in the piece, I’ll go out on a limb and say he probably thinks I am.
Ok, enough with the preliminaries, let’s jump over to his piece…
Right out of the gate, Josh demonstrates a fundamental misread of progressive enhancement. If I had to guess, it probably stems from his source material, but he sees progressive enhancement as a moral argument:
It’s a moral imperative that everything on the web should be available to everyone everywhere all the time. Failing to achieve — or at least strive for — that goal is inhumane.
Now he’s quick to admit that no one has ever explicitly said this, but this is his takeaway from the articles and posts he’s read. It’s a pretty harsh, black-and-white, you’re-either-with-us-or-against-us sort of statement, the kind that has so many people picking sides and lobbing rocks and other heavy objects at anyone who disagrees with them. And yet everyone he quotes in the piece as an example of what he thinks is progressive enhancement’s central conceit is much more of an “it depends” sort of person.
I could go on, but let’s circle back to Josh’s piece. Off the bat he makes some pretty bold claims about what he intends to prove in this piece:
- Progressive enhancement is a philosophical, moral argument disguised as a practical approach to web development.
- This makes it impossible to engage with at a practical level.
- When exposed to scrutiny, that moral argument falls apart.
- Therefore, if PEers can’t find a different argument, it’s ok for everyone else to get on with their lives.
For the record, I plan to address his arguments quite practically. As I mentioned, progressive enhancement is not solely founded on morality, though that can certainly be viewed as a facet. The reality is that progressive enhancement is quite pragmatic, addressing the Web as it exists not as we might hope that it exists or how we experience it.
Over the course of a few sections—which I wish I could link to directly, but alas, the headings don’t have unique `id`s—he examines a handful of quotes and attempts to tease out their hidden meaning by following the LSAT’s Logical Reasoning framework. We’ll start with the first one.
- It is always bad to ignore some potential users for any reason.
His first attempt at teasing out the meaning of these statements comes close, but ignores some critical word choices. First off, neither Jeremy nor I speak in absolutes. As I mentioned before, we (and the other folks he quotes) all believe that the right technical choices for a project depend specifically on the purpose and goals of that specific project. In other words, it depends. We intentionally avoid absolutist words like “always” (which, incidentally, Josh has no problem throwing around, on his own or on our behalf).
For the development of most websites, the benefits of following a progressive enhancement philosophy far outweigh the cost of doing so. I’m hoping Josh will take a few minutes to read my post on the true cost of progressive enhancement in relation to actual client projects. As a project manager, I hope he’d find it enlightening and useful.
As I mentioned, I disagree with his characterization of the argument for progressive enhancement being a moral one. Morality can certainly be one argument for progressive enhancement, and as a proponent of egalitarianism I certainly see that. But it’s not the only one. If you’re in business, there are a few really good business-y reasons to embrace progressive enhancement:
- Legal: Progressive enhancement and accessibility are very closely tied. Whether brought by legitimate groups or opportunists, lawsuits over the accessibility of your web presence can happen; following progressive enhancement may help you avoid them.
- Development Costs: As I mentioned earlier, progressive enhancement is a more cost-effective approach, especially for long-lived projects. Here’s that link again: The True Cost of Progressive Enhancement.
- Reach: The more means by which you enable users to access your products, information, etc., the more opportunities you create to earn their business. Consider that no one thought folks would buy big-ticket items on mobile just a few short years ago. Boy, were they wrong. Folks buy cars, planes, and more from their tablets and smartphones on the regular these days.
Hmm, no moral arguments for progressive enhancement there… but let’s continue.
Some experience vs. no experience
- “[With a PE approach,] Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.” — Jeremy Keith
- A clunky experience is always better than no experience.
- HTML content — i.e. text, images, unstyled forms — is the most important part of most websites.
You may be surprised to hear that I have no issue with Josh’s distillation here. “Clunky” is a bit of a loaded word, but I agree that an experience is better than no experience, especially for critical tasks like checking your bank account, registering to vote, or making a purchase from an online shop. In my book, I talk a little bit about a strange thing we experienced when A List Apart stopped delivering CSS to Netscape Navigator 4 way back in 2001:
We assume that those who choose to keep using 4.0 browsers have reasons for doing so; we also assume that most of those folks don’t really care about “design issues.” They just want information, and with this approach they can still get the information they seek. In fact, since we began hiding the design from non–compliant browsers in February 2001, ALA’s Netscape 4 readership has increased, from about 6% to about 11%.
Folks come to our web offerings for a reason. Sometimes it’s to gather information, sometimes it’s to be entertained, sometimes it’s to make a purchase. It’s in our best interest to remove every potential obstacle that can preclude them from doing that. That’s good customer service.
- “Question any approach to the web where fancy features for a few are prioritized & basic access is something you’ll ‘get to’ eventually.” — Tim Kadlec
- Everything beyond HTML content is superfluous fanciness.
Not to put words in Tim’s mouth (like Josh is here), but what Tim’s quote is discussing is hype-driven (as opposed to user-centered) design. We (as developers) often prioritize our own convenience/excitement/interest over our users’ actual needs. It doesn’t happen all the time (note I said often), but it happens frequently enough to require us to call it out now and again (as Tim did here).
As for the “unstated assumptions”, I know for a fact that Tim would never call “everything beyond HTML” superfluous. What he is saying is that we should question—as in weigh the pros and cons of—each and every design pattern and development practice we consider. It’s important to do this because there are always tradeoffs. Some considerations that should be on your list include:
- Download speed;
- Time to interactivity;
- Interaction performance;
- Perceived performance;
- Input methods;
- User experience;
- Screen size & orientation;
- Visual hierarchy;
- Aesthetic design;
- Text equivalents of rich interfaces for visually impaired users and headless UIs;
- Fallbacks.
This list is by no means exhaustive, nor is it in any particular order; it’s what came immediately to mind for me. Some interfaces may have fewer or more considerations, as each is different. And some of these considerations might be in opposition to others depending on the interface. It’s critical that we consider the implications of our design decisions by weighing them against one another before we make any sort of decision about how to progress. Otherwise we open ourselves up to potential problems, and the cost of changing things goes up the further into a project we are.