[Insert Clickbait Headline About Progressive Enhancement Here]
Late last week, Josh Korr, a project manager at Viget, posted at length about what he sees as a fundamental flaw with the argument for progressive enhancement. In reading the post, it became clear to me that Josh really doesn’t have a good grasp on progressive enhancement or the reasons its proponents think it’s a good philosophy to follow. Despite claiming to be “an expert at spotting fuzzy rhetoric and teasing out what’s really being said”, Josh makes a lot of false assumptions and inferences. My response would not have fit in a comment, so here it is…
Before I dive in, it’s worth noting that Josh admits he is not a developer. As such, he can’t really speak to where the rubber meets the road with respect to progressive enhancement. Instead, he focuses on the argument for it, which he sees as a purely moral one… and a flimsy one at that.
I’m also unsure how Josh would characterize me. I don’t think I fit his mold of PE “hard-liners”, but since I’ve written two books and countless articles on the subject and he quotes me in the piece, I’ll go out on a limb and say he probably thinks I’m one.
Ok, enough with the preliminaries, let’s jump over to his piece…
Right out of the gate, Josh demonstrates a fundamental misread of progressive enhancement. If I had to guess, it probably stems from his source material, but he sees progressive enhancement as a moral argument:
It’s a moral imperative that everything on the web should be available to everyone everywhere all the time. Failing to achieve — or at least strive for — that goal is inhumane.
Now he’s quick to admit that no one has ever explicitly said this, but this is his takeaway from the articles and posts he’s read. It’s the sort of harsh, black-and-white, you’re-either-with-us-or-against-us statement that gets people picking sides and lobbing rocks (and other heavy objects) at anyone who disagrees with them. And yet everyone he quotes in the piece as an example of progressive enhancement’s supposed central conceit is very much an “it depends” sort of person.
To clarify, progressive enhancement is neither moral nor amoral. It’s a philosophy that recognizes the nature of the Web as a medium and asks us to think about how to build products that are robust and capable of reaching as many potential customers as possible. It isn’t concerned with any particular technology; it simply asks that we look at each tool we use with a critical eye and consider both its benefits and drawbacks. And it’s certainly not anti-JavaScript.
I could go on, but let’s circle back to Josh’s piece. Right off the bat, he makes some pretty bold claims about what he intends to prove:
- Progressive enhancement is a philosophical, moral argument disguised as a practical approach to web development.
- This makes it impossible to engage with at a practical level.
- When exposed to scrutiny, that moral argument falls apart.
- Therefore, if PEers can’t find a different argument, it’s ok for everyone else to get on with their lives.
For the record, I plan to address his arguments quite practically. As I mentioned, progressive enhancement is not solely founded on morality, though that can certainly be viewed as a facet. The reality is that progressive enhancement is quite pragmatic, addressing the Web as it exists, not as we might hope it exists or as we happen to experience it.
Over the course of a few sections—which I wish I could link to directly, but alas, the headings don’t have unique ids—he examines a handful of quotes and attempts to tease out their hidden meaning by following the LSAT’s Logic Reasoning framework. We’ll start with the first one.
Working without JavaScript
Statement
- “When we write JavaScript, it’s critical that we recognize that we can’t be guaranteed it will run.” — Aaron Gustafson
- “If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold.” — Jeremy Keith
Unstated assumptions:
- Because there is some chance JavaScript won’t run, we must always account for that chance.
- Core tasks can always be achieved without JavaScript.
- It is always bad to ignore some potential users for any reason.
His first attempt at teasing out the meaning of these statements comes close, but ignores some critical word choices. First off, neither Jeremy nor I speak in absolutes. As I mentioned before, we (and the other folks he quotes) all believe that the right technical choices for a project depend on the purpose and goals of that specific project. In other words, it depends. We intentionally avoid absolutist words like “always” (which, incidentally, Josh has no problem throwing around, on his own or on our behalf).
For the development of most websites, the benefits of following a progressive enhancement philosophy far outweigh the cost of doing so. I’m hoping Josh will take a few minutes to read my post on the true cost of progressive enhancement in relation to actual client projects. As a project manager, I hope he’d find it enlightening and useful.
It’s also worth noting that he’s not considering the reason we make statements like this: Many sites rely 100% on JavaScript without needing to. The reasons why sites (like news sites, for instance) are built to be completely reliant on a fragile technology are somewhat irrelevant. But what isn’t irrelevant is that it happens. Often. That’s why I said “it’s critical that we recognize that we can’t be guaranteed it will run” (emphasis mine). A lack of acknowledgement of JavaScript’s fragility is one of the main problems I see with web development today. I suspect Jeremy and everyone else quoted in the post feels exactly the same. To be successful in a medium, you need to understand the medium. And the (sad, troubling, interesting) reality of the Web is that we don’t control a whole lot. We certainly control a whole lot less than we often believe we do.
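To make that concrete, here’s a minimal sketch of what keeping a core task independent of JavaScript can look like. It’s purely illustrative (the markup, the #search and #results ids, and the /search endpoint are all hypothetical): the form works with a full page refresh on its own, and the script, when it actually runs, upgrades the experience in place.

```typescript
// Assumed server-rendered markup (works with a full page refresh on its own):
//   <form id="search" action="/search" method="get">
//     <input type="search" name="q">
//     <button>Search</button>
//   </form>
//   <div id="results"></div>

const form = document.querySelector<HTMLFormElement>("#search");
const results = document.querySelector<HTMLElement>("#results");

// Enhance only if the pieces we depend on are actually present.
if (form && results && "fetch" in window) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault();

    // Serialize the form exactly as the browser would for a GET submission.
    const params = new URLSearchParams();
    new FormData(form).forEach((value, key) => params.append(key, String(value)));

    try {
      // Assumes the same endpoint can return an HTML fragment for in-page use.
      const response = await fetch(`${form.action}?${params}`, {
        headers: { Accept: "text/html" },
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      results.innerHTML = await response.text();
    } catch {
      // The enhancement failed; hand the task back to the browser’s
      // built-in behavior so the user still gets their results.
      form.submit();
    }
  });
}
```

If this script never arrives (blocked, timed out, errored), nothing is lost: the form still submits the old-fashioned way.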
As I mentioned, I disagree with his characterization of the argument for progressive enhancement as a purely moral one. Morality can certainly be one argument for progressive enhancement, and as a proponent of egalitarianism I appreciate that. But it’s not the only one. If you’re in business, there are a few really good business-y reasons to embrace progressive enhancement:
- Legal: Progressive enhancement and accessibility are very closely tied. Whether brought by legitimate groups or opportunists, lawsuits over the accessibility of your web presence can happen; following progressive enhancement may help you avoid them.
- Development Costs: As I mentioned earlier, progressive enhancement is a more cost-effective approach, especially for long-lived projects. Here’s that link again: The True Cost of Progressive Enhancement.
- Reach: The more means by which you enable users to access your products, information, etc., the more opportunities you create to earn their business. Consider that no one thought folks would buy big-ticket items on mobile just a few short years ago. Boy, were they wrong. Folks buy cars, planes, and more from their tablets and smartphones on the regular these days.
- Reliability: When your site is down, not only do you lose potential customers, you run the risk of losing existing ones too. There have been numerous incidents where big sites got hosed because a JavaScript dependency failed and there was no fallback. Progressive enhancement ensures users can always do what they came to your site to do, even if it’s not the ideal experience.
Hmm, no moral arguments for progressive enhancement there… but let’s continue.
Some experience vs. no experience
Statement
- “[With a PE approach,] Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.” — Jeremy Keith
- “If for some reason JavaScript breaks, the site should still work and look good. If the CSS doesn’t load correctly, the HTML content should still be there with meaningful hyperlinks.” — Nick Pettit
Unstated assumptions:
- A clunky experience is always better than no experience.
- HTML content — i.e. text, images, unstyled forms — is the most important part of most websites.
You may be surprised to hear that I have no issue with Josh’s distillation here. Clunky is a bit of a loaded word, but I agree that an experience is better than no experience, especially for critical tasks like checking your bank account, registering to vote, or making a purchase from an online shop. In my book, I talk a little bit about a strange thing we experienced when A List Apart stopped delivering CSS to Netscape Navigator 4 way back in 2001:
We assume that those who choose to keep using 4.0 browsers have reasons for doing so; we also assume that most of those folks don’t really care about “design issues.” They just want information, and with this approach they can still get the information they seek. In fact, since we began hiding the design from non–compliant browsers in February 2001, ALA’s Netscape 4 readership has increased, from about 6% to about 11%.
Folks come to our web offerings for a reason. Sometimes it’s to gather information, sometimes it’s to be entertained, sometimes it’s to make a purchase. It’s in our best interest to remove every potential obstacle that can preclude them from doing that. That’s good customer service.
Project priorities
Statement
- “Question any approach to the web where fancy features for a few are prioritized & basic access is something you’ll ‘get to’ eventually.” — Tim Kadlec
Unstated assumptions:
- Everything beyond HTML content is superfluous fanciness.
- It’s morally problematic if some users cannot access features built with JavaScript.
Not to put words in Tim’s mouth (like Josh is here), but what Tim’s quote is discussing is hype-driven (as opposed to user-centered) design. We (as developers) often prioritize our own convenience/excitement/interest over our users’ actual needs. It doesn’t happen all the time (note I said often), but it happens frequently enough to require us to call it out now and again (as Tim did here).
As for the “unstated assumptions”, I know for a fact that Tim would never call “everything beyond HTML” superfluous. What he is saying is that we should question—as in weigh the pros and cons of—each and every design pattern and development practice we consider. It’s important to do this because there are always tradeoffs. Some considerations that should be on your list include:
- Download speed;
- Time to interactivity;
- Interaction performance;
- Perceived performance;
- Input methods;
- User experience;
- Screen size & orientation;
- Visual hierarchy;
- Aesthetic design;
- Contrast;
- Readability;
- Text equivalents of rich interfaces for visually impaired users and headless UIs;
- Fallbacks; and
- Copywriting.
This list is by no means exhaustive, nor is it in any particular order; it’s what came immediately to mind for me. Some interfaces may have fewer or more considerations, as each is different. And some of these considerations might be in opposition to others, depending on the interface. It’s critical that we consider the implications of our design decisions, weighing them against one another, before we settle on how to proceed. Otherwise we open ourselves up to potential problems, and the cost of changing things only goes up the further into a project we are.

As a project manager, I’m sure Josh understands this reality.
As to the “morally problematic” bit, I’ll refer back to my earlier discussion of business considerations. Sure, morality can certainly be part of it, but I’d argue that it’s unwise to make assumptions about your users regardless. It’s easy to fall into the trap of thinking that all of our users are like us (or like the personas we come up with). My employer, Microsoft, makes a great case for why we should avoid doing this in their Inclusive Design materials.

If you’re in business, it doesn’t pay to exclude potential customers (or alienate current ones).
Erecting unnecessary barriers
Statement
- “Everyone deserves access to the sum of all human knowledge.” — Nick Pettit
- “[The web is] built with a set of principles that — much like the principles underlying the internet itself — are founded on ideas of universality and accessibility. ‘Universal access’ is a pretty good rallying cry for the web.” — Jeremy Keith
- “The minute we start giving the middle finger to these other platforms, devices and browsers is the minute where the concept of The Web starts to erode. Because now it’s not about universal access to information, knowledge and interactivity. It’s about catering to the best of breed and leaving everyone else in the cold.” — Brad Frost
Unstated assumptions:
- What’s on the web comprises the sum of human knowledge.
- Progressive enhancement is fundamentally about universal access to this sum of human knowledge.
- It is always immoral if something on the web isn’t available to everyone.
I don’t think anyone quoted here would argue that the Web (taken in its entirety) is “the sum of all human knowledge”—Nick, I imagine, was using that phrase somewhat hyperbolically. But there is a lot of information on the Web folks should have access to, whether from a business standpoint or a legal one. What Nick, Jeremy, and Brad are really highlighting here is that we often make somewhat arbitrary design & development decisions that can block access to useful or necessary information and interactions.
In my talk Designing with Empathy (slides), I discussed “mystery meat” navigation. I can’t imagine any designer sets out to make their site difficult to navigate, but we are influenced by what we see (and are inspired by) on the web. Some folks took inspiration from web-based art projects like Toyota’s On Toyota’s Mind microsite.

Though probably not directly influenced by On Toyota’s Mind, Yeshiva of Flatbush was certainly influenced by the concept of “experiential” (which is a polite way of saying “mystery meat”) navigation.

That’s a design/UX example, but development is no different. How many Single Page Apps have you seen out there that really didn’t need to be built that way? Dozens? We often put the cart before the horse and decide to build a site using a particular stack or framework without even considering the type of content we’re dealing with or whether that decision is in the best interest of the project or its end users. That goes directly back to Tim’s earlier point.
Progressive enhancement recognizes that experience is a continuum and we all have different needs when accessing the Web. Some are permanent: Low vision or blindness. Some are temporary: Imprecise mousing due to injury. Others are purely situational: Glare when your users are outside on a mobile device or have turned their screen brightness down to conserve battery. When we make our design and development decisions in the service of the project and the users who will access it, everyone wins.
Real answers to real questions
In the next section, Josh tries to say we only discuss progressive enhancement as a moral imperative. Clearly I don’t (and would go further to say no one else who was quoted does either). He argues that ours is “a philosophical argument, not a practical approach to web development”. I call bullshit. As I’ve just discussed in the previous sections, progressive enhancement is a practical, fiscally responsible, developmentally robust philosophical approach to building for the Web.
But let’s look at some of the questions he says we don’t answer:
“Wait, how often do people turn off JavaScript?”
Folks turning off JavaScript isn’t really the issue. It used to be, but that was years ago. I discussed the misconception that this is still a concern a few weeks ago. The real issue is whether or not JavaScript is available. Obviously your project may vary, but the UK government pegged their non-JavaScript usage at 1.1%. The more interesting bit, however, was that only 0.2% of their users fell into the “JavaScript off or no JavaScript support” camp. The remaining 0.9% of their users should have gotten the JavaScript-based enhancement on offer, but didn’t. The potential reasons are myriad. JavaScript is great, but you can’t assume it’ll be available.
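That last 0.9% is the interesting group: their browsers were perfectly capable of running JavaScript, yet the enhancement never reached them (blocked scripts, flaky connections, runtime errors, overzealous proxies, and so on). One lightweight way to respect that reality is a “cuts the mustard”-style capability check: ship the baseline by default, and only opt the page into the enhanced experience once the script has both arrived and confirmed the features it relies on. A rough sketch (the class name and the feature list are illustrative, not a standard):

```typescript
// The document starts in its baseline, no-JavaScript state. We only flag it
// as "enhanced" once this script has (a) actually executed and (b) confirmed
// the features the enhancements rely on. If the script is blocked, times out,
// or throws before this point, users simply keep the working baseline.

const cutsTheMustard =
  "querySelector" in document &&
  "addEventListener" in window &&
  "fetch" in window;

if (cutsTheMustard) {
  // Styles and behavior that assume scripting can key off this class,
  // e.g. html.enhanced .no-js-fallback { display: none; }
  document.documentElement.classList.add("enhanced");
}
```

CSS and widgets that only make sense with scripting can then be scoped to .enhanced, so a missing or failed script leaves the baseline styles and full-page flows intact.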
“I’m not trying to be mean, but I don’t think people in Sudan are going to buy my product.”
This isn’t really a question, but it is the kinda thing I hear every now and then. An even more aggressive and ill-informed version I got was “I sell TVs; blind people don’t watch TV”. As a practical person, I’m willing to admit that your organization probably knows its market pretty well. If your products aren’t available in certain regions, it’s probably not worth your while to cater to folks in those regions. But here’s some additional food for thought:
- When you remove barriers to access for one group, you create opportunities for others. A perfect example of this is the curb cut. Curb cuts were originally created to facilitate folks in wheelchairs getting across the road. In creating curb cuts, we’ve also enabled kids to ride bicycles more safely on the sidewalk, delivery personnel to more easily move large numbers of boxes from their trucks into buildings, and parents to more easily cross streets with a stroller. Small considerations for one group pay dividends to more. What rational business doesn’t want to enable more folks to become customers?
- Geography isn’t everything. I’m not as familiar with specific design considerations for Sudanese users, but since about 97% of Sudanese people are Muslim, let’s tuck into that. Ignoring translations and right-to-left text, let’s just focus on cultural sensitivity. For instance, a photo of a muscular, shirtless guy is relatively acceptable in much of the West, but would be incredibly offensive to a traditional Muslim population. Now your target audience may not be 100% Muslim (nor may your content lend itself to scantily-clad men), but if you are creating sites for mass consumption, knowing this might help you art direct the project better and build something that doesn’t offend potential customers.
Reach is incredibly important for companies and is something the Web enables quite easily. To squander that—whether intentionally or not—would be a shame.
Failures of understanding
Josh spends the next section discussing what he views as failures of the argument for progressive enhancement. He’s, of course, still debating it as a purely moral argument, which I think I’ve disproven at this point, but let’s take a look at what he has to say…
The first “fail” he levels at progressive enhancement proponents is that we “are wrong about what’s actually on the Web.” Josh identifies three primary offerings on the Web:
- Business and personal software, both of which have exploded in use now that software has eaten the world and is accessed primarily via the web
- Copyrighted news and entertainment content (text, photos, music, video, video games)
- Advertising and marketing content
This is the fundamental issue with seeing the Web only through the lens of your own experience. Of course he would list software as the number one thing on the Web—I’m sure he uses Basecamp, Harvest, GitHub, Slack, TeamWork, Google Docs, Office 365, or any of a host of business-related Software as a Service offerings every day. As a beneficiary of fast network speeds, I’m not at all surprised that entertainment is his number two: Netflix, Hulu, HBO Go/Now… It’s great to be financially stable and live in the West. And as someone who works at a web agency, of course advertising would be his number three. A lot of the work Viget (and most other agencies, for that matter) does is marketing-related; nothing wrong with that. But the Web is so much more than this. Here’s just a fraction of the stuff he’s overlooked:
- eCommerce,
- Social media,
- Banks,
- Governments,
- Non-profits,
- Small businesses,
- Educational institutions,
- Research institutions,
- Religious groups,
- Community organizations, and
- Forums.
It’s hard to find figures on anything but porn—which incidentally accounts for somewhere between 4% and 35% of the Web, depending on who you ask—but I have to imagine that these categories he’s overlooked probably account for the vast majority of “pages” on the Web even if they don’t account for the majority of traffic on it. Of course, as of 2014, the majority of traffic on the Web was bots, so…
The second “fail” he identifies is that our “concepts of universal access and moral imperatives… make no sense” in light of “fail” number one. He goes on to provide a list of things he seems to think we want even though advocating for progressive enhancement (and even universal access) doesn’t mean advocating for any of these things:
- All software and copyrighted news/entertainment content accessed via the web should be free, and Netflix, Spotify, HBO Now, etc. should allow anyone to download original music and video files because some people don’t have JavaScript. I’ve never heard anyone say that… ever. Advocating a smart development philosophy doesn’t make you anti-copyright or against making money.
- Any content that can’t be accessed via old browsers/devices shouldn’t be on the web in the first place. No one made that judgement. We just think it behooves you to increase the potential reach of your products and to have a workable fallback in case the ideal access scenario isn’t available. You know, smart business decisions.
- Everything on the web should have built-in translations into every language. This would be an absurd idea given that the number of languages in use on this planet tops 6,500. Even if you consider that 2,000 of those have fewer than 1,000 speakers, it’s still absurd. I don’t know anyone who would advocate for translation into every language.1
- Honda needs to consider a universal audience for its marketing websites even though (a) its offline advertising is not universal, and (b) only certain people can access or afford the cars being advertised. To his first point, Honda actually does offline advertising in multiple languages. They even issue press releases mentioning it: “The newspaper and radio advertisements will appear in Spanish or English to match the primary language of each targeted media outlet.” As for his second argument… making assumptions about target audience and who can or cannot afford your product seems pretty friggin’ elitist; it’s also incredibly subjective. For instance, we did a project for a major investment firm where we needed to support Blackberry 4 & 5 even though there were many more popular smartphones on the market. The reason? They had several high-dollar investors who loved their older phones. You can’t make assumptions.
- All of the above should also be applied to offline software, books, magazines, newspapers, TV shows, CDs, movies, advertising, etc. Oh, I see, he’s being intentionally ridiculous.
I’m gonna skip the third fail since it presumes morality is the only argument progressive enhancement has and then chastises the progressive enhancement community for not spending time fighting for equitable Internet access and net neutrality and against things like censorship (which, of course, many of us actually do).
In his closing section, Josh talks about progressive enhancement moderates and he quotes Matt Griffin on A List Apart:
One thing that needs to be considered when we’re experimenting … is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? … Context, as usual, is everything.
In other words, it depends, which is what we’ve all been saying all along.
I’ll leave you with these facts:
- Progressive enhancement has many benefits, not the least of which are resilience and reach.
- You don’t have to like or even use progressive enhancement, but that doesn’t detract from its usefulness.
- If you subscribe to progressive enhancement, you may have a project (or several) that isn’t really a good candidate for it (e.g., online photo editing software).
- JavaScript is a crucial part of the progressive enhancement toolbox.
- JavaScript availability is never guaranteed, so it’s important to consider offering fallbacks for critical tasks.
- Progressive enhancement is neither moral nor amoral, it’s just a smart way to build for the Web.
Is progressive enhancement necessary to use on every project?
No.
Would users benefit from progressive enhancement if it was followed on more sites than it is now?
Heck yeah.
Is progressive enhancement right for your project?
It depends.
My sincere thanks to Sara Soueidan, Baldur Bjarnason, Jason Garber, and Tim Kadlec for taking the time to give me feedback on this piece.
-
Of course, last I checked, over 55% of the Web was in English and just shy of 12% of the world speaks English, so… ↩
Webmentions
[Insert Clickbait Headline About Progressive Enhancement Here] aaron-gustafson.com/notebook/inser… +@AaronGustafson // Very gracious (undeservedly so)
Great takedown of the Progressive Enhancement blog post I shared yesterday twitter.com/aarongustafson…
you could link to his subheadings with fragmentions ,and progressively enhance the links with a plugin
the funny thing is that title is very click bait-ish.
It was the one bit of snark I allowed myself.
My coworker @AaronGustafson says pretty much all that needs to be said about progressive enhancement: aaron-gustafson.com/notebook/inser…
Excellent post from @AaronGustafson on Progressive Enhancement aaron-gustafson.com/notebook/inser…
“[Insert Clickbait Headline About Progressive Enhancement Here]” – bit.ly/2h3cJ2U @AaronGustafson
Great piece by @AaronGustafson. If you still aren’t sure what #ProgressiveEnhancement is all about, read this.aaron-gustafson.com/notebook/inser…
Thoughtful and well-written response by @AaronGustafson to that PE post the other day, as per his usual. aaron-gustafson.com/notebook/inser…
Great response from @AaronGustafson to an earlier, and fairly flawed post on progressive enhancement. aaron-gustafson.com/notebook/inser…
A progressively enhanced rebuttal by @AaronGustafson aaron-gustafson.com/notebook/inser…
Great article by @AaronGustafson on progressive enhancement. Definitely worth a read. aaron-gustafson.com/notebook/inser…
While I rage about deliberately inflammatory posts, clever folk like @AaronGustafson write eloquent counterposts that do a far better job
twitter.com/aarongustafson…
A must read article by @AaronGustafson aaron-gustafson.com/notebook/inser…
So, so good, Aaron. Thanks for writing it.
Thanks @AaronGustafson for that, esp for pointing out that PE is not anti-JS, but pro-reach.aaron-gustafson.com/notebook/inser…
I think important bits like that get overlooked because everyone seems to want to pick a side. We can (& should) work together.
I have trouble understanding how people tend to see it as a threat to their favourite toys.
’Progressive enhancement is neither moral nor amoral, it’s just a smart way to build for the Web‘ – @AaronGustafson aaron-gustafson.com/notebook/inser…
You should read this if you still had issues understanding what “Progressive Enhancement” is and why it’s so important! <3 @AaronGustafson
twitter.com/AaronGustafson…
Also read @AaronGustafson’s piece. Nice to read a balanced reply, although I generally agree with @joshkorr. aaron-gustafson.com/notebook/inser…
“Is progressive enhancement right for your project? It depends.”@AaronGustafson nicely (re)explains what PE is.buff.ly/2gPHOZQ
This @AaronGustafson response to the Viget article on PE is 🔥 aaron-gustafson.com/notebook/inser…
Great response by @AaronGustafson to Josh Korr’s article. Including “[PE] recognizes that experience is a continuum” aaron-gustafson.com/notebook/inser…
Brilliant article on Progressive Enhancement by @AaronGustafsonaaron-gustafson.com/notebook/inser…
Wonderful reply. Ironic that he set out to use “logic” and failed by his own standards. Nice job.
Loved your rebuttal. Have you posted it to Sidebar? Original article got a lot traction when posted there.
Very good article by @AaronGustafson about why progressive enhancement is, in general, a pretty good idea aaron-gustafson.com/notebook/inser…
This post by @aarongustafson on Progressive Enhancement is far more articulate and less ranty than mine. aaron-gustafson.com/notebook/inser…
Reading this @AaronGustafson post makes me pine for the days when blogs ruled the web and discourse > 140 chars aaron-gustafson.com/notebook/inser…
[Insert Clickbait Headline About Progressive Enhancement Here]aaron-gustafson.com/notebook/inser…@AaronGustafson has much more patience than I do.
Absolutely brilliant piece from @AaronGustafson. Thank you for writing it. #progressiveenhancement
twitter.com/AaronGustafson…
👏 @AaronGustafson dissects a Vidget article in this elaborate post on progressive enhancement. A joy to read: aaron-gustafson.com/notebook/inser… #PE
Thanks!
Beatifully written rebuttal of the latest anti-progressive enhancement logical fallacies by @AaronGustafson: aaron-gustafson.com/notebook/inser… 👏
Pondering if @AaronGustafson’s aaron-gustafson.com/notebook/inser… OR @sonniesedge’s sonniesedge.co.uk/blog/progressi… is a better riposte to that shit PE article
Thankful people like @AaronGustafson take the time to practically articulate the benefits of progressive enhancement aaron-gustafson.com/notebook/inser…
Well said about Progressive Enhancement by @AaronGustafson aaron-gustafson.com/notebook/inser…
Screeching to an end in 2016, take a break & read an essay by @AaronGustafson. Inclusive design is practical & just. aaron-gustafson.com/notebook/inser…
One day I’d love that people like @AaronGustafson no longer have to spend time setting the record straight abt progressive enhancement.
twitter.com/AaronGustafson…
Another good response to that (purposefully inflammatory) article about progressive enhancement: ow.ly/EvzX306VXOX via @AaronGustafson
[Insert Clickbait Headline About Progressive Enhancement Here] buff.ly/2g9OnCz by @AaronGustafson #progressiveenhancement #webdesign
@AaronGustafson drops the mic on the benefits of #ProgressiveEnhancements. #MicDrop #webdev aaron-gustafson.com/notebook/inser…
@AaronGustafson: “PE is pragmatic, addressing the Web as it exists not as we might hope that it exists”…and more bit.ly/2h7Tisz
Excellent and really long read on progressive enhancement by @AaronGustafson. aaron-gustafson.com/notebook/inser…#webdev
“We often put the cart before the horse…” —@AaronGustafson aaron-gustafson.com/notebook/inser…
“…sections—which I wish I could link to directly, but alas, the headings don’t have unique IDs” —@AaronGustafson 👏 aaron-gustafson.com/notebook/inser…
Another discussion of progressive enhancement – aaron-gustafson.com/notebook/inser… via @AaronGustafson #mobile #webdevelopment
Really interesting read, by @AaronGustafson on progressive enhancement & JavaScript: aaron-gustafson.com/notebook/inser…, via @Mendy_1, @reybango
@AaronGustafson on wider perspectives of #ProgressiveEnhancement. Design for web users, not some browsers - aaron-gustafson.com/notebook/inser…
One day I was looking for a checklist for front end development, something to re-use for improving our job and I found this article on Smashing magazine very clear and useful to measure and monitor performance.
It’s a very long article with a lot of insights and resources, so I’m going to highlight here the most important topics. Of course, we are not the IT team so we can’t have full control, but we have the duty to give to the user the best experience or he will buy our products on other ecommerce websites. Gomez studied online shopper behavior and found that 88% of online consumers are less likely to return to a site after a bad experience. The same study found that “at peak traffic times, more than 75% of online consumers left for a competitor’s site rather than suffer delays.”
I still suggest you to read the entire article on Smashing (for designers). Developer MUST READ IT.
Establish a performance culture.
In many organizations, front-end developers know exactly what common underlying problems are and what loading patterns should be used to fix them. However, as long as there is no alignment between dev/design and marketing teams, performance isn’t going to sustain long-term. Why performance matters? Because it has high impacts on business metrics. Case studies and experiments on wpostats.com can prove that:
Goal: Be at least 20% faster than your fastest competitor.
If you prioritize early on which parts are more critical, and define the order in which they should appear, you will also know what can be delayed. Ideally, that order will also reflect the sequence of your CSS and JavaScript imports, so handling them during the build process will be easier. Also, consider what the visual experience should be in “in-between”-states, while the page is being loaded (e.g. when web fonts aren’t loaded yet).
Lara Hogan’s guide on how to approach designs with a performance budget can provide helpful pointers to designers and both Performance Budget Calculator and Browser Calories can aid in creating budgets.
> FOR DESIGNERS: here is the entire book Designing for Performance by Lara Hogan. It seems that all chapters are available for free.
Choose the right metrics.
Not all metrics are equally important. Study which metrics matter most to your application: usually they relate to how fast you can start rendering the most important pixels (and what those are) and how quickly you can provide input responsiveness for those rendered pixels.
Below are some of the metrics worth considering:
The slides available here are very useful for understanding these metrics.
> For developers: do you know how to measure TTI in the Network tab of Chrome DevTools?
Gather data on a device representative of your audience.
Facebook introduced 2G Tuesdays to increase visibility into and sensitivity to slow connections. This made me think about the differences in connection speeds around the world, and I found this article about them very interesting. Anyway, the right approach is to test, test, test! There are a lot of tools; a very good one is Lighthouse, which lives in the Audits tab of Chrome DevTools.
Setting Realistic Goals
100-millisecond response time, 60 fps. RAIL, a user-centered performance model, gives you healthy targets: to hit 60 frames per second, each frame has a budget of roughly 16 ms (1 second ÷ 60 = 16.6 milliseconds). How can you measure this? Open the Rendering tab in Chrome DevTools (inside the console drawer) and turn on the FPS meter. Other targets: SpeedIndex < 1250, TTI < 5s on 3G, critical file size budget < 170 KB.
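If you would rather sample frame rate from code than from DevTools, a crude sketch with `requestAnimationFrame` can log roughly how many frames per second you are getting (an illustration only, not a substitute for the FPS meter or a RUM tool):

```js
// Rough FPS sampling with requestAnimationFrame: count frames per second
// and log the result. Illustrative only; real monitoring should use RUM.
let frames = 0;
let last = performance.now();

function tick(now) {
  frames++;
  if (now - last >= 1000) {
    console.log(`~${frames} fps`); // aim for ~60, i.e. ~16.6 ms per frame
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```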
Speaking about Time To Interactive, it’s a good idea to distinguish between First Interactive and Consistently Interactive to avoid misunderstandings down the line. The former is the earliest point after the main content has rendered at which there is at least a 5-second window where the page is responsive. The latter is the point where the page can be expected to always be responsive to input. Tools such as Calibre, SpeedCurve and Bundlesize can help you keep your budgets in check, and can be integrated into your build process.
Defining The Environment
Progressive enhancement.
Keeping progressive enhancement as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient experiences.
Choose a strong performance baseline.
JavaScript has the heaviest cost of the experience, next to web fonts blocking rendering by default and images often consuming too much memory. With the performance bottlenecks moving away from the server to the client, as developers, we have to consider all of these unknowns in much more detail.
Keep in mind that on a mobile device, you should expect a 4×–5× slowdown compared to desktop machines. Mobile devices have different GPUs, CPUs, memory and battery characteristics. Parse times on mobile are 36% higher than on desktop. So always test on an average device — a device that is most representative of your audience.
Build Optimizations
Set your priorities straight.
Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts and “expensive” modules on the page, such as carousels, complex infographics and multimedia content), and break them down into groups.
> Set up a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for capable browsers (i.e. the enriched, full experience) and the extras (assets that aren’t absolutely required and can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, video players, social media buttons, large images). We published an article on “Improving Smashing Magazine’s Performance,” which describes this approach in detail.
Consider using the “cutting-the-mustard” pattern.
Albeit quite old, the cutting-the-mustard technique can still be used to send the core experience to legacy browsers and an enhanced experience to modern browsers. Be strict in loading your assets: load the core experience immediately, then the enhancements, and then the extras. Note that the technique deduces device capability from browser version, which is no longer something we can do these days.
For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities. That’s where the PRPL pattern could serve as a good alternative. Eventually, using the Device Memory Client Hints header, we’ll be able to target low-end devices more reliably. At the moment of writing, the header is supported only in Blink (that goes for Client Hints in general).
> Since Device Memory also has a JavaScript API, which is already available in Chrome, one option could be to feature-detect based on the API, and fall back to the “cutting the mustard” technique only if it’s not supported.
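A sketch of that idea, assuming you ship a separate “core” and “enhanced” bundle (the file names and the 1 GB cut-off are invented for illustration):

```js
// Prefer the Device Memory API where available; otherwise fall back to a
// classic cutting-the-mustard check based on feature detection.
function loadScript(src) {
  const s = document.createElement('script');
  s.src = src;
  s.defer = true;
  document.head.appendChild(s);
}

if ('deviceMemory' in navigator) {
  // navigator.deviceMemory reports approximate RAM in GiB (Chrome only);
  // 1 GB is an arbitrary cut-off for this sketch.
  loadScript(navigator.deviceMemory > 1 ? '/js/enhanced.js' : '/js/core.js');
} else if ('querySelector' in document && 'addEventListener' in window) {
  // Modern-ish browser but no Device Memory API: cut the mustard on features.
  loadScript('/js/enhanced.js');
} else {
  loadScript('/js/core.js');
}
```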
Parsing JavaScript is expensive, so keep it small (This is more about web apps but always interesting).
When dealing with single-page applications, you might need some time to initialize the app before you can render the page. Look for modules and techniques to speed up the initial rendering time (for example, here’s how to debug React performance, and here’s how to improve performance in Angular), because most performance issues come from the initial parsing time to bootstrap the app.
JavaScript has a cost, but it’s not necessarily the file size that drains performance. Parsing and execution times vary significantly depending on the hardware of a device. On an average phone (Moto G4), parsing time alone for 1 MB of (uncompressed) JavaScript will be around 1.3–1.4s, with 15–20% of all time on mobile spent on parsing. With compiling in play, just prep work on JavaScript takes 4s on average, with around 11s before First Meaningful Paint on mobile. Reason: parse and execution times can easily be 2–5× higher on low-end mobile devices.
An interesting way of avoiding parsing costs is to use binary templates that Ember has recently introduced for experimentation. These templates don’t need to be parsed.
That’s why it’s critical to examine every single JavaScript dependency, and tools like webpack-bundle-analyzer, Source Map Explorer and Bundle Buddy can help you achieve just that. Measure JavaScript parse and compile times: Etsy’s DeviceTiming is a little tool that allows you to instrument your JavaScript to measure parse and execution time on any device or browser. Bottom line: while size matters, it isn’t everything. Parse and compile times don’t necessarily increase linearly when the script size increases.
Are you using tree-shaking, scope hoisting and code-splitting?
Tree-shaking is a way to clean up your build process by only including code that is actually used in production and eliminating unused exports in Webpack. Also, you might want to consider learning how to write efficient CSS selectors as well as how to avoid bloat and expensive styles. Feeling like going beyond that? You can also use Webpack to shorten class names and use scope isolation to rename CSS class names dynamically at compilation time.
Code-splitting is another Webpack feature that splits your code base into “chunks” that are loaded on demand. Also, consider using preload-webpack-plugin, which takes the routes you code-split and then prompts the browser to preload them using `<link rel="preload">` or `<link rel="prefetch">`, hints that we can also apply without using Webpack.
> Where to define split points? By tracking which chunks of CSS/JavaScript are used, and which aren’t. Umar Hansa explains how you can use Code Coverage from DevTools to achieve it.
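For instance, a natural split point is a component that only matters after user interaction; with Webpack, a dynamic `import()` produces a separate chunk that is fetched on demand (the module path and element ids below are hypothetical):

```js
// Load the carousel chunk only when someone actually opens the gallery.
// './carousel.js' is a hypothetical module; Webpack turns this dynamic
// import into a separate chunk that is requested on demand.
document.querySelector('#open-gallery').addEventListener('click', async () => {
  const { initCarousel } = await import('./carousel.js');
  initCarousel(document.querySelector('#gallery'));
});
```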
Take advantage of optimizations for your target JavaScript engine.
Study what JavaScript engines dominate in your user base, then explore ways of optimizing for them.
Make use of script streaming for monolithic scripts. It allows `async` or `defer` scripts to be parsed on a separate background thread once downloading begins, hence in some cases improving page loading times by up to 10%. Practically, use `<script defer>` in the `<head>`, so that the browsers can discover the resource early and then parse it on the background thread.
Caveat: Opera Mini doesn’t support script deferment, so if you are developing for India or Africa, `defer` will be ignored, resulting in blocking rendering until the script has been evaluated.
Client-side rendering or server-side rendering?
In both scenarios, our goal should be to set up progressive booting: Use server-side rendering to get a quick first meaningful paint, but also include some minimal necessary JavaScript to keep the time-to-interactive close to the first meaningful paint.
Do you constrain the impact of third-party scripts?
With all performance optimizations in place, often we can’t control third-party scripts coming from business requirements. Third-party scripts’ metrics aren’t influenced by end-user experience, so too often one single script ends up calling a long tail of obnoxious third-party scripts, hence ruining a dedicated performance effort. To contain and mitigate the performance penalties that these scripts bring along, it’s not enough to just load them asynchronously (probably via `defer`) and accelerate them via resource hints such as `dns-prefetch` or `preconnect`.
As Yoav Weiss explained in his must-watch talk on third-party scripts, in many cases these scripts download resources that are dynamic. The resources change between page loads, so we don’t necessarily know which hosts the resources will be downloaded from and what resources they will be.
What options do we have then? Consider using service workers by racing the resource download against a timeout: if the resource hasn’t responded within a certain time, return an empty response to tell the browser to carry on with parsing the page. You can also log or block third-party requests that aren’t successful or don’t fulfill certain criteria.
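A hedged sketch of that race inside a service worker; the host name and the 4-second budget are placeholders, not recommendations:

```js
// sw.js: race a third-party request against a timeout. If the third party
// is too slow, hand back an empty response so parsing can continue.
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.hostname !== 'slow.thirdparty.example') return; // placeholder host

  const timeout = new Promise(resolve =>
    setTimeout(() => resolve(new Response('', { status: 204 })), 4000)
  );

  event.respondWith(
    Promise.race([fetch(event.request), timeout])
      .catch(() => new Response('', { status: 204 }))
  );
});
```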
Another option is to establish a Content Security Policy (CSP) to restrict the impact of third-party scripts, e.g. disallowing the download of audio or video. The best option is to embed scripts via `<iframe>`, so that the scripts run in the context of the iframe and hence don’t have access to the DOM of the page and can’t run arbitrary code on your domain. Iframes can be further constrained using the `sandbox` attribute, so you can disable any functionality the iframe may do, e.g. prevent scripts from running, prevent alerts, form submission, plugins, access to the top navigation, and so on.
For example, it’s probably going to be necessary to allow scripts to run with `<iframe sandbox="allow-scripts">`. Each of the limitations can be lifted via various `allow` values on the `sandbox` attribute (supported almost everywhere), so constrain them to the bare minimum of what they should be allowed to do. Consider using SafeFrame and Intersection Observer; that would enable ads to be iframed while still dispatching events or getting the information they need from the DOM (e.g. ad visibility).
Assets Optimizations (FOR DESIGNERS AND DEVELOPERS)
Are images properly optimized?
As far as possible, use responsive images with `srcset`, `sizes` and the `<picture>` element. While you’re at it, you could also make use of the WebP format (supported in Chrome, Opera, and soon Firefox) by serving WebP images with the `<picture>` element and a JPEG fallback (see Andreas Bovens’ code snippet) or by using content negotiation (using `Accept` headers).
> Sketch natively supports WebP, and WebP images can be exported from Photoshop using a WebP plugin for Photoshop. Other options are available, too.
> Use the Responsive Image Breakpoints Generator.
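A simplified sketch of that markup, with invented file names and breakpoints:

```html
<!-- WebP where supported, JPEG fallback, with srcset/sizes for responsive sizing.
     File names and breakpoints are placeholders. -->
<picture>
  <source type="image/webp"
          srcset="hero-480.webp 480w, hero-960.webp 960w, hero-1920.webp 1920w"
          sizes="(max-width: 600px) 100vw, 50vw">
  <img src="hero-960.jpg"
       srcset="hero-480.jpg 480w, hero-960.jpg 960w, hero-1920.jpg 1920w"
       sizes="(max-width: 600px) 100vw, 50vw"
       alt="Product hero image">
</picture>
```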
On Smashing Magazine, they use the postfix `-opt` for image names — for example, `brotli-compression-opt.png`; whenever an image contains that postfix, everybody on the team knows that the image has already been optimized.
Take image optimization to the next level.
When you’re working on a landing page on which it’s critical that a particular image loads blazingly fast, make sure that JPEGs are progressive and compressed with Adept, mozJPEG (which improves the start rendering time by manipulating scan levels) or Guetzli, Google’s new open source encoder focusing on perceptual performance, and utilizing learnings from Zopfli and WebP. The only downside: slow processing times (a minute of CPU per megapixel). For PNG, we can use Pingo, and SVGO or SVGOMG for SVG.
Every single image optimization article states it, but keeping vector assets clean and tight is always worth a reminder. Make sure to clean up unused assets, remove unnecessary metadata and reduce the number of path points in artwork (and thus SVG code).
These optimizations so far cover just the basics. Addy Osmani has published a very detailed guide on Essential Image Optimization that goes very deep into details of image compression and color management. For example, you could blur out unnecessary parts of the image (by applying a Gaussian blur filter to them) to reduce the file size, and eventually you might even start removing colors or turn the picture into black and white to reduce the size even further. For background images, exporting photos from Photoshop with 0 to 10% quality can be absolutely acceptable as well.
What about GIFs? Well, instead of loading heavy animated GIFs, which impact both rendering performance and bandwidth, we could potentially use looping HTML5 videos, yet browser performance is slow with `<video>` and, unlike with images, browsers do not preload `<video>` content. At least we can add lossy compression to GIFs with Lossy GIF, gifsicle or giflossy.
Good news: hopefully soon we’ll be able to use `<img src=".mp4">` to load videos, and early tests show that `img` tags display 20× faster and decode 7× faster than the GIF equivalent, in addition to being a fraction of the file size.
Not good enough? Well, you can also improve perceived performance for images with the multiple background images technique. Keep in mind that playing with contrast and blurring out unnecessary details (or removing colors) can reduce file size as well. Ah, you need to enlarge a small photo without losing quality? Consider using Letsenhance.io.
> This article is the bible. It explains how browsers load images, how to optimize them (for designers) and how to use them (for developers).
Delivery Optimizations
Do you limit the impact of JavaScript libraries, and load them asynchronously?
When the user requests a page, the browser fetches the HTML and constructs the DOM, then fetches the CSS and constructs the CSSOM, and then generates a rendering tree by matching the DOM and CSSOM. If any JavaScript needs to be resolved, the browser won’t start rendering the page until it’s resolved, thus delaying rendering. As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The way to do this for scripts is with the `defer` and `async` attributes in HTML.
In practice, it turns out we should prefer `defer` to `async` (at a cost to users of Internet Explorer up to and including version 9, because you’re likely to break scripts for them). Also, as mentioned above, limit the impact of third-party libraries and scripts, especially with social sharing buttons and `<iframe>` embeds (such as maps). Size Limit helps you prevent JavaScript library bloat: if you accidentally add a large dependency, the tool will inform you and throw an error. You can use static social sharing buttons (such as by SSBG) and static links to interactive maps instead.
Are you lazy-loading expensive scripts with Intersection Observer?
If you need to lazy-load images, videos, ad scripts, A/B testing scripts or any other resources, you can use the shiny new Intersection Observer API that provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport. Basically, you need to create a new IntersectionObserver object, which receives a callback function and a set of options. Then we add a target to observe.
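A minimal sketch, assuming images carry their real URL in a `data-src` attribute (a common convention, not something the API requires):

```js
// Swap data-src into src only when an image approaches the viewport.
// rootMargin starts the load slightly before the image becomes visible.
const io = new IntersectionObserver((entries, observer) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;   // real URL stored in data-src
    observer.unobserve(img);     // each image only needs to load once
  });
}, { rootMargin: '200px 0px' });

document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));
```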
You could even take it to the next level by adding progressive image loading to your pages. Similarly to Facebook, Pinterest and Medium, you could load low quality or even blurry images first, and then as the page continues to load, replace them with the full quality versions, using the LQIP (Low Quality Image Placeholders) technique proposed by Guy Podjarny.
Opinions differ on whether the technique improves user experience, but it definitely improves time to first meaningful paint. We can even automate it by using SQIP, which creates a low-quality version of an image as an SVG placeholder. These placeholders can be embedded within the HTML, as they naturally compress well with text compression methods. In his article, Dean Hume has described how this technique can be implemented using Intersection Observer.
Browser support? Decent, with Chrome, Firefox, Edge and Samsung Internet being on board. WebKit status is currently in development. Fallback? If we don’t have support for intersection observer, we can still lazy load a polyfill or load the images immediately. And there is even a library for it.
> In general, it’s a good idea to lazy-load all expensive components, such as fonts, JavaScript, carousels, videos and iframes. You could even adapt content serving based on effective network quality. The Network Information API, and specifically `navigator.connection.effectiveType` (Chrome 62+), uses RTT and downlink values to provide a slightly more accurate representation of the connection and the data that users can handle. You can use it to remove video autoplay, background images or web fonts entirely for connections that are too slow.
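A sketch of that kind of adaptation; the API is Chromium-only, so everything is feature-detected, and the “slow” bucket and the measures taken for it are illustrative choices:

```js
// Adapt what we load to the effective connection type, where the API exists (Chrome 62+).
const conn = navigator.connection;
const slow = conn && ['slow-2g', '2g'].includes(conn.effectiveType);

if (slow) {
  // Skip the heavy extras on constrained connections: no autoplay,
  // plus a class hook CSS can use to drop web fonts or background images.
  document.querySelectorAll('video[autoplay]').forEach(v => v.removeAttribute('autoplay'));
  document.documentElement.classList.add('slow-connection');
}
```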
Do you push critical CSS quickly?
To ensure that browsers start rendering your page as quickly as possible, it’s become common practice to collect all of the CSS required to start rendering the first visible portion of the page (known as “critical CSS” or “above-the-fold CSS”) and add it inline in the `<head>` of the page, thus reducing roundtrips. Due to the limited size of packets exchanged during the slow-start phase, your budget for critical CSS is around 14 KB.
If you go beyond that, the browser will need additional roundtrips to fetch more styles. CriticalCSS and Critical enable you to do just that. You might need to do it for every template you’re using. If possible, consider using the conditional inlining approach used by the Filament Group.
Are you saving data with `Save-Data`?
Especially when working in emerging markets, you might need to consider optimizing the experience for users who choose to opt into data savings. The `Save-Data` client hint request header allows us to customize the application and the payload for cost- and performance-constrained users. In fact, you could rewrite requests for high-DPI images to low-DPI images, remove web fonts and fancy parallax effects, turn off video autoplay and server pushes, or even change how you deliver markup.
The header is currently supported only in Chromium, on the Android version of Chrome or via the Data Saver extension on a desktop device.
> Finally, you can also use service workers and the Network Information API to deliver low/high resolution images based on the network type.
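On the server, the header is easy to check. A hedged Express-style sketch, with invented routes and file names, might serve a lighter image variant to opted-in users:

```js
// Express sketch: serve a more aggressively compressed image variant when
// the "Save-Data: on" client hint is present. Paths are placeholders.
const path = require('path');
const express = require('express');
const app = express();

app.get('/images/hero.jpg', (req, res) => {
  const saveData = (req.get('Save-Data') || '').toLowerCase() === 'on';
  const file = saveData ? 'hero-low.jpg' : 'hero.jpg';
  res.sendFile(path.join(__dirname, 'img', file));
});

app.listen(3000);
```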
Do you warm up the connection to speed up delivery?
Use resource hints to save time on `dns-prefetch` (which performs a DNS lookup in the background), `preconnect` (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), `prefetch` (which asks the browser to request a resource) and `preload` (which prefetches resources without executing them, among other things).
Most of the time these days, we’ll be using at least `preconnect` and `dns-prefetch`, and we’ll be cautious with using `prefetch` and `preload`; the former should only be used if you are very confident about what assets the user will need next (for example, in a purchasing funnel). Notice that `prerender` has been deprecated and is no longer supported.
Note that even with `preconnect` and `dns-prefetch`, the browser has a limit on the number of hosts it will look up/connect to in parallel, so it’s a safe bet to order them based on priority (thanks Philip!).
In fact, using resource hints is probably the easiest way to boost performance, and it works well indeed. When to use what? As Addy Osmani has explained, we should preload resources that we have high confidence will be used on the current page, and prefetch resources likely to be used for future navigations across multiple navigation boundaries, e.g. Webpack bundles needed for pages the user hasn’t visited yet.
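In markup, the hints look something like this; the hosts and file names are placeholders (note the `as` and `crossorigin` attributes on the font preload, which come up again below):

```html
<head>
  <!-- Resolve DNS / open the connection early for origins we will definitely hit. -->
  <link rel="dns-prefetch" href="//cdn.example.com">
  <link rel="preconnect" href="https://fonts.example.com" crossorigin>

  <!-- Preload a late-discovered, render-critical asset for the current page. -->
  <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>

  <!-- Prefetch something we're confident the user needs on the next navigation. -->
  <link rel="prefetch" href="/js/checkout.bundle.js">
</head>
```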
Addy’s article on Loading Priorities in Chrome shows how exactly Chrome interprets resource hints, so once you’ve decided which assets are critical for rendering, you can assign high priority to them. To see how your requests are prioritized, you can enable a “priority” column in the Chrome DevTools network request table (as well as Safari Technology Preview).
For example, since fonts usually are important assets on a page, it’s always a good idea to request the browser to download fonts with `preload`. You could also load JavaScript dynamically, effectively lazy-loading execution. Also, since `<link rel="preload">` accepts a `media` attribute, you could choose to selectively prioritize resources based on `@media` query rules.
A few gotchas to keep in mind: `preload` is good for moving the start download time of an asset closer to the initial request, but preloaded assets land in the memory cache, which is tied to the page making the request. That means preloaded requests cannot be shared across pages. Also, `preload` plays well with the HTTP cache: a network request is never sent if the item is already there in the HTTP cache.
Hence, it’s useful for late-discovered resources, a hero image loaded via `background-image`, inlining critical CSS (or JavaScript) and preloading the rest of the CSS (or JavaScript). Also, a `preload` tag can initiate a preload only after the browser has received the HTML from the server and the lookahead parser has found the `preload` tag. Preloading via the HTTP header is a bit faster, since we don’t have to wait for the browser to parse the HTML to start the request. Early Hints will help even further, enabling preload to kick in even before the response headers for the HTML are sent.
Beware: if you’re using `preload`, `as` must be defined or nothing loads; plus, preloaded fonts without the `crossorigin` attribute will double fetch.
Have you optimized rendering performance?
Isolate expensive components with CSS containment — for example, to limit the scope of the browser’s styles, of layout and paint work for off-canvas navigation, or of third-party widgets. Make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then at least making the frames per second consistent is preferable to a mixed range of 60 to 15. Use CSS’ `will-change` to inform the browser of which elements and properties will change.
Also, measure runtime rendering performance (for example, in DevTools). To get started, check Paul Lewis’ free Udacity course on browser-rendering optimization and Emily Hayman’s article on Performant Web Animations and Interactions.
We also have a lil’ article by Sergey Chikuyonok on how to get GPU animation right. Quick note: changes to GPU-composited layers are the least expensive, so if you can get away with triggering only compositing via `opacity` and `transform`, you’ll be on the right track.
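As a small illustration, the Web Animations API makes it easy to stick to compositor-friendly properties; the selector and timing values are arbitrary:

```js
// Animate only opacity and transform, which the compositor can handle,
// instead of properties that trigger layout (top, left, width, ...).
const card = document.querySelector('.card');   // placeholder element

card.style.willChange = 'transform, opacity';   // hint; use sparingly
card.animate(
  [
    { opacity: 0, transform: 'translateY(12px)' },
    { opacity: 1, transform: 'none' }
  ],
  { duration: 200, easing: 'ease-out', fill: 'forwards' }
);
```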
Have you optimized rendering experience?
While the sequence in which components appear on the page and the strategy of how we serve assets to the browser matter, we shouldn’t underestimate the role of perceived performance, too. The concept deals with the psychological aspects of waiting, basically keeping customers busy or engaged while something else is happening. That’s where perception management, preemptive start, early completion and tolerance management come into play.
What does it all mean? While loading assets, we can try to always be one step ahead of the customer, so the experience feels swift while there is quite a lot happening in the background. To keep the customer engaged, we can use skeleton screens (implementation demo) instead of loading indicators, add transitions/animations and basically cheat the UX when there is nothing more to optimize.
Testing And Monitoring
Have you tested in proxy browsers and legacy browsers?
Testing in Chrome and Firefox is not enough. Look into how your website works in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia (up to 35%). Measure the average Internet speed in your countries of interest to avoid big surprises down the road. Test with network throttling, and emulate a high-DPI device. BrowserStack is fantastic, but test on real devices as well.
Is continuous monitoring set up?
Having a private instance of WebPagetest is always beneficial for quick and unlimited tests. However, a continuous monitoring tool with automatic alerts will give you a more detailed picture of your performance. Set your own user-timing marks to measure and monitor business-specific metrics. Also, consider adding automated performance regression alerts to monitor changes over time.
Look into using RUM solutions to monitor changes in performance over time. For automated, unit-test-like load testing, you can use k6 with its scripting API.
> Also, look into SpeedTracker, Lighthouse and Calibre.
Quick Wins
This list is quite comprehensive, and completing all of the optimizations might take quite a while. So, if you had just 1 hour to get significant improvements, what would you do? Let’s boil it all down to 10 low-hanging fruits. Obviously, before you start and once you finish, measure results, including start rendering time and SpeedIndex on a 3G and cable connection.
Among the quick wins: inline critical CSS in the `<head>` of the page (your budget is 14 KB); for CSS/JS, operate within a critical file size budget of max. 170 KB gzipped (0.8–1 MB decompressed); and warm up connections to required origins with `dns-lookup`, `preconnect`, `prefetch` and `preload`.