Eliminating render-blocking resources is one of the most important web performance optimizations. It's also one of the most common suggestions given by Google's PageSpeed Insights.
In this article, we're going in-depth on render-blocking resources: what they are, why they slow down your pages, how to find them, and how to remove or optimize them.
Let's get started.
Render-blocking resources are files that the browser must download, parse, and execute before it can do anything else, including rendering the page.
The two types of resources that Lighthouse (the technology that powers PageSpeed Insights) flags as render-blocking are CSS and JavaScript files:
Stylesheets are render-blocking when they don't have a disabled attribute or a media attribute that matches the user's device;
Script tags are flagged when they're placed in the page's head tag without a defer or async attribute.
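To make those two rules concrete, here's a sketch of a page head with a blocking and a non-blocking example of each resource type (file names are illustrative):

```html
<head>
  <!-- Render-blocking: no media attribute, so it applies to every device -->
  <link rel="stylesheet" href="styles.css">
  <!-- Not render-blocking on screens: the media query doesn't match,
       so the browser downloads it at low priority without blocking -->
  <link rel="stylesheet" href="print.css" media="print">

  <!-- Render-blocking: a plain script in the head stops HTML parsing -->
  <script src="app.js"></script>
  <!-- Not render-blocking: defer lets parsing continue -->
  <script defer src="other.js"></script>
</head>
```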
More on why these resources block rendering in a bit.
Other common resources like images are not render-blocking. However, you still need to optimize and lazy load them for optimal performance.
To better understand why render-blocking resources are a big problem, let's see what happens when they're left unchecked.
The Critical Rendering Path outlines the steps browsers take to visualize a page, or in other words, turn HTML, CSS, and JavaScript into pixels on the user's device.
The path includes a few different moments - Document Object Model (DOM), CSS Object Model (CSSOM), render tree, and layout. Each step is pretty complicated, so if you want a deep technical understanding, check out our guide on optimizing the Critical Rendering Path.
For now, it's important to remember that as the browser goes through a page's HTML, it may encounter external scripts and stylesheets. When that happens, the browser must stop what it's doing, download, parse, and execute them.
As a result, crucial actions like visualizing (rendering) the page can't be performed by the browser's Main Thread. That's why these resources are called render-blocking.
The more of them we serve, the slower content appears on users' screens. That's one of the dangers of working with large scripts and stylesheets.
The worst that can happen when you load a lot of render-blocking resources is losing visitors to slow load times.
On top of that, your PageSpeed Insights score will take a hit as well.
How so?
The PSI performance score is based on five metrics, each of which has a different weight: First Contentful Paint, Speed Index, Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift.
Loading a lot of render-blocking scripts directly affects three of these metrics: First Contentful Paint, Speed Index, and Largest Contentful Paint.
If you’ve been struggling with improving your performance score, now you know what might have been the culprit.
Let’s see how to identify the render-blocking resources.
A lot of tools can help you debug issues with render-blocking resources. Here, we'll only focus on Google's PageSpeed Insights (PSI) and WebPageTest since they're more than enough to locate problematic files and estimate their impact.
PSI has an audit that detects render-blocking resources. You can find it in the Opportunities section.
If you click on the suggestion, you'll see which files you need to remove or optimize, plus the potential load time savings.
WebPageTest takes things further by providing a waterfall chart. You can find it by testing a page, scrolling down, and clicking on any of the three test runs.
The render-blocking resources are marked by a small "X" to the left of their name.
This lets you see their impact on load time, plus the order in which browsers handle them.
Now that you know how to find problematic scripts and stylesheets, it's time to optimize them.
There are three optimization categories when it comes to render-blocking resources:
CSS optimizations;
JavaScript optimizations;
Best practices for both CSS and JavaScript optimization.
Let's start with CSS.
When it comes to understanding web performance (and online experiences in general), it's essential to make the distinction between above-the-fold and below-the-fold content.
Above the fold is what users see without scrolling. For example, this is our website's above-the-fold section on a 13.3-inch (2560x1600) display.
Since users want (and expect) quick load times, we need to serve this upper part of the site instantly. In fact, we can even deprioritize the lower half for a bit to prioritize above-the-fold elements.
If we do that, our page can feel faster for visitors, even though the full load time might not be incredibly fast. In other words, we can improve the page's perceived performance.
To pull that off, we need to remove or minimize render-blocking CSS. One technique that lets us do that is Critical CSS.
Critical CSS (or critical path CSS) is the CSS applied to above the fold elements. Put simply, it's the CSS responsible for the content that's immediately visible when a user opens a page.
The process by which we can implement Critical CSS involves three steps:
Finding and extracting the CSS responsible for above the fold content on different viewports;
Inlining that CSS in the page's head tag;
Deferring the non-critical CSS.
When implemented correctly, this process removes render-blocking stylesheets and improves perceived performance.
For the first step, you can use Chrome DevTools' Coverage tab to find how much of each stylesheet remains unused after the initial load.
Open a page you want to inspect, right-click and select "Inspect". From there, click on "Sources" and open the "Coverage" tab. Reload the page to start capturing coverage.
CSS that remains unused after all above-the-fold elements have loaded should be considered non-critical; CSS that is used at that point should be considered critical.
Now, many tools can also extract critical CSS for you:
Sitelocity offers a free tool that only requires you to input a URL. This approach isn't viable at scale, but it's still useful for testing and playing around with different CSS configurations;
For the more technical crowd, critical is an awesome npm module for the job.
Once we have our Critical CSS, we need to inline it in the page's head tag.
For the final step, we need to defer the non-critical CSS. Check out web.dev's detailed guide for more information on how to do that.
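As a rough sketch, the final markup combines steps two and three: the critical rules inlined in the head, and the full stylesheet loaded in a non-blocking way. The media="print" swap shown here is one common pattern; file names and rules are illustrative:

```html
<head>
  <!-- Step 2: critical CSS inlined, so above-the-fold content
       renders without waiting on an extra network request -->
  <style>
    /* rules extracted for above-the-fold elements go here */
  </style>

  <!-- Step 3: the full stylesheet, loaded without blocking rendering.
       media="print" makes it non-blocking; onload switches it to "all" -->
  <link rel="stylesheet" href="styles.css" media="print" onload="this.media='all'">
  <!-- Fallback for users with JavaScript disabled -->
  <noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>
```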
Also, remember that users don't view your site on the same devices. Desktop, mobile, and tablet devices have different viewports, so you have to set up Critical CSS for each viewport separately.
You can avoid that by choosing a point (for instance, 1300px) up to which all the CSS is considered critical. This isn't a perfect solution, but it's much easier to implement.
JavaScript is an extremely expensive resource to work with. For many websites, it's the main cause of slow and unresponsive pages.
In a recent tweet, Rick Viscomi (engineer at Google) pointed out that the frameworks that use the fewest KB of JavaScript have the highest % of pages passing the Core Web Vitals assessment.
In other words, having a lot of JavaScript usually translates to a poor experience for users. That's why optimizing scripts and their delivery has benefits that go way beyond rendering speed.
With that out of the way, let's talk about the optimization techniques.
The defer and async attributes make scripts non-blocking and can reduce the overall impact of third-party code.
While similar, these attributes have important differences:
Scripts with the defer attribute keep their relative order. The browser doesn't wait for them to render the page but does execute them in order. For example, say we have two scripts - script 1 and script 2 in that order. If we defer both, the browser will always execute script 1 first, even if script 2 was downloaded first. Deferred scripts are also executed before the DOMContentLoaded event. In other words, they run only after the HTML has been loaded and parsed.
Scripts with the async attribute are completely independent. Whichever loads first is executed first. They also run independently of the DOMContentLoaded event, i.e., they can be executed even if the document hasn't been fully downloaded.
Scripts that need the DOM or whose order is important should use the defer attribute. On the other hand, ad, analytics, and other independent scripts should generally use async.
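In markup, that decision looks like this (script names are illustrative):

```html
<head>
  <!-- defer: downloaded in parallel, executed in this order after parsing -->
  <script defer src="framework.js"></script>
  <script defer src="app.js"></script> <!-- can rely on framework.js -->

  <!-- async: executed as soon as it arrives, in no guaranteed order -->
  <script async src="analytics.js"></script>
</head>
```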
Code splitting refers to splitting up your JavaScript bundles and sending only what's necessary at the very beginning. This reduces the initial load on the browser, since a lot of the JavaScript is served on demand.
JavaScript module bundlers like Webpack and Rollup either split code into chunks automatically or provide easy ways to do so. Some popular frameworks like Next.js and Gatsby even have Webpack set up by default.
Code splitting helps with both rendering speed and interactivity since the browser's Main Thread is also responsible for responding to user interactions. The less JavaScript it needs to handle, the better.
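As an illustrative sketch (not a drop-in setup; the module name and selector are assumptions), with Webpack a dynamic import() is enough to mark a split point, so the chunk is only fetched when it's actually needed:

```javascript
// webpack.config.js: let Webpack split shared code into separate chunks
module.exports = {
  optimization: {
    splitChunks: { chunks: 'all' },
  },
};

// In application code, a dynamic import() creates a split point:
// "chart.js" is only downloaded when the user asks for the chart.
document.querySelector('#show-chart').addEventListener('click', async () => {
  const { renderChart } = await import('./chart.js');
  renderChart();
});
```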
Phil Walton (engineer at Google) has a similar strategy, called Idle Until Urgent.
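Here's a minimal sketch of the idea (our own simplified version, not Phil Walton's actual idlize library): an expensive computation is scheduled for idle time, but runs immediately if its result is needed sooner.

```javascript
// Simplified "Idle Until Urgent" helper (illustrative sketch only).
class IdleValue {
  constructor(init) {
    this._init = init;
    this._done = false;
    // Schedule the expensive work for idle time.
    // requestIdleCallback in browsers; setTimeout as a crude fallback elsewhere.
    const schedule =
      typeof requestIdleCallback === 'function'
        ? requestIdleCallback
        : (cb) => setTimeout(cb, 0);
    schedule(() => {
      if (!this._done) {
        this._value = this._init();
        this._done = true;
      }
    });
  }

  // Urgent path: compute synchronously if the value is needed before idle time.
  get value() {
    if (!this._done) {
      this._value = this._init();
      this._done = true;
    }
    return this._value;
  }
}
```

For example, `new IdleValue(() => new Intl.DateTimeFormat())` builds the formatter during idle time; reading `.value` later either returns the precomputed instance or builds it on the spot. Either way, the main thread isn't blocked up front.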
Of course, refactoring your website's code is a tough job, which can't be completely automated. There's no tool that can rewrite your site's code from scratch to produce a new, high-performance version.
That's why these types of optimizations take more effort and specialized skills, but their performance gains can be huge. Even a single line of inefficient JavaScript code can make the site significantly slower or even non-responsive.
These last two optimization techniques can be applied to both resource types. They're relatively standard practices, but the first one (removing unused code) can be tricky to implement since it requires a lot of technical expertise.
It's pretty obvious that unused code should be removed or at least reduced to a minimum. You already saw how to find unused CSS with the "Coverage" tab in DevTools.
There are different instances in which unused CSS can remain on a page.
For example, global CSS files usually have thousands of rules. If you're on the home page, you don't need the CSS that styles blog posts and vice versa. But because the rules are in a global stylesheet, the browser has to deal with them regardless of whether they contribute anything to the page.
At NitroPack, we have a feature that deals with this exact issue. More on that in a bit.
The same idea applies to JavaScript:
Unused scripts can be left over from plugins and third-party services that you aren't using but haven't removed completely. You can learn how to remove them properly in our guide on deleting inactive plugins.
Now, just because DevTools shows a lot of unused CSS or JavaScript doesn't mean that these resources are useless. Parts of them are likely used on other pages or when users start scrolling or clicking, for example.
Don't just start removing things left and right. This is a delicate process, especially if you're doing it by hand. Check out dead code elimination for more details on the tools and techniques for dealing with unused code.
These final techniques don't eliminate render-blocking resources, but they help mitigate their effect. Plus, they're really low-hanging fruit at this point, so it's worth implementing them by default.
First, compression reduces file size by re-encoding the code into a more compact binary representation. Algorithms like gzip and Brotli produce output that's much lighter than the original, and the browser decompresses it on arrival.
On the other hand, minification removes unnecessary elements from the code like whitespace, comments, and line breaks. The file size reduction isn't as great as with compression, but the end result is still easier to handle for the browser.
Many hosting companies apply these techniques by default. If you're not sure whether that's the case for your site, check out the file names and their HTTP headers. Minified files usually have ".min" in their name, while compressed files have a content-encoding header, usually with a "gzip" or "br" value.
You can also try this compression checker. It tells you whether gzip or Brotli (the two most popular compression techniques) is enabled on your site.
All of the above-mentioned techniques can be implemented on your WordPress site. The good news is you can use WordPress plugins that will do it for you. The bad news - you're going to need several plugins to cover all the JS and CSS optimizations.
And as you know, that might cause some compatibility and code bloating issues.
The best alternative is to find an all-in-one solution that can apply all optimizations without bloating your code.
Enter NitroPack.
For NitroPack users, a lot of the optimizations discussed in this article are applied by default.
We even have a case study showing how removing render-blocking resources helped one of our clients reduce their homepage's load time by over 6 seconds.
Load time filmstrip without NitroPack:
Load time filmstrip with NitroPack:
Removing render-blocking resources was one of the key optimizations that made that possible. Here's a quick breakdown of NitroPack's features that played a big role:
Critical CSS. Our service automatically generates Critical CSS for every page on client websites;
Reduce Unused CSS (RUCSS). This feature (available in our "Advanced Settings") finds and reduces unnecessary CSS rules. This directly affects how fast the browser builds the render tree;
JavaScript Lazy Loading. Our default optimization mode lazy loads JavaScript until user interaction. This helps prioritize content rendering (HTML, CSS) to provide a better loading experience for visitors. We also have a Delayed Scripts feature, which lets users specify which scripts to load with a delay;
Minification and compression. Lastly, our service automatically minifies and compresses code files.
Again, check out the Mikalsen Utvikling case study for more details on how these and other optimizations affect a website's load times.
If you've dealt with the render-blocking problem, you're well on your way toward a fast page experience and happier visitors.
Beyond resource prioritization, there are other areas you should focus on before calling it a day. Here are a few useful links that can help you understand, debug, and fix common site speed problems:
Core Web Vitals: How To Measure & Improve Them (2021 Guide) - the Core Web Vitals are three metrics that Google uses to determine how visitors experience your website. Improving them can have a positive impact on your SEO;
Understanding and Improving Time to First Byte (TTFB) - reducing your website's TTFB is essential if you want to provide a fast experience for visitors. This article shows how you can do that;
Image Optimization for The Web: The Essential Guide - images are the largest elements on modern web pages. Without optimizing them, you can't hope to have a truly fast website.
Lastly, remember to focus on field metrics (like the Core Web Vitals) when evaluating the success of your optimization efforts. Green page speed scores might look cool, but they don't fully capture how users experience your site. Focus on what matters to your visitors.
Evgeni writes about site speed and makes sure everything we publish is awesome.