Largest Contentful Paint (LCP) measures how long it takes for the largest above the fold element on a page to render.
Reducing your website’s LCP helps users see the essential content on your website faster.
The “Diagnostics” section in PageSpeed Insights shows you which element triggers the LCP metric.
To improve a page’s LCP time, you have to optimize that element’s load time.
Here are five proven optimization categories that fix LCP problems on most websites:
All of these also help with other performance metrics like FCP, CLS and TTI.
Image optimization is a collection of techniques that can improve all load metrics and reduce layout shifts (CLS).
Compression means applying different algorithms to remove or group parts of an image, making it smaller in the process.
There are two types of compression: lossy and lossless.
Lossy compression discards parts of the file’s data, resulting in a lighter but lower-quality image. JPEG and lossy WebP are examples of lossy formats.
Lossless compression doesn’t discard any data, so the image quality is preserved exactly, at the cost of a heavier file. RAW and PNG are lossless formats.
To find the ideal compression level for your website, you have to experiment. Fortunately, there are lots of great tools for the job:
You can use imagemin if you’re comfortable with command line tools;
If not, beginner-friendly tools like Optimizilla also do a great job;
At NitroPack, we also offer adjustable image quality as part of our image optimization stack.
Also, remember that as your website grows, you’ll likely add more and more images. Eventually, you’ll need a tool that optimizes images to your desired level automatically.
The tricky thing about choosing between image formats is finding a balance between quality and speed.
High-quality images are heavy but look great. Lower-res ones look worse but load faster.
In some cases, high-resolution images are necessary to stand out from the competition. Think photography and fashion sites.
For others (news sites and personal blogs), lower-res images are perfectly fine.
The choice here depends on your needs. Again, you have to run tests to see how much image quality affects your visitors’ behavior.
Here’s a quick checklist of rules you can use as a guide:
Use SVG for images made up of simple geometric shapes like logos.
Use PNG whenever you have to preserve quality while sacrificing a bit of speed.
For an optimal balance between quality and UX, use WebP while keeping the original format as a backup since WebP doesn’t have 100% browser support. That’s the strategy we use at NitroPack.
Again, don't forget to experiment with compression levels after choosing your image type.
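For example, a minimal sketch of the WebP-with-fallback approach (filenames are placeholders) uses the picture element:

```html
<picture>
  <!-- Served to browsers that support WebP -->
  <source srcset="hero.webp" type="image/webp">
  <!-- Fallback for browsers that don't -->
  <img src="hero.jpg" alt="Hero image">
</picture>
```

The browser works down the list and picks the first source type it supports, so older browsers simply fall back to the JPEG.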
A classic mistake when working with images is serving one large image to all screen sizes.
A large image may look fine on a smaller device, but the browser still has to download and process the entire file. That’s a massive waste of bandwidth.
A better approach is to provide different image sizes and let the browser decide which one to use based on the device. To do that, use the srcset attribute and specify the different widths of the image you want to serve. Here’s an example:
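A minimal sketch, with placeholder filenames and widths:

```html
<img src="photo-1200.jpg"
     srcset="photo-600.jpg 600w,
             photo-1200.jpg 1200w,
             photo-2000.jpg 2000w"
     sizes="(max-width: 600px) 100vw, 1200px"
     alt="A responsive photo">
```

The sizes attribute tells the browser how wide the image will be displayed, so it can pick the most appropriate candidate from srcset.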
As you can see, with srcset, we use w instead of px. If you want an image version to be 600px wide, you have to write 600w.
Again, this method outsources the choice of image size to the browser. You just provide the options.
You should also use DevTools to check how images look on different viewports.
When it comes time to change image sizes, use Smart Resize to resize in bulk.
Note for WordPress users: Since version 4.4, WordPress automatically creates different versions of your images and adds the srcset attribute. If you’re a WordPress user, you only need to provide the right image sizes.
This final tip is about optimizing how fast the browser discovers hero images.
Hero images are usually the most meaningful above the fold elements, so loading them faster is crucial for the user experience.
Forbes’ site preloads the largest above the fold image on the homepage:
This technique tells the browser to prioritize that specific image when rendering the page.
Preloading can dramatically improve LCP, especially on pages with hero images that are loaded with:
JavaScript;
The background-image property in CSS.
Since browsers discover these images late, using link rel=preload can improve both actual and perceived performance.
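In its simplest form, the hint is a single line in the head tag (the path here is a placeholder):

```html
<!-- Tell the browser to fetch the hero image early, before it
     discovers it in CSS or JavaScript -->
<link rel="preload" as="image" href="/images/hero.jpg">
```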
If you’re loading hero images with JS or CSS, check out this tutorial on preloading images by Addy Osmani.
If left unoptimized, CSS and JavaScript files can slow down the page load and, consequently, hurt your LCP.
Here’s how you can optimize them.
Minification removes unnecessary parts from code files like comments, whitespace and line-breaks. It produces a small to medium file size reduction.
On the other hand, compression shrinks the file itself by applying algorithms like Gzip or Brotli. It typically produces a much larger reduction in file size.
Both techniques are a must when it comes to performance.
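To illustrate what minification does, here’s a hypothetical style block before and after (the selector and values are made up):

```html
<!-- Before minification -->
<style>
  /* Call-to-action button */
  .btn {
    color: #ffffff;
    background-color: #0066ff;
  }
</style>

<!-- After minification: the comment, whitespace and line breaks
     are gone, and hex colors are shortened -->
<style>.btn{color:#fff;background-color:#06f}</style>
```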
Some hosting companies and CDN providers apply these techniques by default. It’s worth checking to see if they’re implemented on your site.
You can use the “Network” tab in DevTools and analyze the response headers for a file to see if that’s the case:
Most minified files have “.min” somewhere in their name. Compressed files have a content-encoding response header, usually with a gzip or br value.
If your site’s files aren’t minified or compressed, I suggest you get on it right away. Ask your hosting company and CDN provider if they can do this for you.
If they can’t, there are lots of minification and compression tools, including free ones.
Implementing Critical CSS is a three-step process involving:
Finding the CSS that styles above the fold content on different viewports;
Placing (inlining) that CSS directly in the page’s head tag;
Deferring the rest of the CSS.
You can start by using the “Coverage” panel in DevTools to figure out how much of each CSS file is used on the page.
You can arrange the resources by type and go through each CSS and JS file.
Obviously, CSS that is not used on the page isn’t critical. On that note, it’s worth trying to remove or reduce this unused CSS, as it can slow down rendering. That’s why we built the Reduce Unused CSS feature for NitroPack.
Once extracted, inline the Critical CSS in the head tag of your page.
Finally, load the rest of the CSS asynchronously. Google recommends using link rel="preload", as="style", a nulled onload handler and nesting the link to the stylesheet in a noscript element.
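Putting the three steps together, the head tag ends up looking something like this (the stylesheet path is a placeholder):

```html
<head>
  <style>
    /* Step 2: Critical CSS for above the fold content, inlined here */
  </style>
  <!-- Step 3: load the rest of the CSS asynchronously -->
  <link rel="preload" href="/styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/styles.css"></noscript>
</head>
```

The onload handler swaps the preloaded file in as a stylesheet once it arrives, and the noscript element keeps the page styled for visitors with JavaScript disabled.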
Also, don’t forget to consider different viewports. Desktop and mobile users don’t see the same above the fold content. To take full advantage of this technique, you need different Critical CSS based on the device type.
Again, NitroPack does all of this for every page on your site.
For more information on code-splitting, check out this article by web.dev.
Reducing initial server response time is one of the most common suggestions in PageSpeed Insights.
Here are some of the steps you can take to fix this issue:
Upgrade your hosting plan. If you’re on a cheap, shared hosting plan, you need to upgrade. It’s impossible to have a fast website with a slow host server.
Optimize your server. Lots of factors can impact your server’s performance, especially once traffic spikes. Use this tutorial by Katie Hempenius to assess, stabilize, improve and monitor your server.
Take maximum advantage of caching. Caching is the backbone of great web performance. Many assets can be cached for months or even a year (logos, nav icons, media files). Also, if your HTML is static, you can cache it, which can reduce TTFB significantly.
Use a CDN. A CDN reduces the distance between visitors and the content they want to access. To make your job as easy as possible, get a caching tool with a built-in CDN.
Use service workers. Service workers let you reduce the size of HTML payloads by avoiding repetition of common elements. Once installed, service workers request the bare minimum of data from the server and transform it into a full HTML doc. Check out this tutorial by Philip Walton for more details on how to do this.
Client-side rendering (CSR) offloads tasks (data fetching, routing, etc.) from the server to the client.
Also, using HTTP/2 Server Push and link rel=preload can help deliver critical resources sooner.
Finally, you can try combining CSR with prerendering or adding server-side rendering in the mix. The approach you take here depends on your website’s tech stack. The important thing is to be aware of how much work you’re putting on the client and how that affects performance.
For a deep dive into the topic, I recommend this comprehensive guide to Rendering on the Web.
These three attributes help the browser by pointing it to resources and connections it needs to handle first.
First, use rel=preload for resources the browser should prioritize. Typically, these are above the fold images, videos, Critical CSS, or fonts. It’s as simple as adding a few lines to the head tag like this:
When preloading fonts, attributes like as="font", type="font/woff2" and crossorigin help the browser prioritize the resource correctly during rendering. As a bonus, preloaded fonts are more likely to be ready by the first render, which reduces layout shifts.
Forbes.com uses this technique to reduce their font load time:
Next, rel=preconnect tells the browser that you intend to establish a connection to a domain immediately. This reduces round-trips to important domains.
Again, implementing this is very simple:
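A typical example is preconnecting to a third-party font host:

```html
<!-- Open the connection (DNS + TCP + TLS) to the font host early -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```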
But be very careful when preconnecting.
Just because you can preconnect to a domain doesn’t mean you should. Only do so for domains you need to connect to right away. Preconnecting to unneeded hosts ties up resources that more important connections could use, doing more harm than good.
Finally, to save time on the DNS lookup for connections that aren’t as critical, use rel=dns-prefetch.
Prefetching can also serve as a fallback for browsers that don’t support preconnect.
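A common pattern is to pair the two hints, so browsers without preconnect support still get the cheaper DNS lookup (the domain is a placeholder):

```html
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<!-- Fallback: resolves DNS only, for browsers without preconnect -->
<link rel="dns-prefetch" href="https://cdn.example.com">
```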
All of these techniques are extremely useful for improving your website’s performance metrics. Implement them if you haven’t already. Just be careful when selecting which resources to preload and which hosts to preconnect to.
Even if you don’t have any LCP concerns, it’s a good idea to periodically look at field data to detect potential problems.
If PageSpeed Insights doesn’t display this section due to a lack of data, you can use different tools to access the CrUX dataset:
BigQuery - requires a Google Cloud project and SQL skills;
The Core Web Vitals report in Google Search Console - very beginner-friendly, useful for marketers, SEOs and webmasters.
Which tool you choose depends on your preference. The important thing is to be aware of any potential issues with your website’s LCP (and the other Core Web Vitals.)
Make sure to check the Core Web Vitals report at least once a month. Sometimes issues pop up in unexpected places and remain undetected for a long time.