Largest Contentful Paint (LCP) measures how long it takes for the largest above-the-fold element to render on a page. By tracking the biggest element, this metric focuses on the culmination of the loading experience.
Or to put it another way:
Reducing your website’s LCP helps users see the essential content on your website faster.
To find elements that affect this metric, use the “Performance” panel in DevTools. Hover over the LCP icon in the “Timings” section and it will point you to the largest visible element.
The “Diagnostics” section of the Lighthouse report also does the same job.
To improve a page’s LCP time, this is the element that has to load faster.
Five optimization categories help fix LCP problems on most websites:
All of these also help with other performance metrics like FCP, CLS and TTI.
Image optimization is a collection of techniques that can improve all load metrics and reduce layout shifts (CLS).
Compression means applying different algorithms to remove or group parts of an image, making it smaller in the process.
There are two types of compression - lossy and lossless.
Lossy compression discards some of the file’s data, resulting in a lower-quality, lightweight image. JPEG is the classic lossy image format.
Lossless compression doesn’t remove any data, so image quality is preserved exactly. It produces heavier, high-quality images. RAW and PNG are lossless image types.
To find the ideal compression level for your website, you have to experiment. Fortunately, there are lots of great tools for the job:
You can use imagemin if you’re comfortable with command line tools;
At NitroPack, we also offer adjustable image compression as part of our image optimization features.
Also, remember that as your website grows, you’ll likely add more and more images. Eventually, you’ll need a tool that optimizes images to your desired level automatically.
The tricky thing about choosing between image formats is finding a balance between quality and speed.
High-quality images are heavy but look great. Lower-res ones look worse but load faster.
In some cases, high-resolution images are necessary to stand out from the competition. Think photography and fashion sites.
For others (news sites and personal blogs), lower-res images are perfectly fine.
The choice here depends on your personal needs. Again, you have to run tests to see how much image quality affects your visitor’s behavior.
Here’s a quick checklist of rules you can use as a guide:
Use SVG for images made up of simple geometric shapes like logos.
Use PNG whenever you have to preserve quality while sacrificing a bit of speed.
For an optimal balance between quality and UX, use WebP with a JPEG backup. WebP doesn’t have 100% browser support, so it's good to have a backup in place.
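As a quick sketch, the WebP-plus-JPEG-backup pattern uses the picture element (the file names here are just placeholders):

```html
<!-- The browser uses the first source it supports; -->
<!-- browsers without WebP support fall back to the JPEG in <img>. -->
<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Hero image">
</picture>
```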
Again, don't forget to experiment with compression levels after choosing your image type.
A classic mistake when working with images is serving one large image to all screen sizes.
A large image may look fine on a small screen, but the browser still has to download and process the entire file. That’s a massive waste of bandwidth.
A better approach is to provide different image sizes and let the browser decide which one to use based on the device. To do that, use the srcset attribute and specify the different widths of the image you want to serve. Here’s an example:
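A minimal sketch (the file names, widths and sizes values are placeholders):

```html
<!-- The browser picks the best candidate based on viewport width and pixel density -->
<img
  src="photo-1200.jpg"
  srcset="photo-600.jpg 600w,
          photo-900.jpg 900w,
          photo-1200.jpg 1200w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="Example photo">
```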
As you can see, srcset uses w descriptors instead of px. If you want to describe an image version that is 600px wide, you write 600w.
Again, this process outsources the choice of image size to the browser. You just provide the options.
When deciding on the correct image sizes, use Google Analytics to figure out what percentage of your audience visits your site from a desktop or mobile device. The “Devices” report also has in-depth info about the specific devices your visitors use.
You should also use DevTools to check how images look on different viewports.
When it comes time to change image sizes, use Smart Resize to resize in bulk.
Note for WordPress users: Since version 4.4, WordPress automatically creates different versions of your images. It also adds the srcset attribute. If you’re a WordPress user, you only need to provide the right image sizes.
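For illustration, the markup WordPress generates looks roughly like this (the file names and sizes depend on your theme and media settings):

```html
<!-- WordPress adds srcset candidates for each generated image size -->
<img src="/wp-content/uploads/example-1024x683.jpg"
     srcset="/wp-content/uploads/example-300x200.jpg 300w,
             /wp-content/uploads/example-768x512.jpg 768w,
             /wp-content/uploads/example-1024x683.jpg 1024w"
     sizes="(max-width: 1024px) 100vw, 1024px"
     alt="Example">
```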
To learn more about image optimization for speed and SEO, check out our full Image Optimization Guide.
Your CSS and JavaScript files can also slow down the page load and consequently - hurt your LCP - if left unoptimized.
Here’s how you can optimize them.
Minification removes unnecessary parts from code files like comments, whitespace and line-breaks. It produces a small to medium file size reduction.
On the other hand, compression reduces the volume of data in the file by applying different algorithms. It typically produces a huge reduction in file size.
Both techniques are a must when it comes to performance.
Some hosting companies and CDN providers apply these techniques by default. It’s worth checking to see if they’re implemented on your site.
You can use the “Network” tab in DevTools and analyze the response headers for a file to see if that’s the case:
Most minified files have “.min” somewhere in their name. Compressed files have a content-encoding response header, usually with a gzip or br value.
If your site’s files aren’t minified or compressed, I suggest you get on it right away. Ask your hosting company and CDN provider if they can do this for you.
If they can’t, there are lots of minification and compression tools, including free ones.
Implementing Critical CSS is a three-step process involving:
Finding the CSS that styles above the fold content on different viewports;
Placing (inlining) that CSS directly in the page’s head tag;
Deferring the rest of the CSS.
For the first step, use the “Coverage” panel in DevTools. It visualizes how much of each CSS file is critical.
You can arrange the resources by type and go through each CSS and JS file. Most websites have one main stylesheet - that’s the one you should focus on.
Once extracted, inline the Critical CSS in the head tag of your page.
Finally, load the rest of the CSS asynchronously. Google recommends using link rel="preload" with as="style", an onload handler that nulls itself and swaps rel to stylesheet, and a noscript element with a regular link to the stylesheet as a fallback.
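Put together, the pattern looks like this (styles.css and the inlined rules are placeholders):

```html
<head>
  <!-- Inlined Critical CSS for above-the-fold content -->
  <style>
    header { /* critical styles here */ }
  </style>

  <!-- Load the full stylesheet asynchronously -->
  <link rel="preload" href="styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">

  <!-- Fallback for browsers with JavaScript disabled -->
  <noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>
```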
Also, don’t forget to consider different viewports. Desktop and mobile users don’t see the same above the fold content. To take full advantage of this technique, you need different Critical CSS based on the device type.
Again, NitroPack does all of this for every page on your site.
For more information on code splitting, check out this article by web.dev.
Reducing initial server response time is one of the most common suggestions in PageSpeed Insights.
Here are some of the steps you can take to fix this issue:
Upgrade your hosting plan. If you’re on a cheap, shared hosting plan, you need to upgrade. It’s impossible to have a fast website with a slow host server.
Optimize your server. Lots of factors can impact your server’s performance, especially once traffic spikes. Use this tutorial by Katie Hempenius to assess, stabilize, improve and monitor your server.
Take maximum advantage of caching. Caching is the backbone of great web performance. Many assets can be cached for months or even a year (logos, nav icons, media files). Also, if your HTML is static, you can cache it, which can reduce TTFB significantly.
Use a CDN. A CDN reduces the distance between visitors and the content they want to access. To make your job as easy as possible, get a caching tool with a built-in CDN.
Use service workers. Service workers let you reduce the size of HTML payloads by avoiding repetition of common elements. Once installed, service workers request the bare minimum of data from the server and transform it into a full HTML doc. Check out this tutorial by Philip Walton for more details on how to do this.
Client-side rendering offloads tasks (data fetching, routing, etc.) away from the server to the client.
Also, using HTTP/2 Server Push and link rel=preload can help deliver critical resources sooner.
Finally, you can try combining CSR with prerendering or adding server-side rendering in the mix. The approach you take here depends on your website’s tech stack. The important thing is to be aware of how much work you’re putting on the client and how that affects performance.
For a deep dive into the topic, I recommend this comprehensive guide to Rendering on the Web.
These three attributes help the browser by pointing it to resources and connections it needs to handle first.
First, use rel=preload for resources the browser should prioritize. Typically, these are above the fold images, videos, Critical CSS, or fonts. It’s as simple as adding a few lines to head tag like this:
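A sketch of what that might look like (the URLs are placeholders):

```html
<head>
  <!-- Preload an above-the-fold hero image -->
  <link rel="preload" href="hero.jpg" as="image">

  <!-- Preload a web font; crossorigin is required for font preloads -->
  <link rel="preload" href="fonts/brand.woff2" as="font"
        type="font/woff2" crossorigin>
</head>
```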
When preloading fonts, attributes like as="font", type="font/woff2" and crossorigin help the browser prioritize the resource during the rendering process. As a bonus, preloading fonts helps them load by the first paint, which reduces layout shifts.
Forbes.com uses this technique to reduce their font load time:
Next, rel=preconnect tells the browser that you intend to establish a connection to a domain immediately. This reduces round-trips to important domains.
Again, implementing this is very simple:
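For example, to warm up the connection to a third-party font host (the domain here is just an example):

```html
<!-- Establish the DNS lookup, TCP handshake and TLS negotiation early -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```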
But be very careful when preconnecting.
Just because you can preconnect to a domain doesn’t mean you should. Only do so for domains you need to connect to right away. Using it for unneeded hosts stalls all other DNS requests, resulting in more harm than good.
Finally, to save time on the DNS lookup for connections that aren’t as critical, use rel=dns-prefetch.
Prefetching can also be used as a fallback for preconnect in browsers that don’t support it.
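Since dns-prefetch has wider browser support, the two hints are often paired for the same origin (the domain is an example):

```html
<!-- Preconnect where supported; fall back to a DNS lookup elsewhere -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<link rel="dns-prefetch" href="https://cdn.example.com">
```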
All of these techniques are extremely useful for improving your website’s performance metrics. Implement them if you haven’t already. Just be careful when selecting which resources to preload and which hosts to preconnect to.
Even if you don’t have any LCP concerns, it’s a good idea to periodically look at field data to detect potential problems.
Field data is gathered by the Chrome User Experience Report (CrUX). The dataset shows how real users experience your site.
You can use different tools to access the dataset:
BigQuery - requires a Google Cloud project and SQL skills;
The Core Web Vitals report in Google Search Console - very beginner-friendly, useful for marketers, SEOs and webmasters.
Which tool you choose depends on your preference. The important thing is to be aware of any potential issues with your website’s LCP (and the other Core Web Vitals.)
Make sure to check the Core Web Vitals report at least once a month. Sometimes issues can pop up in unexpected places and remain undetected for a long time.