Core Web Vitals: Everything You Need To Know About Google's New Ranking Factor
In May 2020, Google announced an upcoming change to its ranking algorithm.
The goal of this shift: to add more ranking signals for page (user) experience.
Right now, the page experience signals are Mobile Friendliness, Safe Browsing, HTTPS and Intrusive Interstitials (i.e., horrible pop-ups on mobile).
Sometime in 2021, the Core Web Vitals will also join the mix and become ranking factors.
But what exactly are these Core Web Vitals?
Well, they’re three metrics, representing the load time, interactivity and visual stability of a page.
Together, they paint a complete picture of the actual user experience on a website.
So, if you want to improve the user experience and (soon) the organic rankings of your website, read on.
In this article, you’ll learn:
What First Input Delay (FID) Is
What Largest Contentful Paint (LCP) Is
What Cumulative Layout Shift (CLS) Is
Why These New Metrics Are Crucial For Your Website’s Success
How To Audit Your Website For Problems With The Core Web Vitals
3 Proven Ways To Optimize Your Website’s Performance
How Best To Implement Website Optimization Strategies
Let’s get started.
First Input Delay (FID)
First Input Delay (FID) is the time it takes for the browser to respond to the first user interaction. FID measures the delay after distinct actions like clicks or taps. Scrolls and zooms don’t trigger the FID metric.
If you keep your FID below 100ms, you’re in great shape.
In general, 100ms is an important threshold for users.
In his 1993 book Usability Engineering, Jakob Nielsen points out the following:
“0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.”
Source: nngroup.com - “Response Times: The 3 Important Limits”
Human perception hasn’t changed much since then. And Google also treats 100ms as the border between good and bad FID.
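In the browser, you can read FID yourself from a `first-input` performance entry: the delay is simply the gap between when the user interacted and when the browser could start processing the event. Here's a minimal sketch (the helper names are mine, and the thresholds are Google's published ones: good at or under 100ms, poor above 300ms):

```javascript
// Compute First Input Delay from a 'first-input' performance entry.
// FID = when the browser started processing the event, minus when the
// user actually interacted (both timestamps in milliseconds).
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Google's published thresholds: good <= 100ms, poor > 300ms.
function rateFid(fidMs) {
  if (fidMs <= 100) return 'good';
  if (fidMs <= 300) return 'needs improvement';
  return 'poor';
}

// In a real page, you would feed this from a PerformanceObserver:
// new PerformanceObserver((list) => {
//   const entry = list.getEntries()[0];
//   console.log(rateFid(firstInputDelay(entry)));
// }).observe({ type: 'first-input', buffered: true });
```

The observer call is commented out because it only works in a real browser; the two helpers above are pure functions you can test anywhere.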
Now, FID tracks the delay after only the first input. Why is that?
Well, first impressions are everything on the web. Users will instantly leave and most likely won’t return if a website frustrates them on their first visit.
You (generally) only get one shot at a first-time visitor.
That’s why keeping a low FID is crucial.
Of course, other input delays can also hurt the user experience. Improving your FID isn’t an excuse for neglecting all other delays.
But if you had to choose, the delay after the first input should be your priority.
FID is also unique in one other way: it’s a field-only metric (LCP and CLS both can be measured in the field and in the lab). It requires a real user to provide an input at a specific time.
If there’s no input, there’s nothing to measure. And a user clicking on the same button at different times will produce a different result. But we’ll get into that in a bit.
For now, it’s important to know that FID is a field-only metric.
That’s the reason Max Potential First Input Delay exists.
What is Max Potential First Input Delay?
Maximum Potential First Input Delay (MPFID) is the worst-case scenario for the delay between a user’s input and the response from the browser. MPFID depends on the length of the longest task running on the main thread.
You can see your MPFID under the “Performance” section of the Lighthouse report.
Again, this metric computes only the worst-case scenario. It does that by analyzing the main thread.
If you don’t know what the main thread is, here’s the TL;DR:
The main thread does the heavy lifting in terms of rendering a page. The browser relies on it for a ton of tasks, including responding to user interactions.
Now back to MPFID.
If we take the longest task on the main thread and imagine a user clicking a button right as the browser starts executing it, we arrive at the worst-case scenario or MPFID.
In that scenario, the browser has to wait until the longest task is finished before it can respond to the user input.
Keep in mind that users can click at any other time. In such cases, a different delay will occur. Sometimes there won’t be a delay.
For example, if the user clicks on a button in between tasks, the browser can respond immediately.
That’s why FID can change depending on when user interactions happen.
At the same time, MPFID gives you valuable information without needing a real user.
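Put as code, MPFID is nothing more than the duration of the longest main-thread task. A toy illustration (the task durations are made up):

```javascript
// Max Potential FID: the worst case is an input landing exactly as the
// longest main-thread task starts, so MPFID equals that task's duration.
function maxPotentialFid(taskDurationsMs) {
  return taskDurationsMs.length ? Math.max(...taskDurationsMs) : 0;
}

// Example: three main-thread tasks of 250ms, 90ms and 120ms.
maxPotentialFid([250, 90, 120]); // 250 — a user could wait up to 250ms
```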
How To Check Your FID and MPFID
You can easily check your FID with Google’s PageSpeed Insights.
It’s one of the first things you see in the “Field Data” report.
You can also use the Lighthouse audit in Chrome to check for MPFID.
Go to a page you want to analyze, right-click and select “Inspect.” After that, click on the “Lighthouse” button.
Select what you want to test for and let Chrome do its thing.
The “Performance” category has information about your MPFID.
From there, it’s all about digging deeper into specific problems.
However, most websites don’t have a particular issue that only affects FID. A lot of the best practices for improving website performance in general will also help with FID.
We’ll talk about three proven ways to improve website performance at the end of this guide.
But if you want specific tips for FID optimization, check out this awesome article by the Lighthouse team.
For now, let’s see how FID stacks up to other popular performance metrics.
FID and Other Web Performance Metrics
FID works alongside First Contentful Paint (FCP) and Time To Interactive (TTI), and the three metrics complement each other.
However, if you’ve read our article on Total Blocking Time (TBT), you know TBT also resides in the same area.
So, FID, FCP, TTI and TBT all work together. I know it’s a bit of an acronym overload but bear with me.
We’ll look at each pair separately.
FID and First Contentful Paint (FCP)
FCP tells you when the first piece of content was painted.
That could be a picture, video, text, animation, or anything else described in the DOM.
So, FID and FCP measure different things. One deals with responsiveness, the other with load time.
Together, both metrics give you a great sense of the first impression your website makes. And as I said, first impressions are crucial.
Also, user inputs usually happen after the first piece of content is painted. That’s why FID and FCP are inherently connected.
Measuring only one of them would give you an incomplete picture of the actual user experience.
FID and Total Blocking Time (TBT)
TBT is the amount of time during which Long Tasks block the main thread and affect the usability of a page.
Again, we have similarities between FID and TBT. Both occur between FCP and TTI. And Long Tasks on the main thread can hurt them significantly.
Also, both FID and TBT measure the severity of the unresponsiveness on a page. The difference is that TBT measures it without user input. It simply adds together the blocking time for each Long Task.
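TBT's arithmetic is simple: any main-thread task over 50ms is a Long Task, and only the portion beyond the 50ms budget counts as "blocking." A sketch with made-up task durations:

```javascript
// Total Blocking Time: sum the time each Long Task spends over the
// 50ms budget. Tasks at or under 50ms contribute nothing.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (total, duration) => total + Math.max(0, duration - 50),
    0
  );
}

// Tasks of 30ms, 120ms and 200ms block for 0 + 70 + 150 = 220ms.
totalBlockingTime([30, 120, 200]); // 220
```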
On the other hand, FID calculates the delay for each specific case.
So, analyzing both metrics gives you a clear picture of how badly the interactivity of a page was affected by Long Tasks.
FID and Time To Interactive (TTI)
TTI is the time it takes for a page to become fully interactive.
A page becomes fully interactive when it responds to user input in less than 50ms and event handlers are registered for most visible page elements.
Again, FID is a great companion here.
A page might become interactive at any moment. But users can click or tap on the screen way before that.
This is where TTI falls short. After all, what’s the point of knowing when a page became fully interactive when the user experience might’ve been ruined way before that?
Users don’t care if a page became interactive in 6, 7, or 8 seconds. They care if a page was interactive when they needed it to be. And FID tells you exactly that.
That’s how FID fills the gap left by TTI.
Now that we’ve covered FID, let’s move on to Largest Contentful Paint.
Largest Contentful Paint (LCP)
Largest Contentful Paint (LCP) measures the time it takes for the largest above-the-fold content element to load. Items that appear only when a user taps the screen or scrolls down don't affect LCP.
A few things to unpack here.
First, the largest element can be a picture, video, or a block of text. We'll look at different examples in a bit.
Second, only above-the-fold content can affect this metric. If a user taps on the screen or scrolls down, the browser stops looking for another LCP candidate.
According to PSI, everything below 2.5s is considered a good LCP score. If your largest above-the-fold elements load faster than that, you're good.
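If you want to read LCP from the browser yourself, each `largest-contentful-paint` performance entry is a successive candidate, and the last one reported is the page's LCP. A sketch (helper names are mine; thresholds are Google's published 2.5s/4s cutoffs):

```javascript
// The last 'largest-contentful-paint' entry observed is the page's LCP.
function latestLcp(entries) {
  return entries.length ? entries[entries.length - 1].startTime : null;
}

// Google's published thresholds: good <= 2.5s, poor > 4s.
function rateLcp(lcpMs) {
  if (lcpMs <= 2500) return 'good';
  if (lcpMs <= 4000) return 'needs improvement';
  return 'poor';
}

// In a real page:
// new PerformanceObserver((list) => {
//   console.log(rateLcp(latestLcp(list.getEntries())));
// }).observe({ type: 'largest-contentful-paint', buffered: true });
```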
So, the concept of LCP isn't that complex.
But why should you even bother with it? Especially since there are already two other similar metrics.
Because the other two metrics have significant gaps.
Let me explain.
The Shortcomings of First Contentful Paint (FCP) and First Meaningful Paint (FMP)
LCP's addition addresses some of the problems with FCP and FMP.
Actually, FMP will no longer be a part of PSI scores.
It was too complicated to track and prone to huge swings. And it wasn't as "meaningful" as the name suggested. In short, don't worry about FMP anymore.
Next, FCP is still around. But it now contributes less to the overall PSI score. Its weight is down from 20% to 15%.
Source: web.dev - “What's New in Lighthouse 6.0”
Why is that?
Well, FCP measures how long it takes for any content to be painted. Anything described in the DOM triggers FCP.
That sounds good until you realize you're measuring how long it takes for your loading screen to appear.
Other small and unimportant elements also trigger this metric.
That's not to say FCP is irrelevant. Loading screens, icons and animations are important. In general, progress indicators are great for improving the user experience.
But at the same time, they're not what your visitors came for. And they're not what you want them to see or remember.
That's why FCP doesn't paint a complete picture. And it's also why we need LCP.
Here's the deal:
With LCP, you know how long it takes for the largest element to load. And that’s an item you want your visitors to see and remember.
So, FCP tracks the beginning of the loading experience while LCP tracks the culmination. By analyzing both, you get a better picture of the actual user experience.
Let's check out two examples of LCP to see how it works.
Some websites rely on their largest element to survive.
News websites are a great example.
They have a shocking headline, picture, or video that they want visitors to see. If that doesn't load quickly, people won't stick around much longer.
For instance, this is the item that triggers the LCP metric in Yahoo News:
Source: Yahoo News
As you can see, it's the picture for the featured story of the day. Remove that and you're throwing half (or more) of your traffic out the window.
Next, let's look at the LCP for Google News.
Source: Google News
Here the featured headline triggers the LCP metric. Again, this is what people come to the site for. Not the Google News logo or the "More Headlines" button.
As you can see, the largest element often plays the most prominent role.
If Yahoo or Google only tracked their FCP, they’d miss out on crucial user experience insights.
How You Can Check Which Element Triggers LCP
You can easily perform this check on your own website. Or any website for that matter.
First, open the page you want to inspect in Chrome. Right-click and select "Inspect."
From there, go to "Performance."
Click the “Reload” button.
Chrome will analyze the page for a few seconds and give you a report.
In the "Timings" section, you can see a small LCP icon. When you hover over it, it will paint the largest element of the page blue.
And that's about it. You now know which element you need to work on to improve your LCP.
Let’s move on to Cumulative Layout Shift.
Cumulative Layout Shift (CLS)
CLS measures the significance of unexpected layout shifts on a page. Unexpected layout shifts occur when content on the page moves around without user input or prior notification.
A CLS score below 0.1 means a page is visually stable.
Source: web.dev - https://web.dev/cls/
But how does Google compute this score?
By calculating the impact fraction and distance fraction for each unexpected layout shift. These two numbers answer two key questions:
How much of the viewport did the shift affect?
How far did the elements move during the shift compared to the viewport?
As the impact and distance fractions grow, the CLS score gets worse.
Put simply, the CLS score is the sum of the individual layout shift scores for every unexpected layout shift.
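As code, each shift's score is its impact fraction times its distance fraction, shifts flagged with hadRecentInput are skipped, and CLS is the running sum. The field names below are illustrative (the browser's actual LayoutShift entries expose the product directly as entry.value), and the numbers are made up:

```javascript
// CLS: sum impactFraction * distanceFraction over every unexpected
// layout shift. Shifts within 500ms of user input (hadRecentInput)
// are excluded from the score.
function cumulativeLayoutShift(shifts) {
  return shifts
    .filter((shift) => !shift.hadRecentInput)
    .reduce((sum, s) => sum + s.impactFraction * s.distanceFraction, 0);
}

// Two unexpected shifts and one user-initiated shift:
cumulativeLayoutShift([
  { impactFraction: 0.5, distanceFraction: 0.14, hadRecentInput: false }, // 0.07
  { impactFraction: 0.2, distanceFraction: 0.10, hadRecentInput: false }, // 0.02
  { impactFraction: 0.9, distanceFraction: 0.50, hadRecentInput: true },  // ignored
]); // about 0.09 — just under the 0.1 "good" threshold
```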
Check out this webinar by Annie Sullivan if you want a technical deep dive into how and why Google computes CLS this way.
For now, let’s talk about the expected and unexpected layout shifts.
The Difference Between Expected and Unexpected Layout Shifts
An expected layout shift is just what it sounds like.
You click on something that should change the layout of the page. And it does.
For example, say you’re on your favorite blog.
You’re looking for a specific article, so you click the search icon. A search bar opens up.
No surprises here. We’ve been conditioned to expect this result. Search icons look similar and clicking them has the same effect every time.
Now, imagine you open a page and see a video.
It has a clickworthy title and one of those shocking thumbnails. You try to click on it but it disappears.
Actually, it didn’t disappear.
It moved a few pixels down because an ad popped up. And you clicked on the ad instead of the video.
These are two examples of a layout shift. One is good and the other isn’t. You probably get which is which.
The point is: not all layout shifts are the same.
Some are awesome. When you click a button and it does what you expect, everything goes smoothly. A search bar appears, a new menu opens or a loading animation tells you to wait for a second. These are all expected or user-initiated changes.
On the other hand, some unexpected layout shifts can be awful. Poorly placed ads, pop-ups and weird animations can ruin the user experience on your site.
Google also differentiates between user-initiated and unexpected layout shifts.
They flag shifts that occur within 500ms of user interactions with a hadRecentInput attribute. These layout shifts don’t count towards your CLS score.
Now that you know what this new metric is, it’s only fair to ask: Does the CLS score really matter?
Is CLS Really That Important?
Right now, CLS doesn’t contribute much to your overall PSI score.
Source: web.dev - “What's New in Lighthouse 6.0”
And if you saw only this picture, you might be tempted to discard CLS altogether.
But this 5% doesn't tell the whole story.
On one hand, CLS is an entirely new metric. Google still has to gather real-world data about it. It’ll take a while before we know how important it really is.
On the other hand, CLS is the first PSI metric that doesn’t report the speed of something. You can find it in the PSI report, but it has nothing to do with speed. It measures the user experience more than anything else.
Which brings me to my next point.
CLS is a user-centric metric.
Everyone's been going on and on about user-centric design and metrics. Google’s been talking about it for at least 10 years.
Finally, as part of the three Core Web Vitals, CLS will become a part of Google’s ranking algorithm sometime in 2021.
So, you have a new and unique metric that’s part of the modern way of thinking about website performance. In a few months, it will also directly affect organic rankings.
Not much to think about here.
Optimizing your CLS score is a must. Regardless of the small weight it carries right now.
You can start by learning to spot layout shifts.
How You Can Check For Layout Shifts
It’s common sense to check your website’s layout shifts.
And you don’t need fancy tools to do it.
Go to a few key pages - the home page, checkouts, ad landing pages - and ask yourself:
Does this button do what I think it should do?
Is this ad getting in the way of me clicking a button?
Are these pop-up windows helping me achieve my goal on the page?
You don’t have to be a UX expert to answer these questions. You just have to put yourself in the shoes of the user.
Which is what the CLS metric tries to do anyway.
You can also ask someone who’s never visited your site to browse it. It’s amazing how much you can learn from watching other people.
Besides that, Chrome's DevTools are another great way to detect layout shifts.
Right-click on a page you want to analyze and select “Inspect.”
Go to “More Tools” and select “Rendering.”
At the bottom, you’ll see a “Layout Shift Regions” option with a checkbox next to it. Select it.
Now every time a layout shift occurs, the shifted area will be highlighted.
This won’t tell you if a layout shift is “good” or “bad.” But it will help you find a few shifts you might’ve missed otherwise.
Besides ads, you can look for slow-loading fonts and animations as a potential source of unexpected layout shifts.
So, now you know what the Core Web Vitals are.
It’s time to check if your website is optimized for them.
How To Audit Your Website’s Core Web Vitals In 5 Minutes
Now, let’s run through a quick process that you can use to audit your website.
After going through this exercise, you’ll know:
Your Current Core Web Vitals Score
Which Pages Cause Problems With Which Metric
Specific Actions You Can Take To Improve Your Website Performance
Let’s get started.
Start With The New Google Search Console (GSC) Report
Recently, Google introduced a new report in the GSC.
While not the most detailed, this report can give you some interesting insights.
I already touched on it when talking about FID, but let’s dive a little deeper.
When you click on “Core Web Vitals” in GSC, you’ll get a Mobile and a Desktop report. As you can see, GSC tells you how many poor, halfway decent and good URLs you have.
If you click on one of the reports, you’ll get a bit more information.
Now, when you click on one of the issues in the “Details” section, GSC won’t show you every URL affected by that specific problem (for example, a long LCP), even though that might be useful.
Instead, GSC will give you one example URL that has this issue.
Why is that?
Well, GSC assigns problems to a group of URLs. In this report, Google assumes that pages with similar content and resources have similar performance issues.
For example, let’s say you’re running an eCommerce website selling men’s shoes. You have hundreds of product pages with similar (or identical) structures.
Chances are, if the product image for one model isn’t optimized, the images on other product pages won’t be, either. Therefore, the problem won’t be isolated.
And if the product images are the largest elements on the product page, they’ll trigger the LCP metric.
As a result, you have a group of pages with the same problem - a slow LCP.
Now, once you have this general idea of your website’s problems you can start locating specific issues.
PSI and The Lighthouse audit can help you with that.
Get More Info With PSI and The Lighthouse report
PSI is a classic tool for evaluating your site speed. People generally know what it does and how it works.
You simply go to the PSI page and enter the URL you want to analyze.
Right now, Google gives you an assessment of the Core Web Vitals right at the top.
If there’s an issue with some of these metrics, you have work to do.
The problem is, PSI won’t tell you what causes your bad CLS score, for example. You’ll have to figure that out by yourself.
At the same time, you can get specific suggestions from the “Opportunities” and “Diagnostics” sections.
From there, you can start isolating and solving problems. We’ll touch on a few common ways to improve your Core Web Vitals score in the next section.
For now, let’s go back to the Lighthouse report.
Lighthouse is the technology that powers PSI. In that sense, you won’t get much new information from the Lighthouse report if you’ve already checked your PSI score.
However, you can get useful information about your overall website performance.
I already touched on this in the FID section, but let’s quickly recap.
Open Chrome, go to a page you want to analyze, right-click and select “Inspect.” Select “Lighthouse.”
Click “Generate report” and let Chrome do its thing.
The “Performance” category has information about specific metrics. More importantly, it has details about your MPFID.
The report also has an “Opportunities” and “Diagnostic” section, but they (likely) won’t tell you anything new.
And that’s it. Three quick checks you can do in five minutes.
You now know if you have problems with one or more of the Core Web Vitals. You also have specific suggestions for improving your website.
You can start chopping away at the problems one by one.
However, there are three things you can (and should) do before anything else.
3 Proven Ways To Optimize Your Website Performance
Different websites have highly individual performance problems.
At the same time, a few underlying mistakes often cause these problems. Usually, these issues come from:
The number and size of the code files
Render-blocking resources like CSS
Unoptimized images
And you can tackle each issue right now.
But keep in mind that these aren’t fixes for any specific Core Web Vitals problem. Implementing them isn’t guaranteed to fix your FID, for example.
At the same time, all three things are best practices for improving website performance.
You can’t lose by trying them out.
Minify and Combine Your Code Files
Minifying resources reduces their total size, making them easier to download.
For example, here’s what a CSS file looks like before and after minifying it:
This process doesn’t affect the website’s functionality.
Code minification only removes the unnecessary parts of the code - line-breaks, whitespace, comments, etc. The browser can render the page just fine without them.
On the other hand, combining resources reduces the total number of files.
This decreases the number of HTTP requests the browser has to make. As a result, your website loads faster.
Of course, there’s more to code minification and concatenation. If you want a detailed explanation, check out our article on the topic.
But for now, all you need is the basic premise:
Having a few lightweight resources is way better than having a lot of big resources. It makes the browser's job a lot easier.
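To make the idea concrete, here’s a deliberately naive minifier that strips comments, line breaks and repeated whitespace from a CSS string. It’s only a sketch: real minifiers (cssnano, csso and the like) do far more than this.

```javascript
// A toy CSS "minifier": remove comments, collapse whitespace, and
// drop spaces around punctuation. Real tools are far more thorough.
function naiveMinifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')  // strip /* comments */
    .replace(/\s+/g, ' ')              // collapse whitespace and newlines
    .replace(/\s*([{}:;,])\s*/g, '$1') // no spaces around punctuation
    .trim();
}

const input = `
/* main heading */
h1 {
  color: #333;
  margin: 0;
}
`;
naiveMinifyCss(input); // "h1{color:#333;margin:0;}"
```

Same rules, fewer bytes: the browser renders the minified version identically.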
Implement Critical CSS
Eliminating render-blocking resources is one of the most frequent suggestions you’ll see in a PSI report.
These render-blocking resources stop the browser from visualizing the page as quickly as possible.
A quick way to resolve part of the issue is to implement Critical CSS.
Critical CSS is the CSS that’s applied to above the fold elements on each page. These elements should load first, before the rest of the page.
The fundamental issue here is that CSS files are render-blocking by default. The browser has to load, parse and execute all CSS files referenced in the head tag before anything else on the page.
By implementing Critical CSS, you make sure that above the fold content appears on the screen first.
That way, you give your visitors something to look at, even if the entire page isn’t fully loaded.
Meanwhile, the rest of the CSS (the non-critical CSS) can load asynchronously.
You can implement this strategy by inlining your Critical CSS. This means putting it directly inside the HTML markup, rather than in an external stylesheet.
That way, the browser can find it easily. As a result, above the fold content is served first.
As an added bonus, you also reduce the size of the external stylesheet by removing some of the code (the Critical CSS) from it. It’s a win-win situation.
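In markup, the pattern looks roughly like this. The preload-then-swap trick for the non-critical stylesheet is one common technique, not the only one, and the file path and styles here are placeholders:

```html
<head>
  <!-- Critical CSS inlined: above-the-fold styles render immediately -->
  <style>
    header { background: #fff; }
    .hero  { min-height: 60vh; }
  </style>

  <!-- Non-critical CSS loads asynchronously, then applies itself -->
  <link rel="preload" href="/css/non-critical.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript>
    <link rel="stylesheet" href="/css/non-critical.css">
  </noscript>
</head>
```

The noscript fallback keeps the page styled for visitors with JavaScript disabled.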
Again, this is a slight oversimplification. If you want more details, check out our article on Critical CSS.
Optimize Your Website’s Images
Images are often the biggest offender when it comes to website speed.
And most websites are absolutely loaded with them.
Background images, product images, blog post images, small icons, infographics - modern websites contain a ton of them.
And while one or two unoptimized images won’t cause much trouble, a couple of hundred will.
That’s why you need to optimize them.
Image optimization means reducing the size of each image without affecting its quality. It’s a simple idea, but a difficult one to implement.
Choosing the right image type is a great place to start.
The classic image formats (JPEG, GIF and PNG) all have their pros and cons. If you want to learn about each one, check out our Essential Guide To Image Optimization For The Web.
But right now, going with the newer WebP format is your best bet for a few reasons.
First, WebP is currently developed by Google. The almighty search engine will be happy to see you using this format.
On top of that, WebP produces better results than both JPEG and PNG.
According to Google, WebP images are 26% smaller than PNGs and 25-34% smaller than JPEGs. Again, it’s a win-win situation.
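Browser support for WebP isn’t universal yet, so the safe way to ship it is with a fallback via the picture element. The file paths below are placeholders:

```html
<picture>
  <!-- Served to browsers that understand WebP -->
  <source srcset="/img/product.webp" type="image/webp">
  <!-- Fallback for everyone else -->
  <img src="/img/product.jpg" alt="Product photo" width="800" height="600">
</picture>
```

Setting explicit width and height on the img tag also helps your CLS score, since the browser can reserve space before the image loads.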
Besides going with WebP as a file type, it’s also good to review all the pictures on your website and delete the unnecessary ones. Trust me, they’re there.
An image that doesn’t contribute to the overall goal of the page shouldn’t be there. Plain and simple.
How To Implement Website Optimization Strategies
So, you now know how to locate problems and three (relatively) quick ways to solve some of them.
From here, it’s all about implementing the solutions. But how can you do that?
As I’ve already mentioned in this article about the benefits of using a speed optimization service, there are three ways to optimize your website:
Do it by hand
Install 6-7 tools/plugins to take care of each part of the optimization (caching, image optimization, Critical CSS, resource minification, etc.)
Use a complete speed optimization service
Now, you can absolutely do everything by hand, or get a developer to do it for you. But it would be incredibly time-consuming and expensive.
On top of that, website optimization isn’t a “one and done” type of task.
The web is always evolving. New best practices emerge all the time. You’ll have to constantly be on top of the game.
And with so many options, including free ones, it makes no sense to go about it this way. Which brings us to the second option.
Free tools/plugins that optimize each part of your website can work just fine. There are tons of solutions out there that do a decent job. And you can find them with a simple Google search.
But there are too many downsides to this approach as well.
First, it’s still time-consuming. Finding, installing and setting up each tool can easily take a couple of days.
Second, there’s no guarantee that a free tool will be updated when a new best practice is introduced. You’ll have to monitor that and act accordingly.
Finally, these tools work until they don’t. After that, you’re on your own. There’s no one on the other side to help you when a free tool suddenly breaks something on your website.
You can work around all these downsides. But you’ll spend a lot of time and energy setting up and monitoring the performance of each tool.
It’s not impossible, but it’s not efficient, either.
If you want to take care of your business and still have a fast website, the final option is your best bet.
Using a complete speed optimization service makes your life so much easier. For example, look at all the features we offer at NitroPack:
You get all of that in a single solution instead of six or seven separate tools.
On top of that, you have a support team that you can rely on in case something goes wrong.
There’s also no complex, 12-step installation process. Anyone, regardless of their technical expertise, can set up and use NitroPack.
In short, NitroPack provides instant results, reliability and efficiency. Check it out yourself by trying out our free plan (no credit card required).