The most important interventions to improve Core Web Vitals on the site

It is now three years since Google introduced the concept of Core Web Vitals, the performance metrics that measure how pages respond to “real” user interactions (thanks to so-called field data) so that we can address negative user experiences on a site. Now that these metrics are part of the Page Experience system, and thus officially a ranking factor, many site owners and managers are likely to find them difficult to understand and act on. Therefore, Google has shared a guide to the key optimization interventions to prioritize.

Core Web Vitals: best practices for optimizing your site

The Chrome DevRel team has spent the last year trying to answer one question: what are the most important tips we can give developers to help them improve performance for their users? Out of this reflection (and intensive work) came the guide to best practices for optimizing Core Web Vitals, which actually helps us decide what to prioritize when our time is limited.

Signed by no fewer than five authors — Googlers Philip Walton, Rick Viscomi, Barry Pollard, Brendan Kenny and Jeremy Wagner — the supporting page is a collection of the pointers that Google believes are most effective at improving Core Web Vitals performance, adding to the many suggestions provided so far for improving these metrics’ scores.

Indeed, as the page states, each of the previous tips “can, individually, improve the performance of many sites,” but “the entire set of tips is certainly overwhelming and, realistically, there is no way that one person or one site can follow them all.” Unless web performance is our day job, it’s probably not easy to identify which interventions will generate the greatest positive impact on a site: for example, “you may have read that implementing critical CSS can improve loading performance and you may also have heard that it’s important to optimize your images,” the article continues, but how do we decide what to focus on if we don’t have time to work on both?

How to improve Core Web Vitals: the list of technical tips for taking action

It was therefore necessary to provide guidance that was thought out differently, taking into consideration not only the technical merits of a given recommendation, but also the human and organizational factors that influence the likelihood that developers will actually be able to adopt these recommendations. In other words, some recommendations may have a huge impact in theory, but in reality very few sites will have the time or resources to implement them; similarly, some interventions are critical, but most Web sites are already following these practices.

In light of this, Google’s new guide to improving Core Web Vitals focuses on three key points to overcome the limitations of the old approach:

  • Recommendations that can have the greatest impact in the real world.
  • Recommendations that are relevant and applicable to most sites.
  • Recommendations that are realistic and possible for most developers to implement.

The interventions on Largest Contentful Paint (LCP)

The first set of recommendations concerns Largest Contentful Paint (LCP), a measure of loading performance that estimates the time it takes for the main content of a page to become visible to users. According to statistics published on web.dev, it is the metric most sites struggle with: only about half of all sites on the Web today meet the recommended threshold.

1. Make the LCP resource discoverable from the HTML source

According to HTTP Archive’s Web Almanac 2022, 72 percent of mobile pages have an image as their LCP element; it follows that, for most sites to optimize their LCP, they will need to ensure those images can load quickly.

What may not be obvious to many developers is that the time it takes to load an image is only part of the challenge; another critical part is the time before the image even begins to load, and the data suggest that this is where many sites actually stumble.

In fact, among pages where the LCP element was an image, 39 percent of those images had source URLs that were not discoverable from the HTML document source. In other words, those URLs were not found in standard HTML attributes (such as <img src="..."> or <link rel="preload" href="...">), which would have allowed the browser to discover them quickly and begin loading them immediately.

Waiting for CSS or JavaScript files to be fully downloaded, parsed, and processed before the image can even begin to load may already be too late for a page’s performance. In contrast, ensuring that the LCP resource is discoverable from the HTML source can lead to measurable improvements and also unlocks additional opportunities to prioritize the resource.

As a rule of thumb, Google says, if our LCP element is an image, the image URL should always be discoverable from the HTML source. Some suggestions to make this possible (a minimal markup sketch follows the list):

  • Load the image using an <img> element with the src or srcset attribute, avoiding non-standard attributes such as data-src that require JavaScript in order to render, which will always be slower; 9% of pages obscure their LCP image behind data-src.
  • Prefer server-side rendering (SSR) over client-side rendering (CSR), since SSR means the markup of the entire page (including the image) is present in the HTML source, while CSR solutions require JavaScript to execute before the image can be discovered.
  • If the image must be referenced from an external CSS or JavaScript file, we can still include it in the HTML source via a <link rel="preload"> tag. Images referenced by inline styles are not discoverable by the browser’s preload scanner, so even if they appear in the HTML source their discovery may still be blocked while other resources load; preloading can be useful in such cases.
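For illustration, here is a minimal sketch of an LCP image that is discoverable from the HTML source, plus a preload hint for the case where the image is only referenced from a stylesheet (the file names are hypothetical):

```html
<!-- Discoverable: the URL sits in standard src/srcset attributes -->
<img src="hero-800.jpg"
     srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
     sizes="100vw"
     alt="Hero image">

<!-- If the LCP image is only referenced from external CSS, a preload
     hint makes it discoverable to the browser's preload scanner -->
<link rel="preload" as="image" href="hero-background.jpg">
```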

2. Prioritize the LCP resource

A key first step in ensuring that the LCP resource can start loading early is making sure it is discoverable from the HTML source, but another important point is to ensure that loading that resource is prioritized, and not queued behind a group of other, less important resources.

For example, even if the LCP image is present in the HTML source using a standard <img> tag, if the page includes a dozen <script> tags in the <head> of the document before that <img> tag, it may take a while for the image resource to start loading.

The easiest way to solve this problem is to give the browser a hint about which resources have the highest priority by setting the new fetchpriority="high" attribute on the <img> or <link> tag that loads the LCP image. This attribute tells the browser to load the image earlier, instead of waiting for those scripts to complete.

Again according to the Web Almanac, only 0.03% of eligible pages are taking advantage of this new API, so there are many opportunities for most sites on the Web to improve LCP with very little work. Although the fetchpriority attribute is currently supported only in Chromium-based browsers, this API is a progressive enhancement that other browsers simply ignore, so Google “strongly advises developers to use it now.”

For non-Chromium browsers, the only way to ensure that the LCP resource has priority over other resources is to reference it earlier in the document. Returning to the example of a site with many <script> tags in the <head> of the document, to be sure that the LCP resource has priority over those scripts we can add a <link rel="preload"> tag before any of those scripts, or move the scripts below the <img> in the <body>. Although this works, it is less ergonomic than using fetchpriority, so we must hope that other browsers add support soon.
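Putting the two techniques together, a minimal markup sketch might look like this (the file names are hypothetical):

```html
<head>
  <!-- Preload hint placed before any scripts, with fetchpriority
       as a progressive enhancement for Chromium-based browsers -->
  <link rel="preload" as="image" href="hero.jpg" fetchpriority="high">
  <script src="app.js" defer></script>
</head>
<body>
  <img src="hero.jpg" fetchpriority="high" alt="Hero image">
</body>
```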

Another critical aspect of prioritizing the LCP resource is making sure not to do anything that causes it to be deprioritized, such as adding the loading="lazy" attribute. Today, 10% of pages actually set lazy loading on their LCP image. The guide suggests that we beware of image optimization solutions that indiscriminately apply lazy loading to all images, and that we set up a way to override that behavior for the LCP image; if we are not sure which image will be the LCP, the article says, we can try “using heuristics to choose a reasonable candidate.”

Deferring non-critical resources is another way to effectively increase the relative priority of the LCP resource: for example, scripts that do not power the UI (such as analytics scripts or social widgets) can safely be deferred until after the load event fires, which ensures that they do not compete with critical resources (such as the LCP resource) for network bandwidth.

To summarize, Google encourages us to follow these best practices to ensure that the LCP resource is loaded early and with high priority:

  • Set fetchpriority="high" on the <img> tag of the LCP image, or on the <link rel="preload"> tag if the LCP resource is loaded that way.
  • Never set loading="lazy" on the <img> tag of the LCP image, otherwise the image will be deprioritized and the start of loading delayed.
  • Defer non-critical resources when possible: move them to the end of the document, use native lazy loading for images or iframes, or load them asynchronously via JavaScript (a minimal sketch of the last option follows this list).
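For instance, a minimal sketch of deferring a non-critical script until after the load event (the /analytics.js path is hypothetical):

```html
<script>
  // Load a non-critical script only after the page's load event,
  // so it doesn't compete with the LCP resource for bandwidth.
  window.addEventListener('load', () => {
    const script = document.createElement('script');
    script.src = '/analytics.js'; // hypothetical non-critical script
    document.body.appendChild(script);
  });
</script>
```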

3. Use a CDN to optimize the TTFB of documents and resources

The previous two tips focused on ensuring that the LCP resource is discovered early and given priority so that it can start loading immediately. The last piece of this puzzle is to ensure that the initial document response also arrives as quickly as possible.

As the Chrome DevRel developers explain, the browser cannot start loading any subresources until it receives the first byte of the HTML document’s initial response: the sooner this happens, the sooner everything else can start as well. This time is known as Time to First Byte (TTFB), and the best ways to reduce it are to:

  • Serve content as geographically close to users as possible.
  • Cache content so that recently requested content can be served again quickly.

The best way to do both of these things is to use a CDN, which distributes resources to edge servers scattered around the world, limiting the distance those resources have to travel over the network to end users. CDNs also usually offer fine-grained caching controls that can be customized and optimized for the needs of our site.

Many developers are familiar with using a CDN to host static assets, but CDNs can also serve and cache HTML documents, including dynamically generated ones.

According to the Web Almanac, only 29% of requests for HTML documents were served by a CDN, which means there is a significant opportunity for sites to realize additional savings.

Some suggestions for configuring CDNs are:

  • Consider increasing the duration of content caching: for example, is it actually critical that content is always up to date, or can it be a few minutes out of date?
  • Possibly also consider caching content indefinitely and then purging the cache if/when we perform an update.
  • Find out whether it is possible to move dynamic logic currently running on the origin server to the edge (a feature of most modern CDNs).

In general, any time we can serve content directly from the edge (avoiding a trip to the origin server), it is a performance win. And even in cases where we do have to make the trip back to the origin server, CDNs are generally optimized to do it much faster, so it’s a win either way.

The interventions on Cumulative Layout Shift (CLS)

The in-depth guide published on web.dev then focuses on the second Core Web Vital, Cumulative Layout Shift (CLS), which we know is a measure of the visual stability of a web page’s layout. Since 2020, statistics say, this metric has improved greatly across the Web, but about a quarter of websites still do not meet the recommended threshold, leaving a great opportunity for many sites to improve their user experience.

1. Set explicit dimensions on any content loaded by the page

Layout shifts usually occur when existing content moves after other content has finished loading; the main way to mitigate this problem is therefore to reserve any required space in advance, as far as possible.

The easiest way to fix layout shifts caused by unsized images is to explicitly set the width and height attributes (or the equivalent CSS properties), but according to HTTP Archive even today 72% of pages have at least one unsized image. Without explicit dimensions, browsers initially give an image a default height of 0px, which can cause a significant layout shift when the image is finally loaded and its dimensions are discovered. This represents a huge opportunity for the collective Web, and one that requires much less effort than some of the other recommendations in the article.

It is also important to keep in mind that images are not the only contributors to CLS. Layout shifts can be caused by other content that typically loads after the initial rendering of the page, including third-party ads or embedded videos. The aspect-ratio property can help combat this problem: it is a relatively new CSS feature that lets developers explicitly provide an aspect ratio for images and non-image elements alike. This allows us to set a dynamic width (e.g., based on screen size) and have the browser automatically calculate the appropriate height, much as it does for images with explicit dimensions.
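A minimal sketch of both techniques (the file name and class name are hypothetical):

```html
<!-- Explicit width/height attributes let the browser reserve the
     image's space before it loads -->
<img src="hero.jpg" width="1200" height="600" alt="Hero image">

<style>
  /* Reserve space for a responsive embed whose height depends on its width */
  .video-embed {
    width: 100%;
    aspect-ratio: 16 / 9; /* the browser computes the height automatically */
  }
</style>
```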

Sometimes it is not possible to know the exact size of dynamic content because it is, by its very nature, dynamic. Even so, we can still take steps to reduce the severity of layout shifts: setting a reasonable min-height is almost always better than letting the browser use the default height of 0px for an empty element. Using a min-height is also usually a simple solution, as it still allows the container to grow to the height of the final content if necessary; it simply reduces that growth from the full amount to a hopefully more tolerable level.
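For example, a minimal sketch for a dynamically filled ad slot (the class name and size are hypothetical):

```html
<style>
  /* A reasonable min-height beats the default 0px: the slot can still
     grow to its final height, but the shift is far smaller. */
  .ad-slot {
    min-height: 250px;
  }
</style>
<div class="ad-slot"><!-- ad injected later by script --></div>
```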

2. Make pages eligible for bfcache

Browsers use a navigation mechanism called the back/forward cache, or bfcache for short, to instantly load a previous or next page in the browser history directly from a memory snapshot.

The bfcache is a significant browser-level performance optimization and completely eliminates layout shifts during page load, which for many sites is where most of their CLS occurs; according to Google, “the introduction of bfcache has generated the biggest improvement in CLS that we’ve seen in 2022.”

Despite this, a significant number of websites are not eligible for bfcache and therefore miss out on this performance win for a significant share of navigations. Unless a page loads sensitive information that we do not want restored from memory, the experts’ advice is to make sure our pages are eligible.

From a practical standpoint, as site owners we can verify that our pages are eligible for bfcache and address any reasons why they are not: Chrome already has a bfcache tester in DevTools, and this year the team plans “to improve the tools with a new Lighthouse audit that performs a similar test and an API to measure it in the field.” In addition, the article adds, bfcache will generally improve the other Core Web Vitals as well (although for now it seems to deliver the biggest gains on CLS) and is considered one of several instant navigation techniques available to dramatically improve page navigation.
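One common eligibility blocker is an unload event handler; a minimal sketch of the usual workaround is to listen for pagehide instead (what to run inside the handler is up to the page):

```html
<script>
  // 'unload' handlers commonly make a page ineligible for the bfcache.
  // 'pagehide' fires both on normal unloads and when the page enters
  // the bfcache, so it is the safer place for cleanup work.
  window.addEventListener('pagehide', (event) => {
    if (event.persisted) {
      // The page is being stored in the bfcache.
    }
    // e.g., flush queued analytics here instead of in an 'unload' handler
  });
</script>
```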

3. Avoid animations/transitions that use layout-inducing CSS properties

Another common source of layout shifts is element animation: for example, cookie banners or other notification banners that slide in from the top or bottom often contribute to CLS. This is particularly problematic when these banners push other content out of the way, but even when they do not, the animation can still impact CLS.

Although the HTTP Archive data “cannot definitively link animations to layout shifts,” the data show that pages that animate any CSS property that could affect layout are 15 percent less likely to have a “good” CLS than pages overall; furthermore, some properties are associated with worse CLS than others. For example, pages that animate margin or border widths are rated “poor” for CLS at nearly twice the rate of pages overall.

This is perhaps not surprising, say the Chrome DevRel developers, because every time we transition or animate a layout-inducing CSS property, layout shifts occur, and if those shifts are not within 500 milliseconds of a user interaction, they impact CLS.

What may surprise some developers is that this is true even when the element is taken out of the normal document flow: for example, absolutely positioned elements that animate top or left cause layout shifts, even if they are not pushing other content away. However, if instead of animating top or left we animate transform: translateX() or transform: translateY(), the browser does not update the page layout and therefore produces no layout shifts.

Preferring to animate CSS properties that can be updated on the browser’s compositor thread has long been a performance best practice, because it moves that work to the GPU and off the main thread; in addition to helping overall performance, it can also help improve CLS.

As a general rule, we should never animate or transition any CSS property that requires the browser to update the page layout, unless we are doing so in response to a user tap or key press (but not hover). And, when possible, we should prefer transitions and animations using the CSS transform property.
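A minimal sketch of the difference, using a hypothetical sliding banner:

```html
<style>
  /* Avoid: 'left' is layout-inducing, so this transition shifts
     layout on every frame. */
  .banner-bad {
    position: absolute;
    left: -300px;
    transition: left 0.3s;
  }

  /* Prefer: 'transform' can run on the compositor thread and
     produces no layout shifts. */
  .banner-good {
    transform: translateX(-300px);
    transition: transform 0.3s;
  }
  .banner-good.open {
    transform: translateX(0);
  }
</style>
```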

The Lighthouse audit “Avoid non-composited animations” warns when a page animates potentially slow CSS properties.

The interventions on First Input Delay (FID)

Finally, the last set of recommendations concerns First Input Delay (FID), which is a measure of a page’s responsiveness to user interactions. Although most sites on the Web currently score very well on FID, Google has documented shortcomings of the FID metric in the past and believes there are still many opportunities for sites to improve their overall responsiveness to user interactions.

The guide also explicitly mentions the new Interaction to Next Paint (INP) metric, called “a possible successor to FID,” and all of the recommendations below apply equally well to both FID and INP. Because sites perform worse on INP than on FID, particularly on mobile devices, Googlers urge developers to take these responsiveness recommendations seriously, even if their FID looks “good.”

1. Avoid or break up long tasks

Tasks are any discrete pieces of work the browser does, including rendering, layout, parsing, and compiling and executing scripts. When tasks become long tasks, that is, 50 milliseconds or longer, they prevent the main thread from responding quickly to user input.

According to the Web Almanac, there is plenty of evidence to suggest that developers could be doing more to avoid or break up long tasks; although splitting long tasks may take more effort than the other tips presented here, it is still “easier” than other techniques excluded from the article (based on the assumptions above).

The principle to adhere to is to do as little JavaScript work as possible, but we can help the main thread by splitting long tasks into smaller ones: one way to achieve this is to yield to the main thread often, so that rendering updates and other user interactions can happen more quickly.

Another option is to consider using APIs such as isInputPending and the Scheduler API:

The first, isInputPending, is a function that returns a Boolean value indicating whether user input is pending: if it returns true, we can yield the main thread so it can handle the pending input.
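A minimal sketch of a work loop that yields when input is pending (the function name, the task queue, and the 50 ms yielding interval are illustrative, not from the guide):

```html
<script>
  // Process a queue of work units, yielding to the main thread when
  // user input is pending or after roughly 50 ms of continuous work.
  async function processTasks(tasks) {
    let deadline = performance.now() + 50;
    while (tasks.length > 0) {
      if (navigator.scheduling?.isInputPending() ||
          performance.now() >= deadline) {
        // Yield so the browser can handle input and rendering updates.
        await new Promise((resolve) => setTimeout(resolve, 0));
        deadline = performance.now() + 50;
      }
      const task = tasks.shift();
      task(); // run the next unit of work
    }
  }
</script>
```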

The Scheduler API is a more advanced approach, allowing work to be scheduled according to a priority system that takes into account whether the work is visible to the user or running in the background.
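A minimal sketch of scheduler.postTask, which at the time of writing is also Chromium-only (the two work functions are hypothetical):

```html
<script>
  // Hypothetical work units, for illustration only.
  function sendAnalyticsBeacon() { /* ... */ }
  function updateVisibleUI() { /* ... */ }

  if ('scheduler' in window) {
    // Background-priority work won't compete with user-visible tasks.
    scheduler.postTask(sendAnalyticsBeacon, { priority: 'background' });
    // User-blocking work runs at the highest priority.
    scheduler.postTask(updateVisibleUI, { priority: 'user-blocking' });
  }
</script>
```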

By breaking up long tasks, we give the browser more opportunities to fit in critical user-visible work, such as handling interactions and any resulting rendering updates.

2. Avoid unnecessary JavaScript

According to Google, there’s no doubt about it: websites are shipping more JavaScript than ever before, and the trend doesn’t seem likely to change anytime soon. Heavy reliance on JavaScript, however, creates an environment in which tasks compete for the main thread’s attention, and this can certainly affect a website’s responsiveness, especially during that crucial startup period.

This is not an unsolvable problem, however, and we have some optimization options:

  • Use the coverage tool in Chrome DevTools to find unused code in our website’s resources. By reducing the size of the resources needed during startup, we can ensure that the website spends less time parsing and compiling code, resulting in a smoother initial user experience.
  • Sometimes code found with the coverage tool is marked as “unused” because it was not executed during startup but is still needed for some functionality later; this is code we can move into a separate bundle via code splitting (a minimal sketch follows this list).
  • If we use a tag manager, it is important to check tags periodically to make sure they are optimized or even still in use. Older tags with unused code can be deleted to make the tag manager’s JavaScript smaller and more efficient.
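For instance, a minimal sketch of code splitting via a dynamic import (the button id and module path are hypothetical):

```html
<script type="module">
  // Load rarely used functionality only when it is actually needed,
  // keeping it out of the startup bundle.
  document.querySelector('#export-btn')?.addEventListener('click', async () => {
    const { exportReport } = await import('/js/export.js'); // hypothetical module
    exportReport();
  });
</script>
```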

3. Avoid large rendering updates

JavaScript is not the only thing that can affect a website’s responsiveness: rendering can also be expensive work in its own right, and when large rendering updates occur, they can interfere with the website’s ability to respond to user input.

Optimizing rendering work is not a simple process and often depends on what we are trying to achieve; at any rate, the guide tells us, there are a few things we can do to ensure that rendering updates are reasonable and do not stretch into long tasks:

  • Avoid using requestAnimationFrame() for any non-visual work. requestAnimationFrame() callbacks are handled during the rendering phase of the event loop: when too much work is done during this step, rendering updates can be delayed. It is essential that any work done with requestAnimationFrame() be reserved exclusively for tasks involving rendering updates.
  • Keep the DOM small. The size of the DOM and the intensity of layout work are related: when the renderer needs to update the layout of a very large DOM, the work required to recompute it can increase significantly.
  • Use CSS containment. CSS containment relies on the CSS contain property, which gives the browser instructions on how to perform layout work for the container on which the property is set, including isolating the scope of layout and rendering to a specific root in the DOM. This is not always an easy process, but by isolating areas containing complex layouts we can avoid unnecessary layout and rendering work (a minimal sketch follows this list).
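For illustration, a minimal sketch of CSS containment applied to a hypothetical complex widget:

```html
<style>
  /* 'contain: content' (roughly layout + paint containment) tells the
     browser that changes inside the widget cannot affect the layout of
     the rest of the page, limiting the scope of layout work. */
  .chat-widget {
    contain: content;
  }
</style>
<div class="chat-widget">
  <!-- complex, frequently updated subtree -->
</div>
```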

Improve Core Web Vitals with these achievable interventions

Core Web Vitals are important metrics for providing a positive user experience and positively influencing ranking on Google.

Improving page performance may seem like a daunting task, especially considering the mountain of Web Vitals advice to sift through, but this new guide is concise, realistic, and applicable to most websites; more importantly, it can make a significant impact in terms of achieving results and meeting the Core Web Vitals thresholds on our pages.

By following these tips, which as mentioned include using a CDN to reduce TTFB, setting explicit dimensions for on-page content to improve CLS, making pages eligible for bfcache, and avoiding unnecessary JavaScript and animations/transitions for FID, we can manage our time better, implementing feasible and practical solutions that really can help us improve performance.
