Ready for the Page Experience Update? Everything there is to know

The user and their experience at the center of SEO, or at least in a prominent position alongside content and the other aspects that help determine how pages rank on Google: the Page Experience Update is about to launch, the algorithmic update with which Google introduces as a ranking factor a set of technical elements that evaluate how a site's pages perform in terms of the experience they provide to users. Let's attempt a general recap of this important update and of the methods used to calculate the metric scores (where there is some news).

The long wait for the new ranking factor update

We have been waiting for it for over a year, ever since Google introduced the new set of metrics it would use to gauge the aspects of web page performance that affect the quality of the user experience, metrics we have since learned to call Core Web Vitals. Also in 2020, we discovered that these metrics would become a ranking factor along with other technical elements, in what was dubbed the Google Page Experience Update, initially scheduled for May 2021 and then pushed back to mid-June.

And that is exactly where we are now, ready to see the concrete effects of this “algorithmic revolution” in terms of ranking changes for our projects, aware that, by the search engine's deliberate choice, page experience will not be fully integrated into Google's ranking systems before the end of August. That applies to mobile devices, while the update for desktop pages should be fully rolled out before the end of the year.

The current panorama of the Web is still lagging behind on these issues: recent research by Conductor and Digital Marketing Depot reveals that only 13% of the sites analyzed in three key industries, Retail, Tech and Finance/Insurance, meet the Core Web Vitals benchmarks.

Introduction to the Page Experience Update

It may be useful, at this point, to summarize what the page experience update to Google Search results actually is (translated from Big G's official support pages), drawing on the guidance of Patrick Kettner, Developer Advocate at the company, who offers a complete overview of the subject in a new series of videos on the Google Search Central YouTube channel.

Google explains what the Page Experience update is

The first episode is understandably focused on definitions: the Page Experience Update is “a new input to Google Search ranking” that evaluates “how good the experience on every page of the web actually is for users”. Google has “always said that user experience is essential for search placement” and now, “thanks to advances in browser capabilities, we can confidently list any content in the carousel of top stories and normal search results”, and site developers and owners have precise technical references to follow in order to offer standards-compliant pages.

Objective metrics to measure performance

One of the most relevant aspects of this update is that, until now, “the concept of web performance has always been subjective”: what is fast on my computer, says Kettner, “can be unacceptably slow for the people who use the site”.

The Page Experience, instead, “allows us to focus on these people and make sure that they are actually experiencing the site in a way that satisfies us as well”. In practice, it “establishes a baseline goal that development teams can work toward to make sure that users are having an excellent UX”.

The novelty is that this update looks at and measures the real user experience: the metrics that make up the Page Experience “are, in part, collected by the Chrome User Experience Report, a set of public data that shows how fast or slow millions of URLs across the internet have loaded and behaved on real people's real devices”.
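As a side note, that public data set can be queried directly by anyone. Below is a minimal TypeScript sketch of a request to the CrUX API's documented records:queryRecord endpoint; the API key and the example URL are placeholders, not values from the article.

```ts
// Minimal sketch: querying the public Chrome UX Report (CrUX) API
// for a URL's field metrics. API_KEY and the URL are placeholders.
const API_KEY = 'YOUR_API_KEY'; // hypothetical placeholder

const res = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: 'https://example.com/' }),
  },
);
const data = await res.json();
// The response contains per-metric histograms and percentiles
// aggregated from real Chrome users.
console.log(data.record?.metrics);
```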

Which metrics make up the Page Experience ranking factor

Within this complex ranking factor there are actually two different sets of metrics.

The first consists of elements defined as “Boolean values”, or “things that you either have or you don't”: mobile friendliness, pages served over HTTPS, safe browsing, and the absence of intrusive interstitial ads.

The 4 Boolean values of the Page Experience

The second group contains “metrics that evaluate your page based on the experiences of your real users on their real devices”, that is, the Core Web Vitals, defined as “an annually updated list of metrics that define what it takes to deliver that great user experience we are talking about”. Currently, as we know, the first three essential web signals identified by Google are First Input Delay, Largest Contentful Paint and Cumulative Layout Shift.
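For reference, Google also publishes an open-source JavaScript library, web-vitals, for collecting these three signals from real sessions. A minimal sketch, assuming the v2-era API with getCLS/getFID/getLCP exports:

```ts
// Minimal sketch: logging the three Core Web Vitals from a real
// user session with Google's open-source `web-vitals` package
// (npm i web-vitals; assumes the v2-era getXXX API).
import { getCLS, getFID, getLCP } from 'web-vitals';

getCLS((metric) => console.log('CLS:', metric.value)); // unitless score
getFID((metric) => console.log('FID:', metric.value)); // milliseconds
getLCP((metric) => console.log('LCP:', metric.value)); // milliseconds
```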

The meaning of Page Experience

In conclusion, Kettner explains that the Page Experience “consists in working to ensure that your users get what they want in the best way possible”: it offers data and reports tailored to the pages' real users, and thereby highlights concrete, problematic aspects that we can improve.

However, we need to understand an important aspect: the Page Experience “concerns what the user is actually experiencing and is not something that applies to your entire website”; each user uses the site differently, and each individual URL is examined on its own.

To clarify this point, Kettner invites us to check the Page Experience report in Search Console, which “gives an impressive view of which parts of the site need more work”: for example, the home page “may have fantastic results, but your content pages may be less than ideal”. And this could lead to high rankings for the home page while the other pages fall behind.

But “the big news is that none of this is a black box: all Core Web Vitals are extracted monthly from the Chrome User Experience Report”, so everyone can monitor and verify them.

Google updates the tools to measure Core Web Vitals

The Page Experience Update is designed to highlight pages that offer excellent user experiences, but page experience remains “only” one of the many factors that Google's systems take into account: in other words, it is not a given that the update will really upend the rankings and, moreover, given its gradual rollout, there may be no immediate shocks, and Google will still be able to monitor any unforeseen or unwanted problems.

The progressive nature of this update is also evident from the fact that, just in recent weeks, Google has updated some of the methods by which it measures these metrics, refining the formulas in response to cases raised by developers in the field.

Detlef Johnson’s article on Search Engine Land helps us find our bearings in this complex scenario, explaining how to use lab data and field data for page experience optimization.

How measurements of Core Web Vitals work

Core Web Vitals are high-performance speed metrics that result in “a stable, viewable, usable experience for a given device's viewport, including off-screen content up to 9,000 vertical pixels”.

Faster is better, as one can easily guess, and so, in principle, lower metric scores are better.

Field data, which is what counts for ranking, varies depending on the power of the real user's device, the screen size and the network connectivity. Lab data uses default values for these and, except in the case of PageSpeed Insights, can be calibrated by developers to simulate all kinds of conditions; lab data is not considered for ranking.

Changes to the First Contentful Paint

Although it is not officially among the first three CWV, the First Contentful Paint (FCP) nevertheless contributes to the essential web signals: recently, the threshold for a “good” score was raised from 1.0 to 1.8 seconds.

FCP takes into account Time to First Byte (which is more a reflection of the server's response time than something that can be manipulated directly in code) and the time needed to process render-blocking resources such as CSS.
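For illustration, FCP can be read in any modern browser through the standard Paint Timing API; the threshold check below against the updated 1.8-second “good” limit is our addition, not part of the API.

```ts
// Sketch: reading First Contentful Paint from the Paint Timing API
// and classifying it against the 1.8 s "good" threshold cited above.
new PerformanceObserver((entryList) => {
  const fcp = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcp) {
    const ms = fcp.startTime;
    console.log(`FCP: ${ms.toFixed(0)} ms,`, ms <= 1800 ? 'good' : 'not good');
  }
}).observe({ type: 'paint', buffered: true });
```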

Changes to the Largest Contentful Paint

Small changes have also arrived for the Largest Contentful Paint, which is “a significant milestone in the life cycle of a page”: originally, the metric did not include elements no longer on screen, whereas it now records the largest element even if it is later removed from the page's DOM after being discovered, or when multiple images of the same size qualify.

These situations occur when carousels load and cache content for off-screen slides.

Another useful change is that LCP now ignores background images.
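To watch which elements the browser picks as candidates, the standard largest-contentful-paint entry type can be observed directly; a minimal sketch:

```ts
// Sketch: observing LCP candidates via the standard
// PerformanceObserver API. Each entry is a new, larger candidate;
// the last one reported before user input is the final LCP.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate at', entry.startTime.toFixed(0), 'ms:', entry);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```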

Changes to the Cumulative Layout Shift

Finally, there is news for the CLS: to prevent situations such as extremely long browsing sessions from compromising this metric's scores, measurement is now limited to smaller session “windows” capped at 5 seconds, each marked as ended by a 1-second gap, so as to find the worst 5 seconds of page layout shifting.

This seems to be “a much better representation of change than counting entire uncapped sessions, which can last 20 minutes or longer and give scores that are exaggerated and out of proportion”.
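As a rough illustration of this windowing logic, the sketch below groups layout-shift entries into session windows that close after a 1-second gap or a 5-second cap and keeps the largest window total; it mirrors the rule described above and is not an official implementation.

```ts
// Sketch of windowed CLS: session windows capped at 5 s, closed by
// a 1 s gap between shifts; the page score is the largest window.
interface LayoutShift extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let windowValue = 0;   // running total of the current session window
let windowStart = 0;   // start time of the current window
let lastShiftTime = 0; // time of the previous layout shift
let cls = 0;           // largest window seen so far

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShift[]) {
    if (entry.hadRecentInput) continue; // shifts right after input don't count
    const gapExceeded = entry.startTime - lastShiftTime > 1000; // 1 s gap
    const windowFull = entry.startTime - windowStart > 5000;    // 5 s cap
    if (windowValue === 0 || gapExceeded || windowFull) {
      windowValue = entry.value; // start a new session window
      windowStart = entry.startTime;
    } else {
      windowValue += entry.value; // accumulate within the window
    }
    lastShiftTime = entry.startTime;
    cls = Math.max(cls, windowValue);
  }
}).observe({ type: 'layout-shift', buffered: true });
```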

The usefulness of PageSpeed Insights data

According to Johnson, the trick to optimizing pages for Core Web Vitals is “learning more than one official way to retrieve scores, which is further complicated by how to think about the data you're viewing”.

Even PageSpeed Insights, often the first reference for SEO professionals, does not by itself provide enough information to “tell the whole story”: the tool is in fact designed to “provide a complete snapshot to developers for troubleshooting performance issues”; when available from CrUX, field data aggregated over the previous 4-week period is useful for comparisons, but putting lab and field data side by side will inevitably show a difference between the two reports.

Variance is in fact “a natural occurrence between test sessions and when comparing tests across different devices and/or networks”: field data therefore varies as much as a given website's audience does. PSI field data, then, “represents a range of data, aggregated over the previous 28 days, up to the value of the most recently completed day”.
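Both views can be pulled in a single call through the public PageSpeed Insights v5 API, which returns the Lighthouse lab snapshot alongside the 28-day CrUX field aggregate; a minimal sketch, with the target URL as a placeholder:

```ts
// Sketch: fetching lab and field data together from the public
// PageSpeed Insights v5 API (runPagespeed). The URL is a placeholder.
const api = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
api.searchParams.set('url', 'https://example.com/');
api.searchParams.set('strategy', 'mobile');

const report = await (await fetch(api.toString())).json();
console.log('Field (28-day CrUX):', report.loadingExperience?.metrics);
console.log('Lab (Lighthouse LCP):',
  report.lighthouseResult?.audits?.['largest-contentful-paint']?.displayValue);
```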

When to expect ranking changes (and which time ranges will be examined)

The author then notes that the Page Experience ranking factor “could presumably rely on the same aggregate scores as the previous 28 days”, but considers that unlikely, “because it would be much more powerful if it were based instead on the BigQuery aggregate data set prepared for the previous month”.

This would mean, therefore, that “any changes to the ranking would take effect on the second Tuesday of each new month”.

Under this scheme, BigQuery data for CrUX reports is “subjected to a performance optimization process that prepares the previous month's data for public consumption”. Currently, such indexing, and possibly the caching of certain query responses, allows “CrUX users to run historical queries going back to the end of 2017, when the data was first collected”.

Practical difference between lab data and field data

Johnson delves further into the complex issue of the difference between lab and field data, recalling that the Lighthouse lab scores in PSI are “calibrated to be representative of the upper percentiles” of worst-case scenarios, such as underpowered devices on slow networks. With this choice, Google intentionally ensures that developers get richer feedback so they can more easily address problem areas that may occur, even though such situations are less common in the real world.

If lab scores were “indicative of more average conditions, they would not reveal the performance bottlenecks that developers need to see in order to make changes that improve the page experience under stressful conditions”.

Field data, by contrast, provides examples of real-world use, and more precisely “is indicative of the audience recorded by browsers during real use of your website”. It matters “because it is what Google uses for the page experience ranking factor”.

Almost always, field data scores will be better than lab data scores for the same page. In addition, field data may “remain stable over time once prepared for long-term storage, with newly prepared data arriving on a monthly basis, while lab data may differ with each new test”.

When browsers have the necessary permissions to transmit scores, field data is sent and collected for use by PSI, by any Core Web Vitals tool that uses the open CrUX API, and by those who embed Core Web Vitals JavaScript in their web pages. The only way to examine field data in real time is to “opt to write the JavaScript code and collect it for personal use in a console or browser repository, or send it to Google Analytics”.
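A minimal sketch of that do-it-yourself option, again assuming the web-vitals library, with sendBeacon shipping each metric to a collection endpoint; the /analytics path is a hypothetical placeholder:

```ts
// Sketch: collecting field metrics in-page and shipping them to your
// own endpoint (or Google Analytics). `/analytics` is hypothetical.
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/analytics', body)) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```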

Using lab data to optimize

The open source Lighthouse project feeds the lab data; it is built into DevTools and can also be installed as a package that ships with its own command-line interface (CLI). Lighthouse in DevTools can be configured to simulate power and speed reduced or increased relative to the default “upper percentile” profile: in practice, we can simulate variable power and speed, for example if we have the means to provide richer experiences at certain simulated thresholds, implementing a progressive enhancement strategy.
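Outside DevTools, the same calibration can be done programmatically; the sketch below follows the Lighthouse and chrome-launcher pattern from the project's documentation, with throttling values that are purely illustrative.

```ts
// Sketch: running Lighthouse from Node with custom simulated
// throttling instead of the default profile (values are illustrative).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com/', {
  port: chrome.port,
  onlyCategories: ['performance'],
  throttlingMethod: 'simulate',
  throttling: {
    rttMs: 150,               // simulated round-trip time
    throughputKbps: 1600,     // simulated bandwidth
    cpuSlowdownMultiplier: 4, // simulate a slower CPU
  },
});
console.log('Performance score:', result?.lhr.categories.performance.score);
await chrome.kill();
```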

Final considerations

Ultimately, we should all be genuinely interested in solving problems and improving the performance factors that affect the page experience, even if we are not developers and do not care about SEO, because these elements “are incredibly important to how our pages and page-rendering applications, including native WebView applications, are experienced by real users in the field”.

Too often, a heavily used app translates into countless hours of frustration and can have a negative impact on profits: for example, Detlef Johnson recounts a personal experience with “the NYTimes native apps on smartphone-sized screens, which load content and scripts slowly even in the best network conditions”. This results in delays in scrolling and tapping even after a main news item has rendered in the viewport; in addition, “it can be really terrible to navigate through continuous layout shifts when ads load late”.

Google has many case studies showing the positive effect on revenue of implementing performance fixes, which lead to more positive experiences. Returning to the example of the well-known newspaper, Johnson concludes that “if it were not for the value of the NYTimes' content, its app might be further derided and see much lower usage than it currently enjoys”.
