Google’s render budget: what it is and how to improve it

Whenever we think of Google Search, we imagine a smooth, automatic process: Googlebot crawls our pages, analyzes their content, and the results appear in search. Behind this mechanism, however, lies considerable complexity, mainly tied to the management of computational resources: in a nutshell, the more a site relies on advanced technologies such as JavaScript, dynamic CSS, or client-side loaded content, the more resources Google has to invest to correctly render what both users and Googlebot itself will see. This is the context for the concept of render budget, i.e., the rendering time and resources Google allocates to web pages: it directly affects a site’s “searchability,” and understanding what it is and how to optimize it is crucial for anyone who wants their pages indexed properly.

What the render budget is

The expression render budget refers to the limited pool of resources Google allocates to rendering web pages, especially those that make extensive use of JavaScript.

More precisely, the render budget is the amount of resources used to render a page, that is, to ensure that Googlebot can interpret content that is not immediately available in the static HTML. The rendering process is particularly relevant where JavaScript is involved, as it requires an additional layer of processing, and today many modern sites rely on JavaScript frameworks to enrich the user experience.

While crawling an HTML page is relatively quick, when a page’s resources depend on JavaScript or dynamic CSS the process becomes more complex and requires the search engine not only to visit the page, but to “execute” it to fully understand the available content.

This additional step is critical for indexing because Google, in order to “see” everything on a page, must render it properly. If content is not available in the HTML source and rendering does not go cleanly, Google may not find or understand the information, effectively penalizing the site’s ability to appear in search results.

From this we can already sense that managing the render budget takes on critical importance: Google does not have unlimited resources and cannot afford to thoroughly render all pages at the same speed or intensity. The search engine therefore makes a series of choices about which pages should receive the most rendering resources and how much time to devote to each. High-performing pages that are already rendered server-side or that keep their JavaScript to a minimum will receive a larger share of the budget than pages burdened by heavy runtimes or poorly optimized technologies.

The importance of the render budget is related to the page’s ability to be indexed properly. Moreover, poor management of the rendering experience not only impacts the indexing process but also wastes resources, generating pages that may be considered incomplete or even irrelevant at the SEO level.

Render budget and indexing: why it matters

Render budget is a very relevant concept in the current landscape of technical SEO and yet, outside of more specialized circles, it is still not well understood. Let’s try to shed some light on it by explaining not only what it is, but more importantly why it has become essential to know how to manage it in order to ensure a good ranking in search results.

To begin, we need to understand what happens “behind the scenes” when Googlebot crawls and indexes the pages of our site. First of all, Googlebot accesses the HTML code of a web page: during this stage, it simply examines everything already in the source, such as text, links, images, and other static resources. However, the structure of many modern websites is no longer based only on static HTML pages: more and more developers are adopting JavaScript to create dynamic and interactive web pages, offering users personalized content in real time. This introduces a new complexity into indexing.

While a static HTML page can be immediately read by Googlebot, a dynamic page, full of JavaScript and advanced CSS, requires an additional step, namely rendering. Googlebot does not immediately see the JavaScript-generated content, but to gain access to it, it must “render” it by executing the JavaScript code associated with the page. And this is the exact point at which the render budget comes into play.

The rendering node: why it is complex and crucial for SEO

Rendering is particularly important in the SEO context because it directly affects Google’s understanding of the page content. If, for example, the main content of a page is dynamically generated by a JavaScript script, Google will not be able to index that page correctly unless the code is fully executed. In other words, when rendering does not work perfectly, the risk is that Googlebot will “see” only part of the content, ignoring elements that are loaded later by scripts.

The term rendering refers to a very specific operation, namely the processing of a web page so that its content can be correctly displayed to both users and Googlebot. On a technical level, Googlebot does more than just “copy and paste” the HTML code of a page: it must execute scripts, load JavaScript files, and process images, CSS, and other resources to reconstruct the final appearance and interpret the visible content. For pages that make heavy use of JavaScript, this rendering process becomes essential because not all content is immediately present in the HTML file sent by the server.

Over the years, Google has made tremendous progress in rendering JavaScript, evolving its Chrome-based rendering engine. However, the time and resources needed to perform this process are still limited. Pages that take too long to render or that depend on unoptimized JavaScript can suffer delays in indexing or, in the worst case, not be indexed at all. This is one of the reasons why Single-Page Applications (SPAs) often have difficulty achieving good visibility in SERPs if not implemented correctly.

We should not forget that non-rendered content is considered virtually absent in the eyes of search engines, and this can have a devastating impact on a site’s SEO performance. A classic example is the heavy use of JavaScript for internal navigation or link generation: if Google fails to follow these links because they are not rendered in time, an important part of the site may remain inaccessible to search engines, resulting in loss of visibility and organic traffic.

How the render budget works: a practical analogy

We have defined the crawl budget as the limit of resources and time that Google devotes to each site when deciding whether and what to crawl: following the same approach, the render budget is the additional time, or rather the computational resources, that Google invests to fully render dynamic pages.

Exactly like the crawl budget, the render budget is not unlimited. Each website, in direct proportion to its importance and popularity, receives a certain “share” of resources that Googlebot can apply for rendering tasks. This means that the larger and more relevant the site, the higher its priority and thus the time or resources devoted to rendering its dynamic content.

Imagine Googlebot as a visitor entering a library. Crawling is the act of walking quickly through the aisles, observing and noting the titles of the books on the shelves; in this way, the visitor can quickly get an idea of the potentially interesting volumes. This is a superficial exploration process, which gives a general idea of the available content, but just as book titles do not tell the whole story of a book, the same happens with crawling: Googlebot sees only the information immediately present in the HTML code.

Rendering, on the other hand, is the next operation, in which the visitor chooses to open the books and spends time flipping through the pages to read the full details. Some volumes have all their information immediately available (like static HTML pages on a site), while others hide their content deeper or present it in more complex form, requiring extra effort and additional operations, just as Googlebot must do when executing JavaScript.

In this context, the render budget represents the computational resources Google devotes to enabling its “visitor” not just to stop at the titles, but to open and read the dynamic pages of a site in depth. The more complex the content, the more resources Googlebot will have to use to fully process and index everything it contains.

What are the concrete implications of the render budget

The render budget becomes relevant when the content of a web page is not immediately available in the HTML source, but is instead generated and displayed dynamically only after the associated JavaScript code has been executed. A multitude of modern websites work this way, particularly through frameworks such as React or Angular that transform the way content is generated and delivered to users.

Think of the aforementioned Single-Page Applications, which are web pages or applications that load only a single document and then dynamically update the content as the user interacts with the page. In these situations, when a user first accesses the site, they receive only a short portion of “skeletal” HTML, with other parts of the page (such as text, images, or buttons) being added dynamically through JavaScript. Without properly executing that code, Googlebot would see only a blank or partially complete page, losing a lot of crucial information.
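
To make the pattern concrete, here is a minimal sketch of the client-side rendering flow just described; the /api/article endpoint, the element ID, and the response shape are all hypothetical, but the mechanism is the one most SPAs use: the server ships skeletal HTML, and the visible content appears only after the script runs.

```typescript
// Hypothetical client-side rendering: the server ships only an empty
// <div id="app"></div>; everything the user (and Googlebot) actually
// reads is injected after this fetch resolves.
async function renderArticle(): Promise<void> {
  const response = await fetch("/api/article?id=42"); // hypothetical endpoint
  const article: { title: string; body: string } = await response.json();

  const app = document.getElementById("app");
  if (!app) return;

  // Until this line runs, the page is effectively blank for any crawler
  // that does not execute JavaScript or gives up before rendering completes.
  app.innerHTML = `<h1>${article.title}</h1><p>${article.body}</p>`;
}

renderArticle();
```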

This mechanism poses a number of challenges. Not every page on a complex site will receive a complete rendering in real time. In fact, Google has to manage millions of pages that make use of dynamic rendering, and no matter how large its infrastructure is, it cannot render everything all the time and for everyone. As a result, pages that are less important or that require a lot of computational resources to render may experience delays in indexing, or even may not be rendered at all.

One of the most common problems associated with poor optimization of the render budget is just that: unrendered or partially rendered content. If the core content of a page depends on JavaScript and Google fails to render it within a given period, that content will not be indexed or valued as it should be. This, in turn, can damage the site’s SEO, severely limiting its ability to appear prominently in search results.

Render budget: a deciding factor for JavaScript content

A study by Onely showed that Google can take up to nine times longer to properly render pages built with JavaScript than static HTML pages. The complexity of processing heavy JavaScript files, combined with advanced CSS resources and asynchronous requests, can significantly increase rendering times and consume computational resources that could otherwise be allocated elsewhere.

In practice, the more complex JavaScript resources a page requires to be fully interactive and viewable, the more computational resources Google must allocate to render it.

The origin of the render budget concept

The concept of render budget was never formally introduced by Google in official documentation, but it started to take hold in the SEO landscape around 2019, in the wake of websites’ increasing dependence on advanced technologies such as JavaScript, which raised issues around the proper indexing of dynamic content.

Specifically, it was Kazushi Nagayama, a former Webmaster Trends Analyst at Google, who first used the term in his article “Render Budget, or: How I Stopped Worrying and Learned to Render Server-Side”, published in August 2019.

Nagayama noted that despite the search engine’s progress in handling JavaScript, the increased use of Single-Page Applications (SPAs) and modern frameworks such as React or Angular put Google under pressure in terms of the computational resources required to render entire pages. With this in mind, he highlighted how Google’s ability to actually see the contents of a page (especially in large web projects producing thousands of URLs) was limited by the resources available. To make such processing sustainable, Google needed to prioritize and balance its resources, and so the idea of a dedicated budget for rendering took hold.

The opening to JavaScript rendering

Nagayama’s article started from his own first-hand experience: he recalled that when he was still working for the Mountain View company, in 2014, he was tasked with breaking the news on the official blog that Google Search would start rendering web pages that used JavaScript. This meant that many pages that were not previously indexable began to appear in search results, greatly improving the overall searchability of the web.

At the time, the team began by rendering a small part of the Index, gradually increased coverage, and eventually managed to deprecate the old AJAX crawling scheme. Although the work done by Google’s large team of engineers made it possible to take giant strides in rendering web pages that run JavaScript (an achievement of which the Japanese developer says he is “personally very proud”), some caveats remain, and the first is rather important and drastic: as of 2019, and to some extent still today, Google does not index all JavaScript pages that rely on client-side rendering.

It follows, then, that no one can use JavaScript to create websites that rely entirely on client-side rendering and expect Google to index all their pages. And, therefore, you should know that full client-side rendering in single-page applications can still hinder the searchability of a site, especially for large sites.

How Google’s render budget works

Thanks to Nagayama’s post, the metaphor of a “budget” parallel to the crawl budget then began to emerge, fueled by studies and experiments showing how much more time Google needed to properly render a JavaScript page than a plain HTML page.

It then became apparent that, just as with the crawl budget, a site’s ability to be properly indexed was reduced when its dynamic resources were not optimized to render efficiently. This eventually led SEO specialists to put the concept of render budget at the center of their strategies, as a limited resource that can have a tangible impact on sites’ organic visibility.

Nagayama’s post added, for the first time, further detail to the work of Martin Splitt, a developer advocate at Google, who on various occasions had called for using JavaScript responsibly and optimizing on-page aspects, with particular reference to site performance, in order to deliver content to users more quickly and, at the same time, simplify Googlebot’s crawls.

Concretely, the starting point was a diagram made by Martin Splitt himself to explain how Google’s rendering processes are organized: rendering is designed as a step that precedes actual indexing, as the image below also shows.

[Image: How Google’s rendering works]

This is the focal point: only if Google judges that a page deserves to be rendered does it add it to the Render Queue, where it waits for the Renderer to process it and return the rendered HTML. Implicitly, then, Google must judge the importance of a page’s content before actually seeing what that content is: even before rendering the page, it has to hypothesize what added value rendering that content would bring to the Index.

And this is where, according to Nagayama, there is a close structural similarity between the rendering problem and crawl management: in the crawling phase, too, Google must make a structured assumption about the importance of a page it has discovered on the web before actually completing the crawl.

Google’s problems with managing crawling and rendering

In crawling, the biggest bottleneck is usually the server resources on the side of the crawled websites: Google needs to make good predictions about how much load the hosting can tolerate and decide how fast to crawl the site, striking a good balance between user needs and refresh rate. This is a very sophisticated system that large teams of crawl engineers have built, although it receives little third-party attention.

In rendering, on the other hand, the biggest bottleneck is Google’s own server resources. Of course, there are additional resources, such as JavaScript and JSON files, that Google needs to crawl, which adds to the “crawl budget.” However, while some of those resources can be cached to minimize crawling, pages that need rendering generally have to be processed from scratch each time. And even Google cannot freely spend its computing resources to keep an index of fully updated, fully rendered pages from the entire Web.

How Google chooses which pages to render: popularity and quantity of pages

Google must then decide which pages to render, when and how much. And just like the crawl budget, it uses signals to calculate priorities: in this case, the most important are the popularity of the site and the number of URLs that need rendering.

If a site is popular and needs a lot of rendering, then it gets more resources; if there is no need for rendering, no resources are allocated. If there is a website that needs a lot of rendering but is not very popular, Google will allocate some of the resources to render some of the pages deemed important within the website.

Nagayama also provides a practical example, applying this reasoning to a scenario in which “you have created a new single-page application that performs full client-side rendering”: the server returns an HTML template and sends the content separately in JSON files, he continues; how much render budget will Google allocate?

Nagayama explains that in many cases such a site does not even have crawlable URLs (you have to give permanent URLs to the states you want crawled!), but even when this is done well, it is quite difficult for Google to understand how important a new site is to users. Most likely, Google will grant budget slowly, testing the waters as it tries to gather signals about the site. As a result, not all URLs will be indexed on the first pass, and it may take a long time before they are fully indexed.
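
As a side note on “permanent URLs for the states you want crawled”, here is a sketch of what this can look like with the standard browser History API; the route names and render callback are made up for illustration.

```typescript
// Give each application state a real, crawlable URL instead of hiding it
// behind in-memory state changes or #! fragments.
function navigateTo(path: string, render: (path: string) => void): void {
  // pushState assigns the state a permanent URL that can be linked,
  // shared, and discovered by Googlebot.
  history.pushState({ path }, "", path);
  render(path);
}

// The server must also answer these URLs directly (deep links); otherwise
// the "permanent" URL only works for users already inside the app.
window.addEventListener("popstate", (event) => {
  const path = (event.state as { path?: string } | null)?.path ?? "/";
  console.log(`Re-rendering state for ${path}`);
});

// Example: moving to a product page creates /products/42 as a stable URL.
navigateTo("/products/42", (p) => console.log(`Rendering ${p}`));
```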

Priority signals to estimate render budget

Not all pages have the same weight or relevance in Google’s eyes, so the search engine must make thoughtful decisions about where to allocate additional rendering resources to process complex or large pages. We can therefore assume that, in determining when and how to allocate rendering resources for a given site, Google relies on a number of prioritization signals that help optimize the distribution of the render budget: among the main ones are the popularity of the site, the performance of its resources, and the amount of JavaScript running.

A first criterion guiding render budget prioritization is, as mentioned, the popularity of the site. Large, prominent sites with significant traffic flow or a strong presence in the online landscape tend to receive more attention and resources from Google. The logic is simple: a more popular site is more likely to have content that is relevant to users and, consequently, deserves more resources to ensure that all of its content is properly indexed. A less well-known site, on the other hand, might receive fewer resources, resulting in slower rendering times or a more careful selection of which sections of the site deserve to be processed first.

Other factors that influence Google’s decision include the performance quality of the site and its resources. An optimized site, which loads quickly and minimizes HTTP requests and heavy JavaScript files, will require fewer resources to process, which may cause Google to devote more render budget to direct examination of all its pages. Conversely, a site with excessive loads of JavaScript, slow loading times, or complex content to execute may see a smaller budget allocated. This does not mean that less optimized sites will not be rendered and indexed, but the whole process will be slower and less efficient.

Google also adopts a strategy based on content hierarchy and freshness. When a new page is published, the crawl budget and render budget overlap: Googlebot will visit the page, identify whether or not it requires rendering, and decide whether to devote immediate time and resources to performing it. However, if the page is not considered a priority (e.g., if it shows few traffic signals or no relevant content updates), rendering may occur later or be partial. Pages that are frequently updated or carry obvious freshness signals, on the other hand, will receive more attentive treatment, and thus a larger render budget investment.

Tips for managing a site’s render budget

Nagayama also presents a thought-provoking case, namely: “My site has been in the index for a long time and Google knows how popular it is: does that mean it’s okay to do a complete revamp of the site to render it client-side?” According to the developer, no: if you have been returning HTML that doesn’t need much rendering and then suddenly change all your URLs to render client-side, Google has to crawl and render all the URLs again. In this configuration both the crawl budget and the render budget take a hit, and as a result it could take months, even more than a year, to get all your URLs back into the index after the revamp.

In conclusion, this article gives us a number of very useful pointers to the best management of site rendering speed for Google. According to Nagayama, client-side rendering in large-scale websites is still unrealistic if you want URLs to be indexed efficiently.

For him, the ideal approach is to serve simple HTML that does not require any JavaScript execution. Rendering is an extra step that Google supports, but it means forcing Google to put extra effort into understanding the content, a waste of time and resources that can slow down the indexing process. Alternatively, if you implement dynamic rendering, returning server-rendered content to Googlebot while continuing to serve client-side rendering to users, it will be easier for Google to process the site, but this also means committing your own servers to do the work for Google.
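
To illustrate the dynamic rendering setup Nagayama describes, here is a minimal sketch of a user-agent switch on a Node.js server with Express; the renderWithHeadlessBrowser helper is a stub standing in for whatever prerendering service or cache a real deployment would use.

```typescript
import express from "express";

const app = express();

// Stub for illustration: a real setup would render the URL in a headless
// browser (or fetch a cached snapshot) and return the final HTML.
async function renderWithHeadlessBrowser(url: string): Promise<string> {
  return `<html><body><!-- fully rendered HTML for ${url} --></body></html>`;
}

const BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

app.get("*", async (req, res) => {
  if (BOT_PATTERN.test(req.headers["user-agent"] ?? "")) {
    // Crawlers receive fully rendered HTML, so Google spends no render
    // budget executing our JavaScript.
    res.send(await renderWithHeadlessBrowser(req.originalUrl));
  } else {
    // Regular users get the normal client-side rendered application.
    res.sendFile("index.html", { root: "./dist" });
  }
});

app.listen(3000);
```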

The final consideration, then, is that simple, static HTML continues to be the fastest and most accurate solution for those who want their site to be indexed quickly and at scale. Client-side rendering can hinder website performance in SERPs, especially for news sites that need to be indexed quickly or for large-scale web platforms that produce hundreds of thousands of new URLs every day.

Google has worked hard to recognize pages that use JavaScript, and this effort puts it years ahead of other search engines in understanding today’s web; the system works well up to medium-sized websites.

It will certainly evolve further, but it must be recognized that the current system is not perfect, just like any other engineered system. Where Google falls short, webmasters must step in and help from the other side: if your service produces hundreds of thousands of URLs every day, Nagayama says, then you probably need to pay attention to these details and proceed with caution.

This engagement of all participants in the web ecosystem can help create a world where information is delivered quickly to the users who need it: “Let’s keep building a better web, for a better world,” Kazushi Nagayama writes in closing.

Eliminating waste: how to optimize your render budget to improve indexing

Given the sensitivity of the topic, which involves a very specific aspect of technical SEO, it is useful to look a little more analytically at the best practices that can be adopted to minimize wasted resources, and thus to better manage the render budget and ensure that Google properly indexes all the pages of a site.

One of the most effective steps is, of course, to reduce the amount and complexity of JavaScript and CSS code, so as to make it easier for Googlebot to view and understand the key content of the page without engaging in overly heavy computational processes.

A particularly useful solution, especially for dynamic sites, is prerendering, a technique for generating a static version of the page, ready to be served to Googlebot, greatly simplifying and speeding up the rendering process. In this way, the search engine gets a complete version of the page right away, while the JavaScript is executed only on the user side.
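
A build-time prerendering sketch, assuming Puppeteer is available and the site runs locally during the build; the route list is hypothetical, and real projects often delegate this to framework tooling or a dedicated service, but the principle is the same: render once, then serve the resulting static HTML.

```typescript
import puppeteer from "puppeteer";
import { writeFile } from "node:fs/promises";

// Hypothetical list of routes to snapshot at build time.
const routes = ["/", "/products", "/about"];

async function prerender(baseUrl: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  for (const route of routes) {
    // Wait until network activity settles so JS-injected content is present.
    await page.goto(`${baseUrl}${route}`, { waitUntil: "networkidle0" });
    const html = await page.content(); // serialized post-render DOM
    const name = route === "/" ? "index" : route.slice(1).replaceAll("/", "_");
    await writeFile(`dist/prerendered/${name}.html`, html);
  }

  await browser.close();
}

prerender("http://localhost:3000");
```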

Techniques such as dynamic resource caching also play an important role: resources such as repetitive JavaScript files and parts of the CSS can be cached so that Google does not have to download and process them repeatedly, thus lightening the rendering load.
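
For example, long-lived cache headers on fingerprinted JavaScript and CSS bundles let both browsers and Google’s fetching infrastructure reuse them instead of re-downloading them on every visit; a minimal Express sketch, assuming versioned file names such as app.3f2a1c.js.

```typescript
import express from "express";

const app = express();

// Fingerprinted bundles can be cached aggressively: a new deploy changes
// the file name, so a stale copy is never served.
app.use(
  "/static",
  express.static("dist/static", {
    immutable: true, // adds the immutable directive to Cache-Control
    maxAge: "365d",  // Cache-Control max-age of one year
  })
);

app.listen(3000);
```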

How to measure and improve the render budget: the concept of render ratio

To understand whether Google is effectively using the render budget on our site, we need to start with a simple question: can the search engine render the most relevant content correctly? The concept of render ratio, which is also discussed in this in-depth Prerender article, helps us give a precise answer to this question.

Render ratio is a metric that tells us how much of our site Google actually renders compared with what it simply crawls as HTML. Although Google provides no direct tool that measures this ratio explicitly, it is still possible to infer the render ratio by comparing the number of indexed pages whose content appears only after JavaScript processing with the number of pages indexed on the basis of content immediately present in the HTML code.

To measure the render ratio, we can:

  1. Identify an “HTML phrase,” which is a phrase that appears on all pages directly in the static HTML code.
  2. Identify a “rendered phrase,” i.e., a phrase that appears only once the JavaScript is rendered.

Next, using an advanced search command such as site:yourdomain.com “HTML phrase” we can check how many pages have been indexed with the immediately available HTML content. The same process is repeated for the rendered phrase (site:yourdomain.com “rendered phrase”), allowing us to compare the number of indexed HTML pages with the number of rendered pages. By dividing the number of rendered results by the number of indexed HTML pages, we obtain our render ratio, i.e., the ratio of rendered content to purely crawled content.
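
In code form the calculation is trivial; the counts below are made-up numbers standing in for the two site: result totals.

```typescript
// Render ratio = pages indexed via JS-rendered content
//              / pages indexed via static HTML content.
function renderRatio(renderedCount: number, htmlCount: number): number {
  if (htmlCount === 0) throw new Error("No indexed HTML pages found");
  return renderedCount / htmlCount;
}

// Hypothetical totals from the two site: searches described above:
// 4,200 pages found with the "HTML phrase", 2,940 with the "rendered phrase".
console.log(renderRatio(2940, 4200)); // 0.7 → Google renders ~70% of the pages
```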

A low render ratio means that Google fails to properly process content loaded via JavaScript, which can hurt the site’s visibility in search results. If Google does not render the content, it will not be considered relevant for indexing, regardless of whether users can see it correctly in the browser. The consequence is that important pages that rely centrally on JavaScript for their content may stay off Google’s radar.

Google’s words on render budget and rendering

Although Google does not technically use the term “render budget” in its official documentation, some of the company’s leading public voices, such as Martin Splitt and John Mueller, have often addressed the issue of JavaScript rendering and the impact this technology can have on a site’s indexing.

In particular, Google clarified that rendering complex pages, especially those that use JavaScript to generate content, requires significant computational resources, which are distributed gradually, according to priorities Google assigns based on site popularity, content quality, and code efficiency.

In various talks, Martin Splitt explained that while there is no real tracking of the monetary cost of rendering, Googlebot still manages the required resources carefully. JavaScript, which has become a foundational element of many websites, slows down indexing: this is not a problem related only to the complexity of the code, but rather to the extra time Google needs to understand what exactly a page that makes extensive use of dynamic resources contains. The bottom line is that if a page is not rendered correctly, it will not be indexed and, as a result, will not appear in search results.

John Mueller has repeatedly stressed the importance of not overloading Googlebot with unnecessary JavaScript, recommending Server-Side Rendering (SSR) or prerendering techniques as ways to reduce the amount of computation Google has to do to make content visible. Mueller also explained that handling rendering correctly is not just a matter of avoiding technical errors: a poor implementation can also dissipate crawl budget on pages that end up providing incomplete or duplicate content.

Although Google is constantly working to improve its ability to render dynamic content, rendering remains a significant challenge. For small and medium-sized sites, Google can currently handle JavaScript efficiently, but for large sites, the use of caching and static rendering resources continues to be highly recommended.

JavaScript rendering issues: timing and resources

JavaScript has brought incredible opportunities for interactivity and customization to the modern Web, but it has also introduced significant challenges to the search engine indexing process, chief among them longer rendering times and intensive use of computational resources. When Googlebot visits a page, the crawling process is relatively fast, unless the content is contained in JavaScript: in that case, Googlebot has to “execute” the JavaScript, loading and processing all resources related to the page, which requires the additional allocation of resources known as the render budget.

One of the problematic aspects of JavaScript rendering is the extra time that elapses between crawling and rendering. As mentioned earlier, studies have shown that Google can take up to nine times longer to render JavaScript-based pages than pages built on plain HTML. This delay is due to the fact that Google puts pages into a rendering queue, where they wait to be processed depending on the resources available on the Googlebot side. The more complex and heavy the JavaScript to execute, the longer it takes to complete rendering, and all of this can delay or slow down the indexing of entire sections of the site.

Adding to the problem, every JavaScript resource that Google has to load (.js files, frameworks, asynchronous scripts) eats into a site’s render budget. The more files there are to process and the heavier the page is in terms of HTTP requests, the more resources rendering will require. This can lead Google to fail to properly render crucial parts of the page, such as text or internal links, negatively affecting indexing and, inevitably, organic traffic. When content or links are inserted via JavaScript and not executed correctly, Googlebot may fail to discover or interpret them, leaving large areas of the site uncrawled or unindexed.

The ability to minimize rendering times and optimize resource consumption thus comes to play a crucial role in SEO. Implementing technologies such as Server-Side Rendering (SSR) or dynamic rendering can dramatically reduce the time it takes to complete rendering, allowing Google to access content faster, without having to wait for heavy client-side processing to complete. Employing tricks such as caching repetitive scripts and lazy loading of less critical resources can also improve the efficiency of the indexing process. Such practices also help avoid wasted crawl budget, another element affected when Googlebot fails to crawl or render a page efficiently.
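
As one example of lazy loading a less critical resource, a hypothetical widget can be loaded only when its container scrolls into view, so neither users nor Googlebot pay its JavaScript cost up front; the ./comments-widget module path is made up for illustration.

```typescript
// Load a non-critical module only when its container becomes visible,
// instead of shipping and executing it with the initial page load.
const target = document.getElementById("comments");

if (target) {
  const observer = new IntersectionObserver(async (entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      observer.disconnect();
      // Dynamic import: the bundler splits this into a separate chunk
      // fetched on demand. The module path is hypothetical.
      const { mountComments } = await import("./comments-widget");
      mountComments(target);
    }
  });
  observer.observe(target);
}
```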
