Google BERT, still barely visible effects in SERPs

A whole week has now passed since Pandu Nayak's announcement, but so far Google BERT does not seem to have triggered the anticipated (and greatly feared) SERP earthquake: the "biggest leap forward" in the history of the search system, which was expected to affect at least one query out of ten, is not leaving much of a mark in its wake.

Google BERT, promises not (yet) kept?

To sum it up quickly: on October 25th the vice president of Search officially announced the introduction of an innovative artificial neural network technique for understanding human language: Bidirectional Encoder Representations from Transformers, or BERT for short. Applied to the search engine's algorithm, this system automatically encodes written or spoken input in a more natural way and helps Google better understand what really matters in a query.
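To make the idea concrete, here is a minimal sketch of how a publicly released BERT model encodes a query into contextual vectors. It uses the open source Hugging Face transformers library and the bert-base-uncased checkpoint, which are our illustrative choices: this is not Google's production ranking pipeline.

```python
# Illustrative sketch: encoding a search query with a public BERT model.
# Uses the Hugging Face "transformers" library and the bert-base-uncased
# checkpoint -- NOT Google's internal ranking system.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

# Example query taken from Google's own announcement.
query = "2019 brazil traveler to usa need a visa"
inputs = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token; each vector is computed by looking
# at the words both before and after it ("bidirectional"), so even a small
# preposition like "to" is represented in the context of the whole query.
print(outputs.last_hidden_state.shape)  # torch.Size([1, num_tokens, 768])
```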

Pandu Nayak explicitly spoke of "the biggest leap forward in the past five years" for the search engine, with effects on about 10% of English-language search queries in the US. As we were saying, though, analysis of the SERPs does not show a change of that depth in rankings and answers, and that probably depends on the nature of the algorithm.

What is Google BERT useful for

The update mainly helps with longer, conversational queries, and with queries in which prepositions deeply influence the final meaning: before BERT, the algorithm did not really grasp the context of the words in the search string, as we explained in previous examples, while it is now possible to search in a more natural and personal way.
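As a rough illustration of what "understanding context" means here, the sketch below (again using the public bert-base-uncased checkpoint, not Google's production system) compares the vector BERT assigns to the same word in two different sentences. The low cosine similarity shows that a word's representation depends on everything around it, and this same mechanism is what lets the model weigh prepositions correctly.

```python
# Illustrative sketch: the same word gets a different vector depending on
# its context. Public bert-base-uncased checkpoint, not Google's system.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector BERT assigns to `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return hidden[tokens.index(word)]

# Classic example: "bank" as a riverside versus "bank" as an institution.
v_river = word_vector("he sat on the bank of the river", "bank")
v_money = word_vector("she opened an account at the bank", "bank")

sim = torch.cosine_similarity(v_river, v_money, dim=0).item()
print(f"cosine similarity: {sim:.2f}")  # well below 1.0: context changed the vector
```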

Trackers cannot monitor long tails

Given this preamble, it seems plausible that we cannot (yet?) see the practical effects of the update in the SERPs because current tracking tools cannot analyze long tail keywords with great precision, and those longer, more diluted queries are exactly the ones the new algorithm targets.

A similar position has been taken by Moz's Pete Meyers, who admitted that the MozCast tool "tracks shorter terms, the so-called head terms, and not the kind of phrases that would likely require BERT's natural language processing (NLP)". The evaluations from RankRanger, another American tool, are similar: it recorded only slight fluctuations in the SERPs.

No major disruptions like with Panda and Penguin

In any case, whoever was waiting for revolutions like those that followed the launches of Google's Panda and Penguin algorithms (or the more recent broad core updates) was left disappointed: the main search results in the US are, for now, substantially unchanged, with minor fluctuations that fall within the definition of "routine activity".

No interventions needed on sites

There is one more point to dwell on, picking up Barry Schwartz's considerations on Search Engine Land: there is no real on-page optimization work to do because of BERT. The system's job is to help Google better understand users' searches and the search intent hidden behind natural language. This means, though, that we can finally stop thinking of SEO copywriting as writing aimed at machines and at the search engine, and move on to writing well for the real users who will read those contents (and for systems increasingly capable of interpreting words like human beings).

How to write for Google BERT? Think about people

Our final advice on how to write for Google is to keep producing quality content for people, to keep trying to make pages stand out by providing useful info that matches the original intent, and to keep betting on purposeful pages, as Martin Splitt said some time ago about onsite SEO factors. And, possibly, to use SEOZoom and its tools, which are more and more "update-proof".
