Every year we “make thousands of improvements to Google, from improving our ability to understand language to new features that organize information in useful ways”: so opens the in-depth analysis that Mountain View has dedicated to the news that emerged from Search On, an event live-streamed a few days ago to present the latest updates to the complex system of Google Search, increasingly characterized by the use of Artificial Intelligence – AI – to “help people understand the world around them”.
Google BERT now active on all English queries
The first announcement concerns Google BERT: at launch, almost exactly a year ago, it was presented as an algorithm that could affect almost 10 percent of all queries, but in reality “BERT is now used in almost all queries in English, helping you achieve superior quality results for your questions”.
There is not much to do to optimize a site’s content for Google BERT, because the goal remains to offer useful, quality information to users: what changes and progresses is Google’s ability to understand this content and how it relates to the user’s query.
A new algorithm to reduce misspellings
A dedicated post by Prabhakar Raghavan (the company’s Senior Vice President, Search & Assistant, Geo, Ads, Commerce, Payments & NBU) provides some additional detail on the Search improvements, geared to increase “our ability to understand your query and classify the relevant results for that query”.
The new spelling-understanding and correction algorithm goes in this direction as well: it helps Google understand the context of misspelled words, so as to respond with the right results “in less than 3 milliseconds”, providing targeted suggestions and remarkably improving the quality of the experience.
According to Raghavan, “one in 10 queries every day is spelled incorrectly”, but thanks to this new system, which uses a deep neural network, Google has significantly improved “the ability to decipher misspellings”, and this single change “makes a greater improvement to spelling than all of our improvements in the last five years”.
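Google’s system relies on a deep neural network trained at scale, but the basic idea of mapping a misspelled word to its closest known neighbor can be pictured with a minimal sketch, using Python’s standard difflib and a toy vocabulary (both are assumptions for illustration, not Google’s actual approach):

```python
import difflib

# Toy vocabulary standing in for a real dictionary of known terms
VOCAB = ["dinner", "restaurant", "recipes", "chicken", "breast"]

def correct(word, vocab=VOCAB):
    # Return the closest dictionary word, or the word itself if nothing
    # is similar enough (cutoff=0.6 is difflib's similarity threshold)
    matches = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=0.6)
    return matches[0] if matches else word

query = "dinnner recipies with chikn brest"
corrected = " ".join(correct(w) for w in query.split())
print(corrected)  # → dinner recipes with chicken breast
```

A real system would also weigh the surrounding words, which is exactly the context-awareness the new algorithm adds.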
Google now ranks individual passages of text
Another big breakthrough is the ability to understand text and extract the parts that matter for the user’s query: Google is now “able not only to index web pages, but also individual passages of pages”, better interpreting the relevance of specific passages, and not only of the overall page, “to find that specific information, the needle in the haystack, that you are looking for”.
Very specific searches can be very difficult to get right, writes the Googler, “since sometimes the single phrase that answers your question could be buried deep in a web page”. As the image shows (taken from the Google article, like the others on this page), thanks to the new understanding capabilities Google can recognize that a specific passage (right) is much more relevant to a given query than a broader page on that topic (left).
This technology will improve 7 percent of search queries in all languages once it is fully operational across all versions of Google, and it could be a critical evolution for SEO, because it potentially expands the number of pages that compete in the ranking for the same topics (without even being optimized exactly for that keyword).
How passage identification works and what it means for SEO
As far as we know for now – especially via Search Engine Land – the identification and ranking of passages is achieved through a change to the ranking system itself, not to the indexing method: Google will continue to index complete pages, but its systems will also take into account the content and meaning of individual passages when determining what is most relevant to a query, whereas previously they looked essentially at the page as a whole.
In practical terms, where relevance was previously determined mainly by strong page-level signals of specificity, such as the title, the management of paragraph headings may now carry even more weight: they can help Google locate the single section of text that precisely matches the query and the user’s intent, even if the rest of the content covers something else and moves away from that topic.
This process is different from what happens with featured snippets, which are portions of text extracted from a page that Google’s systems have rated as relevant to the query in its entirety, and which thus represent the most relevant passage of content that is relevant in itself.
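Google has not published how its passage systems actually work, but the core idea of scoring sections of a page independently against a query, rather than the page as a whole, can be sketched with a toy example (the blank-line splitting and the term-overlap score below are invented for illustration, not Google’s method):

```python
def split_passages(page_text):
    # Naively split a page into passages at blank lines; a real system
    # might split at headings or other structural boundaries
    return [p.strip() for p in page_text.split("\n\n") if p.strip()]

def passage_score(passage, query):
    # Crude relevance proxy: fraction of query terms present in the passage
    terms = set(query.lower().split())
    words = set(passage.lower().split())
    return len(terms & words) / len(terms)

def best_passage(page_text, query):
    # Rank every passage on its own and return the most relevant one
    return max(split_passages(page_text), key=lambda p: passage_score(p, query))

page = (
    "Ultraviolet light is a form of electromagnetic radiation.\n\n"
    "You can check if your home windows block uv light with a simple test card."
)
print(best_passage(page, "can you tell if home windows are uv"))
```

The point of the sketch is the ranking unit: even if the page as a whole barely mentions the query’s topic, one buried passage can still win.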
Subtopics and secondary topics
The article also announces another interesting novelty, coming by the end of the year: the subdivision of content into subtopics, secondary topics that help users explore a certain interest in more depth.
Through the application of neural networks, the search engine is able to understand all sub-topics related to the main topic and provide “greater diversity of content when searching for a broad theme”.
As you can see in the gif, if we search for exercise equipment at home, “we can now understand relevant secondary topics, such as cheap equipment, premium choices or small space ideas, and show a wider range of content for you on the search results page”.
Understanding of key moments in videos
Using a new approach based on artificial intelligence, Google is now able to “understand the deep semantics of a video and automatically identify key moments”, starting playback from that exact moment even without manual tagging by the video’s creators.
This evolution “allows us to tag those moments in the video, so you can navigate them like the chapters of a book: whether you’re looking for that one step in a recipe tutorial or the winning home run in a reel of highlights, you can easily find those moments”.
Instead of classifying entire videos on the basis of their general topic, Google will be able to analyze videos in detail, assign a tag to each section describing what it is about, and then send users directly to those sections of a video. The technology is already being tested and “we expect that by the end of 2020 it will be used in 10 percent of searches on Google”.
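How this reaches users can be pictured with a small sketch: once a system has tagged each section with a start time, sending the viewer straight there is just a matter of building a deep link with a start-time parameter (the `t` query parameter follows YouTube’s convention for a start offset in seconds; the URL and moment labels below are invented for illustration):

```python
def moment_links(base_url, moments):
    # Build one deep link per tagged key moment; "t" is the start time
    # in seconds (assumes base_url carries no query string of its own)
    return {label: f"{base_url}?t={seconds}" for label, seconds in moments.items()}

links = moment_links(
    "https://example.com/video/cake-tutorial",  # hypothetical video URL
    {"Mix the batter": 95, "Bake the cake": 310},
)
print(links["Bake the cake"])  # → https://example.com/video/cake-tutorial?t=310
```

In practice creators can also describe such moments to Google explicitly via structured data, but the announcement is precisely that Google can now infer them on its own.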
A deeper understanding thanks to data
Since 2018 Google has been working “on the Data Commons Project, an open knowledge database of statistical data launched in collaboration with US Census, Bureau of Labor Statistics, World Bank and many others”, because “sometimes the best search result is a statistic, but often statistics are buried in large data sets and not easily understandable or accessible online”.
And so, the merging of these datasets “was a first step and now we’re making this information more accessible and useful through Google Search”.
Basically, if we ask how many people work in Chicago, the search engine uses “natural language processing to associate your search with a specific set among billions of data points in Data Commons and provide the right statistics in a visual and easy-to-understand format”.
To allow you to easily explore the topic in more depth, there will also be “other relevant data points and context, such as statistics for other cities”.
In practice, this feature is a kind of featured snippet that bypasses the originating web pages and shows the statistics directly in the search result, as an answer to a question, while also offering the opportunity to discover and research the subject in more depth.
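Data Commons itself is a public knowledge graph, and Google queries it through its own language models; purely as an illustration, the matching step can be imagined as mapping query keywords onto a table of (place, statistic) data points (the table, its numbers and the keyword matching below are all invented for this sketch):

```python
# Toy stand-in for a statistical knowledge base (invented figures)
DATA_POINTS = {
    ("chicago", "employed persons"): 1_300_000,
    ("chicago", "population"): 2_700_000,
    ("new york", "employed persons"): 4_100_000,
}

def answer(query):
    # Pick the (place, statistic) key whose words best overlap the query
    q = query.lower()
    best_key, best_hits = None, 0
    for place, stat in DATA_POINTS:
        hits = sum(w in q for w in [place] + stat.split())
        if hits > best_hits:
            best_key, best_hits = (place, stat), hits
    place, stat = best_key
    return f"{stat} in {place.title()}: {DATA_POINTS[best_key]:,}"

print(answer("how many employed persons are there in Chicago?"))
```

The real feature then dresses the matched statistic up visually and surrounds it with related data points, such as the same figure for other cities.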
Access to high quality information during COVID-19
The work on real-time information also continues, to help users “navigate your world and do things more safely and efficiently”. A first concrete example concerns how busy a business is, and its peak hours, to help people maintain distance and avoid overly crowded places.
Moreover, Google has added a new feature to Live View to “help you get essential information about a business before you even enter”.
These changes became necessary in light of the upheaval caused by the Coronavirus, as Google explains in another post: “Google Maps’ busyness algorithms have long been able to identify crowding patterns and peak times for a place, but with social distancing measures in place and businesses adjusting their hours or even temporarily closing due to COVID-19, our historical data were no longer reliable in predicting what the current conditions would be”.
And so, “to make our systems more agile, we started to prioritize data from the most recent four to six weeks to quickly adapt to changing patterns of peak times and real-time crowd information”, with the intention of shortly bringing a similar approach to other features, such as wait times.
Still with the same goal, Google is also “adding COVID-19 safety information to the forefront of business profiles in Google Search and Maps”, which will help you know “if a business requires you to wear a mask, if you have to book in advance, or if the staff is taking additional safety precautions, such as temperature checks”. In addition, thanks to Duplex conversational technology, “local businesses can keep their online information up to date, such as opening hours and store inventory”.
In the last year alone, “Duplex has been used to make more than 3 million updates to businesses such as pharmacies, restaurants and grocery stores, which have been seen over 20 billion times in Maps and Search”.
Helping quality journalism through advanced search
Google also continues its commitment to journalism, made explicit in recent days with the launch of the new Google News Showcase and with the support of Journalist Studio, “our new suite of tools to help journalists carry out their work more efficiently, securely and creatively through technology”.
The latest novelty is called “Pinpoint, a new tool that brings the power of Google Search to journalists”, allowing them to “quickly sift through hundreds of thousands of documents” and automatically identify and organize entities such as the “most frequently mentioned people, organizations and locations”. For Google, “quality journalism often comes from long-term investigative projects, which require time-consuming work sifting through huge collections of documents, images and audio recordings”, which technology can simplify and optimize.
Updates to Google Lens for more visual searches
For many topics “seeing is the key to understanding”, writes Prabhakar Raghavan, and indeed Google has long been pushing on the visual side, with image search as well as video and other tools: the latest arrivals are new Lens and AR features in Google Search, which “help you learn, shop and discover the world in new ways”.
For example, with Lens you can now “get step-by-step support for homework on math, chemistry, biology and physics problems”, a helping hand for those facing the challenges of learning from home. Social distancing has also “radically changed the way we shop, so we’re making it easier to visually shop for what you’re looking for online, whether you’re looking for a sweater or want to take a closer look at a new car but can’t visit a showroom”.
Hum a tune to Google to recognize the song
Another feature debuting now – much lighter in scope – is the ability to hum a tune to the Google app (by tapping the microphone icon or the Search a song button) to finally find out the name of the song stuck in our heads, or one we have just heard.
The search engine’s AI models manage to match the melody to the right song – even if we are out of tune – offering various results in response, with an indication of the match percentage.
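Google has not detailed its melody-matching models, but a classic, much simpler technique gives a feel for why an out-of-tune hum can still find the right song: Parsons code reduces a melody to its contour (up, down, repeat), which survives even when every absolute pitch is wrong (the two-song database below is a toy assumption):

```python
from difflib import SequenceMatcher

def parsons_code(pitches):
    # Parsons code: record only whether each note goes Up, Down or Repeats;
    # the contour is preserved even in an out-of-tune rendition
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(pitches, pitches[1:])
    )

# Toy song database: MIDI-style pitch sequences (illustrative only)
SONGS = {
    "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    "Ode to Joy": [64, 64, 65, 67, 67, 65, 64],
}

def match(hummed_pitches):
    # Compare the hum's contour against each song, best match first
    hum = parsons_code(hummed_pitches)
    scored = {
        name: SequenceMatcher(None, hum, parsons_code(p)).ratio()
        for name, p in SONGS.items()
    }
    return sorted(scored.items(), key=lambda kv: -kv[1])

# A badly out-of-tune hum of Twinkle Twinkle still matches perfectly,
# because its up/down/repeat shape is identical
print(match([58, 58, 66, 66, 70, 70, 65])[0])  # → ('Twinkle Twinkle', 1.0)
```

The similarity ratio also illustrates where a “matching percentage” in the results could come from.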
What makes Google Search different
To conclude this long overview of the evolution of Search, Prabhakar Raghavan reflects on the traits that make Google Search a search engine different and distinct from the others, one that “helps you find exactly what you are looking for”, especially now, because “there has never been more choice in how people access information and we must constantly develop cutting-edge technology to ensure that Google remains the most useful and reliable way to search”.
There are four key elements that form the basis of all the work to improve search and answer trillions of questions each year, and that make Google “useful and reliable for people who come to us every day to find information”.
Understanding all the information in the world
The first front of Google’s work is “the deep understanding of all the information in the world, whether this information is contained in words on web pages, in images or videos, or even in the places and objects around us”.
Thanks to investments in Artificial Intelligence, “we are able to analyze and understand all kinds of information in the world, just as we did by indexing web pages 22 years ago. We are pushing the boundaries of what it means to understand the world, so even before you type a query, we are ready to help you explore new forms of information and insights never before available”.
Information of the highest quality
People rely on Search to get the highest quality information available, and “our commitment to quality has always been what sets Google apart from day one”. Every year we “launch thousands of optimizations to improve Search and rigorously test each of these changes to make sure people find them useful”. “Our ranking factors and policies are applied fairly to all websites, and this has led to widespread access to a variety of information, ideas and points of view”.
A focus on privacy and security
In order to protect people and their data, Google is investing in “first-rate privacy and security: we have guided the industry to protect you when searching with Safe Browsing and Spam Protection”, and “we believe that privacy is a universal right”, so “we are committed to providing every user with the tools they need to be in control”.
Open access for everybody
Last but not least, Google is committed to ensuring open access to all: “our goal is to help the open web thrive, sending more traffic to the open web every year since Google was created”.
As Prabhakar Raghavan reminds us, “Google is free for everyone, accessible on any device, in more than 150 languages around the world and we continue to expand our ability to serve people everywhere”, so that “wherever you are, whatever you’re looking for, whether you’re able to sing it, write it, say it or view it, you can Google it“.