How will Google change with MUM? Applications in Search and the impact on SEO
It is the latest milestone of Artificial Intelligence applied to Search, and in its first tests it has already provided remarkable support in improving the quality of results for vaccine-related queries: attention on Google MUM keeps growing, and it could not be otherwise given the almost science-fiction capabilities this system promises. Now Pandu Nayak has provided clearer guidance on how Google intends to apply MUM, on the planned roadmap, and on how the company is working to ensure the technology is applied responsibly, because it could radically change the way users interact with the world’s most used search engine.
Google MUM, the roadmap to its application
These topics were the focus of a conversation between the VP of Google Search and Search Engine Land about the near future of the search engine with MUM, Big G’s latest advance in language understanding applied to Search.
For over two decades, search engines have worked in practically the same way: the user enters a text (or voice) query and the machine returns a mix of organic links, rich results and ads. Of course, in recent years they have improved their ability to determine intent, provide relevant results and incorporate different verticals (such as images, video or local search), but the premise has remained the same, at least so far.
With more recent advances, such as BERT, search engines have indeed improved their language processing capabilities, allowing them to better understand queries and return more relevant results. Now Google has taken a further step forward, presenting and testing the Multitask Unified Model (MUM), a technology that is a thousand times more powerful than BERT and combines language comprehension with multitasking and multimodal input capabilities.
A milestone in language understanding
From the beginning, Google has compared MUM to BERT, so it is easy to describe the new model as a more advanced version of its predecessor: both are based on transformer technology, and MUM incorporates BERT’s language comprehension skills, but it is built on a different architecture (the T5 architecture) and can do much more.
As Nayak reminds us, MUM “learns simultaneously in 75 languages: this is good, because it allows us to generalize from data-rich languages to languages with a lack of data”. This also means that MUM’s applications can be transferred more easily across languages, helping to strengthen Google Search in those markets.
MUM is not limited to text: multimodality and multitasking
The differences do not stop there: another distinctive element is MUM’s multimodality, meaning its capabilities are not limited to text but extend to video and images as input. In practice, the new technology can understand the content of images and the intent behind each request quickly and precisely.
In addition, it is “inherently multitasking”, as Nayak points out in the article: the natural language tasks it can handle include (but are not limited to) classifying pages for a particular query, reviewing documents and extracting information.
MUM can manage multiple tasks on two fronts: in training and in use. For the Search VP, being “trained on multiple tasks, it learns these concepts in a more robust and general way: that is, it applies them to more tasks rather than just a single task, which would make them fragile when applied to a different task”.
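To make the “text-to-text, multitask” idea concrete, here is a minimal sketch in Python. MUM itself is not publicly available, so the example uses the open-source Hugging Face transformers library and the public t5-small checkpoint, which shares the T5 architecture mentioned above; the prompts are illustrative only.

```python
# Minimal sketch: one T5-style text-to-text model serving several tasks,
# selected purely by the task prefix in the prompt. This is the public
# t5-small checkpoint, not MUM, used only to illustrate the architecture.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

prompts = [
    # Translation task
    "translate English to German: How do I prepare for a fall hike?",
    # Summarization task, handled by the very same weights
    "summarize: MUM combines language understanding with multitasking "
    "and multimodal inputs, and is trained across 75 languages at once.",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is the single set of weights behind both prompts: training on many tasks at once is what, in Nayak’s words, makes the learned concepts “more robust and general”.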
Google and MUM, how and when it will be used
The insights provided about MUM’s actual implementation are also very interesting, because Nayak reveals that Google does not envision it as a single feature or a one-off launch in Search: rather, it should be “a platform on which different teams can build different use cases”, anticipating that “many teams within Search will use MUM to improve any activity they are doing to help the system”, as in the COVID vaccine example above.
In fact, in the short term MUM will focus largely on transferring knowledge between languages: once again, the work on vaccines exemplifies this capability, because the technology identified 800 variants of vaccine names in 50 languages in a matter of seconds. In this regard, the article notes that Google already had a subset of COVID vaccine names that would trigger the COVID vaccine experience in search results, but MUM allowed it to obtain a much broader set of vaccine names and thus trigger those results in many more situations, when appropriate.
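Google has not published how MUM matches name variants across languages. As a purely illustrative sketch of the underlying idea, the snippet below uses the open sentence-transformers library and a public multilingual model to show how phrases in different languages referring to the same vaccine land close together in a shared embedding space; the example names are assumptions, not Google’s data.

```python
# Hedged illustration of cross-language matching: embed name variants from
# different languages in one vector space and compare them. This uses a
# public multilingual model, not MUM; the vaccine names are examples only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

variants = [
    "Pfizer-BioNTech COVID-19 vaccine",  # English
    "vaccino Pfizer-BioNTech",           # Italian
    "vacuna de Pfizer-BioNTech",         # Spanish
    "Comirnaty",                         # trade name
]
embeddings = model.encode(variants)

# Pairwise cosine similarities: high scores suggest two phrases name the
# same thing even though they are written in different languages.
print(util.cos_sim(embeddings, embeddings))
```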
Medium-term projects on the technology
In the medium term, however, the work focuses on multimodality, which “will be like a new search capability that we didn’t have before,” Nayak said, expanding on the image search example that Prabhakar Raghavan first used during Google I/O.
In his vision of MUM in Search, Nayak describes an interface where users can upload images and ask text questions about them: instead of a simple answer that could lead to a zero-click search, Nayak imagines Google returning relevant results that bridge the gap between the uploaded image and the user’s query.
The first tests in this regard leave room for optimism, but in any case the Big G VP stresses that the exact implementation of these goals is set “in the medium term”, with uncertain timing.
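Google has not revealed how this interface works internally. As a purely illustrative sketch of the image-plus-text idea, the snippet below uses the open CLIP model, which scores how well text queries match an image in a shared embedding space; the file name and queries are hypothetical.

```python
# Hedged sketch of multimodal matching: scoring text queries against an
# image in a shared embedding space. This is the public CLIP model, not
# MUM's implementation; hiking_boots.jpg is a hypothetical uploaded photo.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("hiking_boots.jpg")
queries = [
    "boots suitable for hiking Mount Fuji",
    "formal dress shoes",
]

inputs = processor(text=queries, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds one similarity score per text query; softmax turns
# them into relative probabilities of which query best matches the photo.
print(outputs.logits_per_image.softmax(dim=1))
```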
Long-term plans: creating more solid experiences for users
MUM’s ability to understand language at a much deeper level also underpins the company’s long-term plans, which aim to support “a much deeper understanding of information and convert that deeper understanding of information into stronger experiences for our users”.
To date, search engines still struggle to surface relevant results for certain specific and complex queries, as in the example provided at Google I/O: if a hiking enthusiast searches “I climbed Mount Adams and I want to climb Mount Fuji next fall, what should I do differently to prepare?”, at the moment Google may not be able to deliver useful results. The user is therefore forced to split the question into individual queries and check the results, assembling the answer on their own. In general, according to estimates provided by the company, people run on average eight searches to complete complex tasks, but MUM will help considerably in this regard.
Specifically, says Nayak, MUM will “take that search query, which represents a complex information need, and divide it into a set of individual information needs”, thus helping Google to answer with results related to fitness training, the terrain of Mount Fuji, differences in climate and so on.
This is “what you do in your head when you think of individual questions, and we think that MUM can help us generate queries like these,” adds the Search VP, because it will be able to put together results for these searches and perhaps even “insert text that connects all this to the original, more complex question you had”. This means organizing information and understanding (and showing) what the connection is, so that you will be able to “go in and read the article on the best equipment for Mount Fuji, or the hiking tips, or something like that, in this richer way”.
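As a toy illustration of this “decompose and reassemble” idea, and emphatically not Google’s actual method, the sketch below splits the complex Mount Fuji query into hand-written sub-queries and groups results under each; search() is a hypothetical stub standing in for a real search backend.

```python
# Toy sketch of "query fan-out": turn one complex information need into
# several sub-queries, fetch results for each, and group them so connecting
# text can be shown between the parts. Not Google's implementation.
from typing import Dict, List

def decompose(complex_query: str) -> List[str]:
    # MUM would learn this decomposition; here it is hard-coded for the
    # Mount Fuji example only.
    return [
        "fitness training for Mount Fuji compared with Mount Adams",
        "Mount Fuji terrain and trail conditions in fall",
        "Mount Fuji weather in fall",
        "recommended gear for hiking Mount Fuji",
    ]

def search(sub_query: str) -> List[str]:
    # Hypothetical stub standing in for a real search backend.
    return [f"top result for: {sub_query}"]

def answer_complex_need(complex_query: str) -> Dict[str, List[str]]:
    # Group results under the sub-question they answer, rather than
    # collapsing everything into one short direct answer.
    return {sub: search(sub) for sub in decompose(complex_query)}

results = answer_complex_need(
    "I climbed Mount Adams and want to climb Mount Fuji next fall; "
    "what should I do differently to prepare?"
)
for sub_question, hits in results.items():
    print(sub_question, "->", hits)
```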
One reason this is a long-term goal is that it requires “a rethink of how people turn to Google with complex needs rather than with individual questions”. In addition, Google itself must convert the complex need expressed by a user’s search into a set of queries, and the results of those queries must be properly organized.
But MUM will not just be a question-and-answer system
We should not make the mistake of thinking of MUM merely as a “question-answering system – that is, you come to Google with a question and we just give you the answer”, because the vision for this technology is much broader, and “a question-answering system for these complex needs that people have is simply not useful”.
Thus, MUM should not further increase zero-click searches, which by nature involve “the simplest and most objective searches that are often resolved directly on the search results page”, but rather drive traffic to relevant content on the open Web that provides answers to complex needs such as “hiking tips, finding a school for your child or figuring out which neighborhood to live in”, which cannot be satisfied with a short, direct answer.
Mitigating the costs and risks of MUM’s development
However, as never before in the development of search technologies and models, the potential ecological impact and the demand for large datasets must also be assessed: Google says it is aware of these considerations and is taking precautions to apply MUM responsibly.
First, there is an awareness that these models can learn and perpetuate biases present in the training data if undesirable distortions of any kind slip in, and that is why the company is monitoring MUM’s training data.
“We do not train MUM on the entire web corpus, but on a high-quality subset of the web corpus, so that the undesirable biases in low-quality content, or in adult and explicit content, do not even have an opportunity to be learned because they are never introduced to MUM”, Nayak reveals, while acknowledging that even high-quality content may contain biases, which the company’s evaluation process tries to filter out.
The process has already been tested with BERT, which in the months before launch was subjected to “an unprecedented amount of evaluations just to make sure there were no worrying patterns”, with measures put in place to mitigate the situations in which such patterns emerged. Similarly, the VP anticipates, “I expect that, before we have a significant launch of MUM in Search, we will do a significant amount of evaluation to avoid any kind of worrying pattern”.
Another issue is that of ecological costs: building large models can be both financially expensive and energy-intensive, and can therefore have a harmful impact on the environment. Google has worked on this too, and a research team recently published a rather comprehensive and interesting paper on the climate impact of various large models built internally or externally, which “stresses that, depending on the particular choice of model, processors and data centers used, the carbon impact can be reduced by up to a thousand times”. Nor should we forget that Google has been carbon-neutral since 2007, and “therefore, whatever type of energy is used, the carbon impact has been mitigated by Google”.
MUM and SEO: few risks for the future of optimization work for Google
In light of this interview, there are some considerations to be made about MUM’s possible impact on SEO: first, Nayak ruled out that this technology will increase the trend toward zero-click searches, because it will not be a “question-answering system”, nor, therefore, will it unfairly prioritize Google’s products over those of competitors (a concern shared by marketers and regulators).
This also means that MUM will not be the death knell of SEO, as various analysts feared when the technology was announced: as John Mueller said just a few weeks ago, optimization work for Google will not become obsolete despite the impact of technology and AI, but will evolve to take positive advantage of the advanced tools available.
Looking further ahead, we can assume that Google will become even more skilled at interpreting content and the language used, making keywords less central and less decisive in ranking a web page well.
This is certainly not a new concept for readers of this blog, already summarized in the “there is no keyword” formula, which gains even more weight in light of these developments in Google’s work.
According to some analysts, there will be no such thing as “optimizing content for MUM”, because there will be no direct way to game the search engine once the algorithm can understand natural language. So, even though keywords will continue to matter because queries will still contain them, we will have to think more and more about intent and definitively stop writing articles for the algorithm, finally writing them only for readers.