For over twenty years, Google has worked to organize the world’s information and make it universally accessible and useful. Basic text search has evolved considerably over time: finding news and information on the web is now far more natural and intuitive, since it is possible to search directly with a smartphone camera or to ask a question aloud to a voice assistant.
During its annual Search On event, the Mountain View company showed how advances in artificial intelligence are making it possible to transform its search products once again. Google is moving far beyond the classic search box to create search experiences that work more like our minds and are as multidimensional as people. Google’s vision is a world where users can find exactly what they are looking for by combining images, sounds, text and voice, just as people do in everyday life. It will be possible to ask questions in fewer words, or none at all, and Google will still understand exactly what users mean. This is its most natural and intuitive concept of search.
Google: here are 3 important new developments
Making visual search more natural
Cameras have been around for hundreds of years and are usually seen as a way to preserve memories or, nowadays, to create content. But a camera is also a powerful way to access information and understand the world around us, to the point that the camera is becoming the next keyboard. That is why Google introduced Lens in 2017, which lets you search for what you see using the camera or an image; people now use Lens to answer 8 billion questions every month. Google is making visual search even more natural with multisearch, a whole new way to search using images and text at the same time, similar to how you might point at something and ask a friend about it.
Google introduced multisearch earlier this year as a beta in the US and, on the occasion of Search On, announced that it will be extended to more than 70 languages in the coming months. There is also the new “multisearch near me”, which lets you take a picture of an unfamiliar object, such as a dish or a plant, and find it in a nearby place, such as a restaurant or garden shop. “Multisearch near me” will roll out in English in the United States this autumn and will arrive in other countries in the future.
Translating the world around us
One of the most powerful aspects of visual understanding is its ability to break down language barriers. Thanks to advances in artificial intelligence, the Mountain View company has gone beyond text translation to translating images. People already use Google to translate text in images more than 1 billion times a month, in more than 100 languages, so they can instantly read shop windows, menus, signs and more.
But it is often the combination of words and context, such as background images, that gives meaning. It is now possible to blend the translated text into the background image thanks to a machine learning technology called Generative Adversarial Networks (GANs). If you point the camera at a magazine in another language, for example, the translated text is overlaid realistically on the underlying images.
Explore the world with immersive visualization
The quest to create more natural and intuitive experiences also extends to exploring the real world. Thanks to advances in computer vision and predictive models, it is now possible to completely reimagine the definition of a map. This means that the 2D map will evolve into a multidimensional view of the real world, which will allow you to experience a place as if you were there.
Just as real-time traffic in navigation made Google Maps much more useful, Google is making another significant change to mapping by bringing helpful information, such as the weather and how crowded a place is, to life with immersive visualization in Google Maps. Immersive visualization does not stop at the facts: it helps users get a feel for a place before they even go there.
Let’s say you want to meet a friend at a restaurant. You can zoom in on the neighborhood and the restaurant to get an idea of what it might be like at the date and time you intend to meet, visualizing elements such as the weather and how busy it will be. By merging Google’s advanced imagery of the world with its predictive models, it is possible to get an idea of what a place will look like tomorrow, next week or even next month. Google is expanding this system with aerial views of 250 landmarks, and in the coming months immersive visualization will arrive in five major cities, with more on the way.