
How to use Google’s A.I. ‘multisearch,’ which uses text and images

‘Multisearch’, which is powered by Google’s artificial intelligence, can now be accessed by Google users around the world.

Google said on Thursday that its improved “multisearch” capability is now available to consumers globally on mobile devices, anywhere Google Lens is available.

The search tool, which lets users search with both text and images at the same time, was first released in April as an effort to update Google Search to better take advantage of smartphone capabilities.

A version of this, “multisearch near me,” which focuses on local businesses, will roll out globally in the coming months, along with multisearch for the web and a new Lens feature for Android users.

As Google has previously explained, multisearch is powered by an A.I. technology known as the Multitask Unified Model, or MUM, which can analyze information in a range of formats, including text, images, and video, and then draw insights and connections between topics, concepts, and ideas.
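To give a rough sense of the idea, the sketch below shows how a multimodal query can be thought of: an image and a piece of text are each turned into embedding vectors, combined, and matched against a catalogue of items. It is a toy illustration only, with made-up numbers and item names, and is not a description of how MUM or Google Search is actually implemented.

```python
# Toy illustration of multimodal search: NOT Google's MUM, just the general
# idea of combining an image signal and a text signal into one query.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings; in a real system these would come from trained image
# and text encoders. All numbers below are made up.
image_embedding = np.array([0.9, 0.1, 0.3])   # e.g. a photo of a floral shirt
text_embedding = np.array([0.2, 0.8, 0.1])    # e.g. the added word "skirt"

# One simple way to combine the two signals is to average them.
query = (image_embedding + text_embedding) / 2

# Hypothetical catalogue items with their own made-up embeddings.
catalogue = {
    "floral skirt": np.array([0.55, 0.45, 0.2]),
    "floral shirt": np.array([0.9, 0.15, 0.3]),
    "plain socks": np.array([0.1, 0.2, 0.9]),
}

# Rank the catalogue by similarity to the combined image-plus-text query.
ranked = sorted(catalogue.items(),
                key=lambda item: cosine_similarity(query, item[1]),
                reverse=True)

for name, vector in ranked:
    print(name, round(cosine_similarity(query, vector), 2))
```

In this toy example, combining the photo of a shirt with the word “skirt” pushes the skirt to the top of the ranking, mirroring the shirt-to-skirt scenario described below.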

Google built MUM into its Google Lens visual search feature, allowing users to add text to a visual search query.

“We redefined what we mean to search by introducing Lens. We’ve since brought Lens directly to the search bar and we continue to bring new capabilities like shopping and step-by-step homework help,” Prabhakar Raghavan, Google’s SVP in charge of Search, Assistant, Geo, Ads, Commerce, and Payments products, said at a press event in Paris.

For example, a user may use Google Search to find a photo of a shirt they like, then ask Lens where they can find the same pattern on a different sort of clothing, such as a skirt or socks.

They may also aim their phone at a broken part of their car and add “how to fix” to the search. This combination of words and images helps Google process and understand search queries that it could not previously handle, or that would have been more difficult to enter using text alone.

The feature is most useful when searching for an item you like in a different colour or style. You can also photograph a piece of furniture, such as a dining set, to find matching products, such as a coffee table.

 

How does Google Multisearch work?

1. Tap ‘Discover’ (Android) or ‘Home’ (iPhone) at the bottom left of the Google app.

2. Tap the camera icon next to the Google search bar to open the camera.

3. Photograph the object you want to search for. If the image is already saved in your phone’s gallery, tap the icon next to the shutter button to upload it instead.

4. Swipe up on the results, then tap the ‘Add to your search’ button at the top of the image results to add text to your query.

5. Be specific: add details such as colour, brand, and so on before searching again.

6. Finally, the results, which include both text and images, will be displayed on your screen.

According to Google, consumers can filter and refine their results in multisearch by brand, colour, and aesthetic aspects.

Last October, the service was made available to users in the United States, and it was later expanded to India in December.

Google states that as of Thursday, multisearch is available to all users worldwide on mobile, in all languages and regions where Lens is available.

Google also said on Thursday that the “multisearch near me” variant will be expanded.

 

 
