Ever wanted to find alternative sources for a picture, or learn what it depicts? Google Lens (or Google Image Search) could help you with visual results, but offered little in the way of complex, related information – until now.
Google’s AI system – the Multitask Unified Model, also known as MUM – was announced in May and is now rolling out to devices.
The rollout means users can take their investigations further instead of stopping once the objects in a picture have been identified.
For example, before this update, users could take a picture of their bike and, at best, identify the part in the frame.
Now they can go a step further and ask questions about the bike part the software detects. If that part is damaged, for instance, Google can offer tutorials on how to fix it or suggest where to get it replaced.
The questions feature is as simple as it sounds: at a tap, a text box pops up, and it’s up to you to type your query.
This was beyond our imagination five years ago, which shows just how far AI has advanced – and in how many ways machine learning can make our lives easier.