Google continues to bet big on machine learning with a new AI-driven product called Google Lens.
The vision-based system will essentially allow your smartphone to understand what is going on in an image. Point the camera at a flower, for example, and Google Lens will be able to tell you its exact name.
If you take a photo of a restaurant, Google Lens won’t offer redundant information like “it’s a restaurant”; instead, it will pull up the name of the establishment and display its ratings, reviews, and opening hours.
But wait, it gets even better: users can point Google Lens at a router’s sticker and have the phone automatically connect to the network, with no additional steps required. See a poster of your favorite band around town? Just open the Assistant, tap the Lens icon, and buy tickets to the show right there on the spot.
The new tool will roll out to Google Photos and the Google Assistant first, but will eventually make its way to all Google products. It is most likely to arrive baked into Google’s camera app with the next-generation Pixel phones.
In Google Photos, activating Lens will allow users to see more details about what’s shown in the images.
Google’s new product sounds very similar to what Bixby can do on the Samsung Galaxy S8. Samsung’s virtual assistant uses the phone’s camera to take a photo and then offers suggestions of related products on Pinterest.
Speaking of which, Pinterest has a feature also called Lens, designed to help users recognize objects in real time. At this point, though, Google Lens seems to be the more advanced of the two.
Google Lens builds on Google’s previous work on tools such as Word Lens, which let users hold up the camera to a foreign sign and get a translation, and Google Goggles, a service that delivered additional information about paintings, landmarks, and barcodes. Expect Google Lens to launch later this year.