Google Lens Uses Machine Learning to Recognize Objects
Google today announced Google Lens, an image-recognition tool that relies on mobile cameras to perform searches. The tool is a significant advance over the old Google Image Search app. Google says its neural networks now recognize objects more accurately than humans do. Using Google Lens, people can aim their camera at just about anything and Google will instantly perform a search and suggest results. For example, users can point their camera at a restaurant and immediately see the Google Search results for that restaurant, including reviews, hours, and location details. It can also recognize objects such as flowers, and much more. Google didn't say when Google Lens will be available.
Google Lens Reaches Pixel Handsets
Nov 20, 2017
Google has updated Google Assistant on its Pixel smartphones to add the Google Lens tool it first announced back in May. Google Lens is an image-recognition function that relies on mobile cameras to perform searches.
Google Lens Rolling Out to All Android Users in Google Photos
Mar 6, 2018
Google today said all Android users can try Google Lens via the latest version of Google Photos for Android. Google Lens is an image-recognition function that relies on mobile cameras to perform searches.
iPhone Owners Gain Access to Google Lens in Google Photos
Mar 16, 2018
Google today said iOS users can try a preview of Google Lens in the latest version of Google Photos (v3.15). Google Lens is an image-recognition function that relies on mobile cameras to perform searches.
Google Extends Neural Machine Translation to More Tongues
Mar 7, 2017
Google has updated its Google Translate service to generate higher-quality translations for a handful of new languages. Specifically, Google has applied its neural machine translation technique to Hindi, Russian, and Vietnamese.