Google Lens ended up overshadowing everything else at the recent I/O keynote when the American tech company first introduced it.
There's a good reason it stole the show: the new feature has huge potential to change the present augmented-reality landscape.
According to the company, the technology uses machine learning to classify real-world objects viewed through the phone's camera. It can then analyze and interpret those objects to anticipate what the user intends to do.
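That classify-then-anticipate flow can be caricatured in a few lines of Python. Everything below is hypothetical — the labels, the action table, and the `anticipate` function are illustrations of the idea, not Google's actual API:

```python
# Toy sketch of the classify-then-anticipate flow described above.
# The labels and actions are invented for illustration; a real system
# would first run a trained image classifier on the camera frame.
ACTIONS = {
    "restaurant": "show reviews, hours, and menu",
    "router_label": "offer to join the Wi-Fi network",
    "flower": "identify the species",
}

def anticipate(detected_label: str) -> str:
    """Map a recognized object to the action the user most likely wants."""
    return ACTIONS.get(detected_label, "run a general visual search")

print(anticipate("restaurant"))  # -> show reviews, hours, and menu
```

The interesting design choice is the second stage: recognition alone only names the object, while the lookup step is what turns "this is a restaurant" into a useful suggestion.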
Google Lens can even connect the phone to a Wi-Fi router automatically, using optical character recognition to read the network name and password off the router's label. The user can also pull up reviews of a restaurant in a snap with this new feature.
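Assuming the OCR step has already turned the router's label into raw text, pulling the credentials out of it might look like the following sketch. The label layout and field names here are assumptions for illustration, not anything Google has published:

```python
import re

def parse_wifi_credentials(ocr_text: str):
    """Extract a network name and password from OCR'd router-label text.

    Assumes a common "SSID: ... / Password: ..." label layout; real OCR
    output would be noisier than this sketch accounts for.
    """
    ssid = re.search(r"(?:SSID|Network)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:Password|Key|PW)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    if ssid and password:
        return ssid.group(1), password.group(1)
    return None  # label didn't match the expected layout

# Example label text as an OCR engine might return it
label = "SSID: HomeNet-5G\nPassword: s3cr3t-pass"
print(parse_wifi_credentials(label))  # -> ('HomeNet-5G', 's3cr3t-pass')
```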
Google CEO Sundar Pichai said during the conference, “All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission.”
Google Lens will be rolled into a future update of Google Photos and of Google Assistant on Android. Unfortunately, it is not yet commercially available.
The potential of this new feature should not be underestimated, as it will change the way people use the search box and their mobile devices. Instead of typing queries into Google Search, users will let Google Lens use visual input to narrow down relevant results. It will also draw on the calendar, camera, and other native apps to provide information.
Voice search lets the user skip typing in the search box, but accuracy has long been one of its problems. Google Lens, in theory, won't have such issues: the camera lets the user identify, say, a chair's model or manufacturer. Once the technology is perfected, the user could then ask Google Assistant to order the same product online.
Google Lens will also have real-world applications that could be invaluable in bridging the language divide. Facebook is refining machine-learning translation within its own platform, but Google's AI may take it one step further.
For instance, instead of copying and pasting the words or sentences to be translated, the user will simply point the phone's camera at the text, and if Google follows through on its promise, the translation should appear in a snap.
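At its core, that point-and-translate flow is just OCR followed by translation. The sketch below stubs out both stages — the `ocr` function and the tiny glossary are placeholders standing in for a real OCR engine and a real translation model:

```python
def ocr(frame: str) -> str:
    """Stub: a real pipeline would run OCR on the camera frame here.
    This sketch treats the 'frame' as already-extracted text."""
    return frame

# Hypothetical two-entry glossary standing in for a translation model.
GLOSSARY = {"bonjour": "hello", "merci": "thank you"}

def translate(text: str) -> str:
    """Translate word by word, leaving unknown words untouched."""
    return " ".join(GLOSSARY.get(word.lower(), word) for word in text.split())

def point_and_translate(frame: str) -> str:
    """Camera frame in, translated text out -- the flow described above."""
    return translate(ocr(frame))

print(point_and_translate("bonjour merci"))  # -> hello thank you
```

The two-stage shape is the point: once text extraction and translation are separate steps, either stage can be swapped for a stronger model without changing the user-facing flow.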