Google's most recent partnership with Movidius will make our smartphones smarter.
More specifically, the cameras in our smartphones could soon be equipped with machine learning technology that could assist the blind and quickly translate foreign signs.
Movidius has worked with Google before on one of the Alphabet-owned company's famed projects, Project Tango. Using a mix of cameras and sensors, Movidius' technology in Project Tango allows devices to create three-dimensional maps of indoor spaces. As a result, future smartphones could not only know where they are, but also how they're moving through space.
Though this latest collaboration between Movidius and Google hasn't been branded with a project name yet, it has the potential to let future devices also recognize what they're looking at through the camera.
To some extent, Google already offers this ability on Android devices. Google's Photos app can already recognize people and objects in photos. Search "dog", for example, and the app will pull up all the photos of dogs a user has in their Google Photos library; search for "Paris" and a user will see pictures of themselves posing in front of the Eiffel Tower.
The Photos app, however, needs to be connected to the Internet to perform these intelligent functions. That's because all of the complex computing involved happens in a distant data center, where algorithms do the grunt work of analyzing our photos and processing our requests.
Movidius' tech packs those same machine learning abilities into a small chip that can fit inside the body of a smartphone. Called vision processing units, or VPUs, these chips, the latest of which is Movidius' Myriad 2 line, will give next-generation devices autonomous abilities.
Combined with Google's already powerful machine learning infrastructure, the chip would free Android phones, for example, from the cloud for tasks like speech and image recognition, eliminating latency and cutting down on data usage. Because Movidius' chip would already be part of the device, all of this processing would happen in real time, with no more loading times to wait for.
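To make that difference concrete, here's a rough sketch of the two approaches in Python. It is purely illustrative: the latency and photo-size numbers are assumptions, and neither function calls a real Google or Movidius API.

```python
# Purely illustrative sketch of cloud vs. on-device image recognition.
# The timing and size constants are assumptions, not measured figures.
import time

NETWORK_ROUND_TRIP_S = 0.250   # assumed mobile-network latency to a data center
ON_DEVICE_INFERENCE_S = 0.020  # assumed time for a VPU-style chip to run one image
PHOTO_SIZE_BYTES = 2_000_000   # assumed size of one uploaded photo

def recognize_via_cloud(photo: bytes):
    """Simulate uploading a photo and waiting for a remote data center to label it."""
    time.sleep(NETWORK_ROUND_TRIP_S + ON_DEVICE_INFERENCE_S)  # network hop plus server-side work
    return "dog", NETWORK_ROUND_TRIP_S + ON_DEVICE_INFERENCE_S, len(photo)

def recognize_on_device(photo: bytes):
    """Simulate running the same model locally on an on-device coprocessor."""
    time.sleep(ON_DEVICE_INFERENCE_S)  # no network hop, no data leaves the phone
    return "dog", ON_DEVICE_INFERENCE_S, 0

if __name__ == "__main__":
    photo = bytes(PHOTO_SIZE_BYTES)
    for recognize in (recognize_via_cloud, recognize_on_device):
        label, latency, data_sent = recognize(photo)
        print(f"{recognize.__name__}: label={label}, latency={latency:.3f}s, data sent={data_sent} bytes")
```

The exact numbers don't matter; the point is that the cloud path pays a network round-trip and an upload for every photo, while the on-device path pays neither.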
Speech and image recognition on Android smartphones is just the beginning, too. Deeper integration into autonomous drones and vehicles could provide the kind of split-second intelligence those machines require. A driverless car, for example, can't wait for instructions from the cloud when an accident could be waiting around the corner.
Exactly when we'll see this real-time intelligence in real life, however, is unclear. "This collaboration is going to lead to a new generation of devices that Google will be launching. And they will launch in the not-too-distant future," says Movidius' CEO, Remi El-Ouazzane.