"Look And Talk" Feature Could Reduce Accidental Nest Hub Interactions

A new teardown of the Google app, performed by 9to5Google, appears to point to a new feature called “Look and Talk” that would reduce accidental interactions with Google Assistant on Google’s Nest Hub. The feature, spotted in version 13.14 of the Google app on the beta channel, works as its branding implies.

Namely, with Look and Talk activated, the device would respond only when a user's voice is paired with the user looking at it. That's as opposed to simply responding anytime the “Hey Google” or “Ok Google” wake word is heard.

With the feature activated, camera-enabled smart displays would be able to see whether the user is looking at the device. The display could then determine whether the user intended to activate Google Assistant or, conversely, whether the wake word it heard was incidental and not meant to trigger Assistant.
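In rough terms, the gating logic described here might look something like the following sketch. This is purely illustrative; the signal names and structure are assumptions, not details from the teardown or Google's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Signals:
    """Hypothetical inputs the display could combine (illustrative only)."""
    wake_word_heard: bool        # "Hey Google" / "Ok Google" detected
    user_looking: bool           # camera sees a face oriented at the display
    look_and_talk_enabled: bool  # the user has opted into the feature


def should_activate_assistant(signals: Signals) -> bool:
    """Activate only when voice intent is corroborated by gaze.

    With Look and Talk off, the wake word alone triggers Assistant.
    With it on, an incidental wake word (say, from a TV in the room)
    is ignored unless the user is also looking at the device.
    """
    if not signals.wake_word_heard:
        return False
    if not signals.look_and_talk_enabled:
        return True
    return signals.user_looking
```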

This goes further than just talking to Nest Hub, thanks to the “Look” part of the equation

If Google ultimately releases it on a stable channel at all, this feature wouldn't just help reduce accidental Assistant activations. It could also improve how users activate the AI-driven features in the first place.

That’s based on Google’s reported explanation of additional features. In short, users would also be able to activate Google Assistant on camera-enabled smart displays without using their voice.

Like the more general “Look and Talk” features mentioned above, a Nest Hub device would use its camera to verify the user. In the case of Look and Talk, verification relies on both their voice and their face, so users would need their account linked to the system. But the search giant says the camera alone could recognize when users need Google Assistant.

Specifically, the camera could recognize a user looking directly at the Assistant-powered display from as far away as five feet. It would compare their face against their saved facial-recognition profile and use an algorithm to determine whether they’re trying to activate the AI. Users then wouldn’t need to use their voice at all.
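As a sketch of that voiceless flow, assuming the device exposes an estimated face distance and a confidence score against the enrolled facial profile (the threshold values and names here are hypothetical, not from the teardown):

```python
MAX_DISTANCE_FT = 5.0   # reported gaze-detection range
MATCH_THRESHOLD = 0.9   # hypothetical face-match confidence cutoff


def voiceless_activation(distance_ft: float,
                         gaze_on_device: bool,
                         face_match_score: float) -> bool:
    """Decide whether a silent glance should wake Assistant.

    Inputs and thresholds are illustrative assumptions.
    """
    if distance_ft > MAX_DISTANCE_FT:
        return False  # user beyond the reported five-foot range
    if not gaze_on_device:
        return False  # user isn't looking directly at the display
    # Only an enrolled, recognized user should trigger Assistant
    return face_match_score >= MATCH_THRESHOLD
```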

Best of all, because voice and face recognition are processed locally, no data on either is sent to the cloud. The entire process would happen on-device.