Google’s Assistant is now on a billion devices. The company sees it as a kind of connective tissue in its growing ecosystem, spanning smartphones, the web and Google Home/Nest smart speakers and displays.
This morning, as expected, Google made several Assistant-related announcements. The most significant of them was relatively technical: the migration of speech processing from the cloud to the device itself.
Not exactly ‘search personalization.’ The Assistant announcements included “Duplex on the Web,” enabling quasi-automated rental car booking and movie ticket-buying online. There was also considerable discussion of Assistant-related personalization features. This is somewhat ironic, given that many SEOs have long believed Google personalizes search results, even though the company has repeatedly said it does not.
Accordingly, Google introduced “picks for you”: suggested recipes, podcasts and events (to start) based on users’ personal data and preferences. There was also discussion of a kind of individualized knowledge graph that enables the Assistant to understand subjective references (“Personal References”) and the relationships between things in the context of user history. This may not be “personalized search” per se, but it may amount to something very similar in practice. The Assistant will also factor context (like time of day) into its suggestions and recommendations.
Waze and Driving Mode. Google also announced that the Assistant is coming to Waze soon and that there will be a new “Driving Mode” for in-car use. It features a new UI with “a voice-forward dashboard that brings your most relevant activities—like navigation, messaging, calling and media—front and center.” Driving Mode also offers personalized insights and suggestions based on users’ Gmail and Calendar entries. It knows which restaurant you need to drive to, for example, based on your calendar.
Driving Mode launches automatically when the device connects to the car’s Bluetooth. Users can also invoke it by saying some version of “OK Google, let’s drive.”
The most significant Assistant news. By far the biggest announcement, however, was the movement of speech processing from the network onto the handset itself. Google said it has shrunk the speech models involved from 100GB to “less than half a gigabyte.” The practical effect is that most speech processing can now take place on the smartphone, making the Assistant and its associated functions (opening apps, dictating messages) much, much faster. It can also work without a network connection.
According to Google, this new version of the Assistant, running locally on the device, will “deliver the answers up to 10 times faster.” It will be available on Pixel phones at some point later this year.
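Google hasn’t published the internals of its on-device Assistant stack, but Android’s public SpeechRecognizer API illustrates the same trade-off. Below is a minimal Kotlin sketch, assuming a device with an offline recognition model installed: the EXTRA_PREFER_OFFLINE flag (available since API 23) asks the platform to recognize speech locally rather than streaming audio to a server, which is the network round trip the new Assistant eliminates.

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// A minimal sketch, not Google's internal Assistant pipeline: Android's
// public SpeechRecognizer can be asked to prefer an on-device model,
// the same latency trade the new Assistant makes at much larger scale.
fun startLocalRecognition(recognizer: SpeechRecognizer) {
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        // Prefer a locally installed model (API 23+), skipping the
        // network round trip that cloud recognition requires.
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
    }
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            // Best transcription hypothesis comes back first in the list.
            val texts = results.getStringArrayList(
                SpeechRecognizer.RESULTS_RECOGNITION
            )
            println("Heard: ${texts?.firstOrNull()}")
        }
        // Remaining callbacks left empty for brevity.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    recognizer.startListening(intent)
}
```

The 100GB-to-half-a-gigabyte reduction matters precisely because a flag like this only helps when a usable model actually fits on the phone.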
Why we should care. The performance and speed improvements discussed above are likely to widen the perceived performance gap between Google Assistant and its competitors (Siri, Alexa, Cortana, Bixby). With most speech processing happening on the device, users will see lower latency and better outcomes. This will reinforce and probably increase Assistant usage, and it may motivate some users to opt for Android phones over the iPhone, especially with the new, cheaper Pixel 3a devices ($399).
Google is starting to monetize Assistant results, an indication that it’s seeing growing usage and preparing for a time when the Assistant may overtake conventional search on mobile devices.