Google held its I/O keynote in California this week; the annual developer conference gives us an idea of what new products, updates and other game-changers we can expect over the next few months.
This year’s event was a three-day-long rollercoaster of announcements relating to artificial intelligence and personalisation – with a few brief mentions of search thrown in as well!
Here’s a quick recap of some of the highlights:
Google has rebranded its “Google Research” division to “Google AI”, a move that clearly signals where their R&D priorities are headed.
One of the most important and impressive announcements from the conference related to the concept of “continued conversation” in AI; the Google Assistant is getting an update which allows you to ask follow-up questions to your original command.
In short, rather than having to say “Okay Google” before each request, you only have to say it the first time. It will feel more like having a real conversation with the assistant, as opposed to barking a series of commands to get all the information you need.
“Duplex” was introduced on Tuesday as Google unveiled that its AI can now make phone calls on behalf of its owner. They showed an impressive demo of the Assistant calling a hair salon to book an appointment. Not only did we see the “continued conversation” update in action, but the AI sounded incredibly human, with added “umms” and “mm-hmms” to the point where it’s not clear if the salon employee knew that they weren’t talking to a real person.
Google has clarified that they will incorporate full transparency and disclosure into the design and that the human-sounding vocal cues were simply to “make the conversation experience more comfortable”.
Your Photos and Visual Search
Google Photos already gets an AI boost from built-in editing tools and the ability to organise your pictures into collages and slideshows. Now on top of this, we’ll see some new quick-fix options like colour enhancement, automatic rotation, and brightness correction.
While this offers a number of benefits for the end user, we’d infer that this is all in the service of improving Google’s visual search capabilities. Every photo Google can analyse represents more information and more opportunities to improve its image recognition ability. How does it get more photos? By providing you with the perfect tool to edit and manage all of them, of course!
AI Personalisation has found its way to Google Maps
A new version of Maps is set to launch this summer. The core function of maps isn’t going away, but it looks like the purpose of the tool may be shifting away from your basic “help me get from A to B” to helping users explore new places and get the best out of their day/evening/holiday in the city they’re in.
Updates include a new “For You” tab, which will resemble a newsfeed of recommendations for new coffee shops, restaurants, bars, amusements and so on in cities that you’ve “followed”. You’ll be able to see which places are trending and whether anything new has recently opened in the area. Google will be using anonymised cohort analysis: it will see where people are gathering, particularly those who go out a lot, and use that to determine new trends.
We learned a little more about Google’s plans for the next phase of the Google Home assistant. To recap, smart displays are essentially the smart speakers we already know, such as the Google Home or Amazon Echo, but with a built-in screen on which you can see additional information that gives further context to the assistant’s answers, such as directions and videos.
Back in January, Google announced that it would be partnering with JBL, Lenovo, LG and Sony to embed Google Assistant into their products and develop smart display devices. Demos of these products were available at I/O, and we now know that they’ll be available to buy in July.