Alexa, Google Assistant and Siri aren’t the same smart-home voice assistants they were at launch, or even a few months ago. All three AIs undergo regular updates that set them apart from one another — and keep the competition interesting for those of us following along at home (and in my case, at work, too).
Google I/O, the tech giant’s annual developer conference, took place in May and brought plenty of Assistant news. Apple’s own yearly conference, WWDC, just happened too. And Amazon’s re:MARS conference is currently underway. That makes this a particularly busy time for smart home announcements, particularly those related to Alexa, Google Assistant and Siri.
Let’s explore the most recent changes to get a better sense of each assistant’s strengths, as well as where they need the most work — and what we hope to see in the future.
Amazon has been a leader in the smart home space ever since it introduced the original Echo back in 2014. Now it has an entire lineup of Echo devices, powered by Amazon’s AI assistant, Alexa. Say “Alexa” to wake your speaker and then start talking. Alexa can help you with directions, order food and more.
Alexa is already a decently strong conversationalist, but it’s dependent on its wake word (Alexa) to initiate nearly every new line of conversation. For example, if I say, “Alexa, what’s the current temperature at the hallway thermostat?” I’d then have to say “Alexa” again before requesting that the voice assistant “set the hallway thermostat to 68 degrees.”
Of course, it would be much more natural to say, “Alexa, what’s the current temperature at the hallway thermostat?” and then simply “Set my hallway thermostat to 68 degrees,” without requiring the wake word again.
Fortunately, Amazon recently introduced two new things that could make Alexa’s natural-language smarts even smarter.
First, Alexa will soon be able to handle follow-up requests without you having to repeat “Alexa.” The feature is expected to roll out to US customers later in 2019 and will be specific to planning an evening out on the town. My colleague Ben Fox Rubin saw it on display at the Amazon re:MARS conference. In the video demo, someone asked Alexa about local movie times, bought tickets, locked down a restaurant reservation and scheduled an Uber, all without having to say “Alexa” multiple times.
Amazon also has an award competition called the Alexa Prize, encouraging colleges to design social robots that help develop Alexa’s natural-language capabilities. It’s currently in its third year; the University of California, Davis won first place last year, which included a check for $500,000 for a social robot that “achieved an average conversation duration of 9 minutes and 59 seconds,” according to Amazon’s blog post announcing the 2018 winner.
The University of Washington, Seattle won in 2017; their social robot conversed for an average of 10 minutes and 22 seconds. By developing bots that can engage in longer conversations, the hope is that these student groups will help Amazon (and Alexa) find the best ways to maintain longer conversations of their own.
Google Assistant is the brains behind Google’s smart speakers and displays. Unlike Alexa, which is both the name of the voice assistant and the default wake word for Alexa-enabled speakers, Google Assistant devices respond to the phrases “OK, Google” and “Hey, Google.”
Like Alexa (and Siri, as you’ll see below), you can ask Google Assistant about the weather or the traffic, tell it to adjust a smart LED for you — and much more.
Google Assistant is also fairly strong in terms of natural-language conversations. I particularly like how you can have it walk you through a recipe. The voice assistant is patient as you go through the steps: You can ask it to go back to a previous step, repeat the current step and even ask for the next ingredient, how much of it you need and what the conversions are, if any.
And at I/O 2019, Google’s annual developer conference, the company announced a feature where you can simply say “stop” to turn off an alarm, without having to remember to say, “Hey, Google, stop” in your just-awakened grogginess.
The tech giant also introduced something called Duplex on the Web at I/O 2019, a follow-up to the original Duplex, which put a human-sounding voice AI on the other end of the phone line to assist you with booking appointments, reservations and more. Because it sounded so real, you wouldn’t necessarily know you weren’t talking to a person.
Duplex on the Web is text- rather than voice-based. Ask Google Assistant to make a dinner reservation for you and it will use the information it has about you to fill in the details on the website. The idea is sound in theory: Let technology handle your car rental bookings and other basic information-input tasks for you. But I wonder how well it actually works. Regardless, I’m certainly more comfortable with the concept of a text-based Duplex than with the voice-based AI.
Apple has a smaller smart home presence than Amazon and Google, but it’s still a major competitor. There’s one Apple-branded smart home device that isn’t an iPhone, an iPod or a Mac: the HomePod, which you can control with Apple’s Siri voice assistant.
Through Siri voice commands (and via the Home app in iOS), you can control smart home devices that are compatible with Apple’s HomeKit software. Like Alexa and Google Assistant, you can say, “Hey Siri, set my hallway thermostat to 68 degrees” or ask general questions.
While Siri tends to interface well with the third-party smart home devices that HomeKit supports, she typically falls behind when it comes to answering general questions and understanding natural-language queries.
Fortunately, the HomePod got a handful of updates at WWDC that might help improve things, including the ability to recognize multiple voices, transfer audio from your iPhone to the HomePod and play live radio from iHeartRadio, TuneIn and Radio.com.
Alexa and Google Assistant already have multi-user voice recognition, meaning they can distinguish between my voice and my co-workers’ voices. So Siri’s ability to tell who’s talking is overdue, but it’s welcome nonetheless. It means Siri should be able to give you reminders, music recommendations and other things that are customized just for you.
Apple also announced Neural Text to Speech at WWDC, which should make Siri sound less like an AI and more like a person. We’ll see how well it actually works when the update comes out later this year.
The current state of voice assistants
Amazon’s effort to enable multiple requests without requiring the wake word each time is a huge step forward, even if it’s currently limited to planning a night out. Google’s “stop” alarm feature shows a similar trend: reducing reliance on wake words and letting the conversation flow more naturally.
And while we didn’t hear much about Duplex’s voice-based software at this year’s I/O, Duplex on the Web could streamline a ton of tedious web chores that I’d just as soon skip.
Even Apple is stepping up with its new voice recognition feature and Neural Text to Speech software, designed to make Siri sound more human.
There’s a definite theme here, and it’s all tied to streamlining conversations and making them sound more like natural dialogue. I’m all for losing some of those repeated wake words, but I’m creeped out by the idea of AI sounding so human that we won’t be able to distinguish between voice assistants and people.
It’s an interesting time for smart home voice control, with three major players competing for supremacy. A 2018 study said Google Assistant led the pack even though Echo devices sold better, but that Alexa was catching up. We may be moving into a more incremental phase of improving voice assistants, rather than the mad dash of announcements we saw in the first few years of the technology. It’s these gradual changes, though, that will ultimately make these assistants more useful.