The amazement over Google Duplex at I/O 2018 quickly veered into criticism over the lack of disclosure and questions about the on-stage demo itself. Somewhat lost in this furor, I think, is the sheer impressiveness of Duplex and what an achievement it is for technology. But what exactly is it, how does it work, and what will it be able to do for you?
What is Google Duplex?
While online booking systems are increasingly common, the majority of businesses today don’t have one in place. According to Google, 60% of smaller US businesses have no way to automate bookings and still rely on the phone.
That’s a problem for businesses, and it’s also an inconvenience for end users. With Duplex, the Google Assistant can place a call on a user’s behalf to make appointments that can still only be booked over the phone. That’s the primary use case Google is demoing, for now.
Will people use it?
Some scoff at how much time Duplex really saves, but there are definitely occasions when something like it would be truly convenient and a real “assistant” for a relatively simple interaction. One ideal scenario is having your hands full — literally or figuratively — and attention divided. Rather than stretching out a call and possibly making errors, users can just direct one command to the Assistant.
In general, voice interactions help solve this problem and are increasingly common with the rise of Google Assistant apps. In a way, Duplex is the Assistant Action for those 60% of small businesses, helping them stay level with businesses that already offer automated experiences. Automation has long been the general trend in technology; we already automate home appliances with Routines and replies with Smart Reply.
Meanwhile, one use case people on the web immediately suggested was accessibility. Duplex as Google demoed it could be used by those who have difficulty speaking on the phone, whether due to a lack of language proficiency or social anxiety. This use case falls under Google’s idea of creating a general assistive service for all.
How does it work?
Sundar Pichai noted on stage that Duplex was a culmination of the company’s various efforts over the years in deep learning, natural language understanding, speech recognition, and text-to-speech.
Users can start by telling the Assistant that they want to schedule an appointment at such-and-such place, noting the desired time, activity, and other pertinent details. The Duplex system then takes this information and places a call to the business in the background.
After disclosing itself to be a machine, in accordance with several state laws, it speaks with a natural-sounding voice to the human on the phone. It takes into account the imperfections of the other person’s replies, including common occurrences like mid-sentence corrections, complex phrases, omitted words, and reliance on context rather than explicit statements.
Meanwhile, Duplex’s voice relies on Google’s recent text-to-speech advancements, inserting speech disfluencies (“hmm” and “uh”) and artificial pauses, which are naturally expected and give the system time to process. As we noted in last week’s episode of Alphabet Scoop, a source told us that Duplex’s collection of phrases might be prewritten, with the system choosing the appropriate one to reply with.
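To make the “prewritten phrases” idea concrete, here is a minimal toy sketch of how a system could classify what the other person said and pick a canned reply padded with a disfluency. Everything here — the intents, keywords, and phrases — is invented for illustration; it is not Google’s actual implementation, which relies on deep learning rather than keyword matching.

```python
# Toy sketch: pick a prewritten phrase based on a crude guess at
# what the business said. Purely illustrative, not how Duplex works.

RESPONSES = {
    "ask_time": "Uh, 6 PM on Friday, if that works.",
    "ask_party_size": "It's for four people.",
    "confirm": "Great, thank you! See you then.",
    "fallback": "Hmm, sorry, could you repeat that?",
}

# Keyword cues that map an utterance to one of the known intents.
KEYWORDS = {
    "ask_time": ("what time", "when"),
    "ask_party_size": ("how many", "party"),
    "confirm": ("booked", "confirmed", "see you"),
}

def classify(utterance: str) -> str:
    """Guess the intent of the other speaker's words from keyword cues."""
    lowered = utterance.lower()
    for intent, cues in KEYWORDS.items():
        if any(cue in lowered for cue in cues):
            return intent
    return "fallback"

def reply(utterance: str) -> str:
    """Return the prewritten phrase for the detected intent."""
    return RESPONSES[classify(utterance)]
```

A real system would replace the keyword matcher with a learned model over speech-recognition output, but the response side — selecting from vetted, natural-sounding phrases — matches what our source described.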
Why is it controversial?
There are several issues that make Google Duplex controversial. What makes Duplex so natural — namely its use of disfluencies like “uh” — is one thing that bothers people. At issue is how, without proper disclosure, it can be seen as tricking the human on the line. Not to mention that some laws require both parties to be aware that a phone call is being recorded.
Besides people’s natural inclination to dislike deception, the assumption that people are talking to a human raises the level of expectation for advanced speech and responses.
When calling a bank or customer support today, many people first encounter an automated system that asks them to describe the problem before directing them to the appropriate department. When interacting with these less-than-perfect systems, people immediately try to game them by speaking as simply as possible.
If Duplex is incapable of fully understanding advanced responses but sets the expectation of complexity through its natural voice, the human’s extra effort on the line might be wasted. This would make Duplex calls an inconvenience and a hassle, in contrast to the natural experience Google wants to provide both parties.
Pichai ended this announcement by reiterating that Google is working hard to get “the expectation right” and that “done correctly it will save time for people and generate a lot of value for businesses.”
Meanwhile, the other, more minor controversy is criticism by some that Google’s I/O demo was less than forthcoming. One critique was how the two businesses did not identify themselves as is typical in a phone call, with Bloomberg later reporting that Google did edit the calls to protect the privacy of the stores.
What is the Turing test?
Created in 1950, the Turing test is a benchmark of whether a machine can “imitate” or pass as a human. Named after Alan Turing, the father of artificial intelligence, the test has an evaluator converse with both a human and a computer while trying to determine which one is the machine. If the evaluator cannot tell, the machine is considered to have “passed.”
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. — Alan Turing
Without disclosure, could a busy restaurant employee actually distinguish between the dozens of mundane calls they might get over the course of a working day? Meanwhile, if they weren’t aware of what the Google Assistant is at the start, would they just assume it was a human assistant on the other end?
Does Google Duplex pass it?
It really takes a close listen to notice where the system’s responses are less than human. It most likely does pass, but with the key qualifier that Duplex only passes in the very specific task of making appointments. That assessment comes from Alphabet chairperson and distinguished computer scientist John Hennessy, speaking at an I/O 2018 talk about the “Future of Computing.”
In the domain of making appointments, it passes the Turing test in that domain, which is an extraordinary breakthrough. It doesn’t pass it in the general terms, but it passes it in a limited domain and that’s really an indication of what’s coming.
The Turing test was more intended to rate an artificial general intelligence — or a machine that can perform any task done by a human. Something like Duplex, in an ideal fully working state, can only tackle one common task that is performed daily.
Google has been very clear that its system has a limited scope, “Duplex can only carry out natural conversations after being deeply trained in such domains.” It further emphasizes that by explicitly noting how Duplex “cannot carry out general conversations.”
Despite this, given the pace of recent advancements, there’s nothing to say what Duplex will be able to do down the road. Its application and training could one day be expanded to a whole host of other fields.
When can I use Google Duplex?
Google will begin testing Duplex this summer for making restaurant reservations and scheduling hair salon appointments. It’s unclear whether that means it will be available to a limited set of regular users or tested internally within the company to fine-tune the system’s responses.
However, in the coming weeks Google will be experimenting with a variant of the technology to confirm business holiday hours. The idea is to have Google place just one call to a small business and update the hours on its Search Knowledge Graph card for everyone.
Originally Published on 9TO5Google