Are Google’s Chatbots Getting Too Human?
Last week Google CEO Sundar Pichai demoed a new AI chatbot platform called Duplex, which can mimic human conversation so well that you wouldn't know you're speaking with a machine. The new system is a technological breakthrough to be sure, but tech ethicists are raising the question of whether, just because Google can do something like this, it necessarily should.
Google's Duplex demo included outbound, telemarketing-style calls in which the unsuspecting recipients found themselves speaking with a chatbot they thought was a person. Google's system even came with the requisite "ums" and "ahhs," so participants were genuinely convinced it was a human interaction. The reaction to the demo wasn't entirely positive. Tech critic Zeynep Tufekci called it "horrifying," and cited the initially enthusiastic audience reaction at I/O as evidence that "Silicon Valley is ethically lost, rudderless and has not learned a thing." As a result of the backlash, Google has already revamped Duplex to include a built-in disclosure, which I'm guessing is some prerecorded indication to let people know they're speaking with a nearly-human computer.
Beyond the immediate ethical question of how Google should use Duplex lies the realization that we'll all eventually be communicating with robots we believe are human. In my book, that makes it a question of when, not if, we all officially get creeped out by technology.