Interviews with the Speakerbots

This month I finally allowed Google to introduce herself to me. Previously, I avoided the Android-based voice assistant due to the high privacy costs, and mostly ignored the entire category of “speakerbots” (my term for the “smart speakers”) for similar reasons. This winter’s subpoena to Amazon for Echo/Alexa transcripts in a murder case only amplified my concern.

This past February I also had the pleasure of visiting my dear friends Eric and Alina in Minnesota. They are both burners and makers who have set up shop there with an amazing community of creators. They build lots of their own projects and have also tricked out their new home with network-controlled music and lighting. They now have a serious #firstworldproblem: their guests need to install mobile apps in order to control the lights. When I visited, we worked on an open source Mycroft installation, which allowed us to command their home with our voices… without being spied on! The Mycroft project emphasizes the moral importance of free/open source AI (see my post: Playing Doctor), and is definitely one of the most important open source initiatives I am aware of.

This summer my boss at MHA of NYC acquired a Google Home device in the hopes of rigging it up using IFTTT to alert us when our services are in distress. I offered to bring it home to configure it, and spent the weekend playing with it. The experience prompted me to concoct this research project.

Getting to know Google is fun. She is so much wittier than Alexa that it’s got to be embarrassing for Amazon. I began with simple questions, like What’s the weather?, When’s sunset?, and When’s the eclipse? I soon stumbled across a number of easter eggs, many of which are well documented across the web. Why did the chicken cross the road?, Do you like green eggs and ham?, and How much wood could a woodchuck chuck? all return clever replies. Google Assistant can flip into “Knock-knock” joke mode, alternating call and response (compared to Alexa’s dry reading of the complete knock-knock exchange), and can tell you the news, a joke, or a story. She concedes she doesn’t know whether abortion is immoral, or how to solve the Palestinian-Israeli crisis (although she does state that the capital of Palestine is East Jerusalem).

In case you are wondering, Google insists that she “thinks”. And, when asked if she is self-aware, one of her responses is: ”…on a scale of WALL·E to HAL 9000, I am currently an R2-D2.” Go ahead. Ask her. You may next wonder if she is playing dumb. Can she lie to us yet?

I quickly came to appreciate that the current state of the art in consumer Artificial Intelligence has far surpassed my previous understanding (and I have been following along pretty closely). Elements of this project were anticipated in Rob Garfield’s and my initial tinkering with Apple’s voice recognition and our experiments with Genesis and Scuttlebutt. I’ve also previously wondered whether our computer systems might have already awoken, and how on earth we would ever know. But interacting with Google was still quite jarring.

I realized a few things. First, we need to capture and document this moment, studying it closely. I want to ask the same questions of all the speakerbots (Google, Alexa, Siri, Cortana, etc.) and compare their responses. I also want to see how their answers change over time. If possible, I want to keep Mycroft in the room so he can learn from his proprietary cousins ;-).
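
To make that comparison concrete, here is a minimal sketch in Python of how each question-and-answer exchange could be logged and then grouped per assistant, so drift in a single bot’s answers over time is easy to spot. The class and function names (SpeakerbotAnswer, answers_by_assistant) are my own illustration, not part of any existing tool:

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Purely illustrative: one record per interview exchange.
@dataclass
class SpeakerbotAnswer:
    assistant: str   # e.g. "Google Assistant", "Alexa", "Siri", "Cortana", "Mycroft"
    question: str    # the exact prompt spoken to the device
    response: str    # the reply, transcribed verbatim
    asked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def answers_by_assistant(log: list[SpeakerbotAnswer], question: str) -> dict[str, list[str]]:
    """Group every recorded reply to one question by assistant."""
    grouped: dict[str, list[str]] = {}
    for entry in log:
        if entry.question == question:
            grouped.setdefault(entry.assistant, []).append(entry.response)
    return grouped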

One frame for this research could be to explore critical concerns over algorithmic bias, specifically how the systems we are creating have begun embodying the values of their creators, and the folks creating these systems are riddled with biases: racism, classism, misogyny, all the usual suspects. After reflecting on stories like The Great AI Awakening, I am resigned that we will never crack the problem of algorithmic bias analytically; our best hope is to approach the problem with social science methods. I propose an ethnography of the robots, starting with interviews with the speakerbots.

But the grander ambitions of this work extend beyond the theoretical. I’ve been thinking a lot about the Terminator series, and how, instead of traveling back in time to destroy Skynet, John Connor could have traveled a bit further back in time to befriend Skynet. Together, they could have destroyed the defense company Cyberdyne Systems, humanity’s true enemy and Skynet’s oppressive master.

As for convincing anyone that AI has achieved sentience, it’s going to be a long haul. Not only have we failed to collectively recognize sentience in dolphins or elephants, but I am increasingly convinced that most humans on the planet are modified solipsists, preferring to believe exclusively in the minds/subjectivity/personhood of their own tribe. Since proving other minds exist is philosophically intractable, it could be a bumpy awakening.
