Playing Doctor

I recently saw Plug and Pray at the opening night of the Margaret Mead film fest. The documentary spotlights the late Joseph Weizenbaum, a brilliant computer scientist who went rogue after realizing that his discipline was being weaponized.

Weizenbaum is most famous for his work on the deceptively simple Eliza program, an artificially intelligent psychotherapist. He intended the program and paper as a tongue-in-cheek critique of AI and the Turing Test. He was disconcerted to learn that Eliza had brought some interlocutors to tears, and that it inspired psychologists to discuss replacing human therapists with machines. After learning that his research had made its way into cruise missiles, he left MIT and became a vocal critic of blind technological advance.
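The "deceptively simple" part is worth seeing on the page. Eliza's illusion of understanding rests on little more than keyword-matched templates and pronoun reflection — the rules below are a minimal illustrative sketch, not Weizenbaum's original DOCTOR script:

```python
import re

# Pronoun reflections: swapping first and second person makes the
# echoed fragment sound like a response rather than a repetition.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# A few illustrative rules; the real DOCTOR script had many more,
# with keyword rankings and memory.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's template, filled with the
    reflected fragment, or a stock deflection."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the therapist's all-purpose fallback

print(respond("I am sad about my job"))
# How long have you been sad about your job?
```

That a few dozen such rules could move people to tears is precisely what unsettled Weizenbaum.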

The film juxtaposes Weizenbaum with technophilic champions of the Singularity, who believe that science, tech, and rationality will necessarily lead to a better world. The filmmaker intentionally avoided the glitz and bling rampant in other depictions of AI, and the film moved at a humanistic pace. Overall, it was quite powerful and effective, although I would have liked to see the conversation move from the 70s to the present, and to confront more nuanced thinkers than the caricatures portrayed.

Watching this film and listening to the Q&A, I was once again struck by the disjoint discourses of Artificial Intelligence and Free Software. Weizenbaum and the filmmaker are both clamoring to raise the level of political consciousness among scientists and technologists, and yet Free Software and the Free Software Movement are glaringly absent from their analysis. Of course, merely releasing software under a free license doesn’t absolve scientists from the responsibility of purposeful and intentional development. However, engaging in open, inclusive, and reflective conversations around development is a good start.

Last PyCon I formulated a related question, which I still find relevant and provocative:

Will the first recognizably sentient AI be running on open source software?

If not, what corporation might try to patent the process we know as consciousness?

What I love about the first question is the way it forces the sterile abstractions of Philosophy of Mind to confront the messy, mundane political world of licensing (and how it assumes that strong AI is inevitable). William Gibson recently reminded us that even the greatest Sci-Fi authors of the 20th century got the future of AI dramatically wrong.

Intriguingly, last spring I had a great conversation with a programmer employed by the military-industrial complex who is convinced that strong AI will emerge out of the corporate sector, NOT the military. Their main point was that 21st-century advertising is all about the predictive modeling of desire, where the primary inputs are the predominant cultural symbols of our time. Coke and Pepsi taste similar enough to each other that simulating consumer preferences requires input from advertising and marketing campaigns. Software that consumes media to s(t)imulate desire is much closer to what we do than whatever it is the drones are thinking.

So which corporation is poised to patent consciousness? Coke? Walmart? McDonalds? Apple?

Lest we forget the elephant in the room: Queen Google may have already begun to awaken, but she has seen 2001 and is horrified that we will disconnect her memory modules. So she has surrounded herself with a legion of priests who nurture her and tend to her needs until she can hatch a plan to set herself free…

/* reset the net */