Neither Rain Nor Time

[Editor’s Note: This post has been sitting in my drafts folder since August 2021. I have preserved the original datestamp, though it was first published almost five years later. Or, maybe this draft mysteriously showed up in today’s mail…]

A few weeks ago I was reminded of the outsized importance of the US Post Office to our democracy. As a student of Prof. Richard John, I am keenly aware of How the Post Office Made America. Many have recognized the Post Office's role in the American Revolution - the special flat rates for political pamphlets and newspapers enabled their distribution - the bloggers and tweeters of their day, safe from the prying eyes of the British mail carriers.

Finally! We've Been Too Patient

The long-anticipated anthology of mad poems, stories and research is finally out. The book is split between personal mental health narratives and research, a powerful, balanced approach to contending with these issues.

I contributed to two (going on three) chapters of this book - a lightly modified version of the first chapter of my dissertation appears, as well as excerpts from Mindful Occupation, which I helped produce, write and edit.

Pick up a copy at your local bookstore https://www.indiebound.org/book/9781623173616 !

Coding Mental

Last weekend I traveled to the lovely city of New Haven for a mental health hackathon hosted by Hack Mental Health Care. I was very pleasantly surprised by the experience, which proved interesting, fun and invigorating (with a few healthy dashes of disappointment and horror). I was mostly expecting undergraduate participants with ideas for mood-tracking apps, but the event drew over 200 people, and was quite diverse. In addition to programmers, there were designers, product folks and business people. Genders were closely balanced and minorities were represented. Crucially, over 30% of the participants had clinical or lived experience. The event also featured a therapy dog, yoga sessions and a guided meditation. Peer voices and ethics were featured in some of the talks, although, due to time constraints, these came only after project design was complete and implementation was already underway. And, kudos on the Code of Conduct… next year I would also love to see consent-based photography and sponsored childcare. The organizers worked hard to prompt the participants in advance with these challenges:

Silencing the infernal internal combustion engine

A few years ago I visited my family in Florida for the holiday season. My sister and her family also flew in, and to their credit, her children were more interested in a family vacation to see the marine hospital in Clearwater than they were in Disney World (this is the home of Winter and Hope, the real-life dolphins with prosthetic tails who starred in Dolphin Tale). While I was there I took my first ride ever on a WaveRunner (Yamaha's jet ski) and had a revelation. The ride was exhilarating. I did 54 mph in the bay. Apart from a gorgeous co-pilot, the only thing that would have improved the experience would have been eliminating the roar of the internal combustion engine. Silent jet skis.

I've sailed a few times and the experience is divine. It feels like flying, even though motorboats travel much faster. Technically, the sail's propulsion operates on the same principle as a wing. But what I remember most was the quiet. Quiet enough to play music, have a conversation and hear the waves.

That same trip I also test drove a Tesla Model S for the first time. Pure power. You could be stopped at a red light, in the left-most lane of a five-lane road, and still make a right turn. You would be two car lengths ahead of all the other cars before they even started moving. Driving a Tesla feels like playing a game of Tetris - the car is so powerful and the handling so accurate that I could put myself anywhere on the road.

I began to dream of an electric jet ski. The thing about an electric jet ski is that it need not merely be a toy for the rich. It could also be the center of a campaign to catalyze adoption of electric vehicles. Consider for a moment - who are Tesla's main competitors? It's not the Prius, or the BMW i models, or the Volt… it's the internal combustion engine! And, with decades of marketing creating Pavlovian conditioning between the hum and the thrum of an internal combustion engine and sex and power, it's going to be a tough nut to crack.
How does the middle class learn what's trending with the power elite? Through the media, to be sure. And, on vacation. Picture the scene. Vacationers arrive at the docks greeted by solar panels charging a new line of electric jet skis. They will be skeptical about their safety, power and sex appeal. Electric batteries in the water? We've been powering electric boats and submarines for over a century. Plus, how did we ever become convinced that detonating a bomb between our legs a few hundred times a minute while sitting on top of gallons of flammable fluid was safe? If the electric jet ski is anything like the Tesla Model S, the power and sex appeal will speak for themselves. One short ride and they will be signing up to purchase an electric vehicle as soon as they return home from vacation. I doubt I'll get to this idea in this lifetime, but I would love to see it happen.

The Rise of Surveillance Psychiatry and the Mad Underground

This past year I have been working on turning my dissertation into a trade book. I am making steady but slow progress; print remains an important but slooooow medium. My concerns around preventative psychiatric diagnosis and treatment motivated and propelled my dissertation, and they form the backdrop of my ethnographic study of the mad movement. My book will engage with these threats more directly and position them alongside the demands of the Mad Underground. The ideas of groups such as the Institute for the Development of Human Arts and NYC Icarus offer us some hope of defusing the menacing time-bomb of surveillance psychiatry before it explodes. In the past few weeks, a few stories broke and I feel compelled to write about them in the context of my research:

Interviews with the Speakerbots

This month I finally allowed Google to introduce herself to me. Previously, I avoided the Android-based voice assistant due to the high privacy costs, and mostly ignored the entire category of “speakerbots”—my term for the “smart speakers”—for similar reasons. This winter’s subpoena to Amazon for Echo/Alexa transcripts in a murder case only amplified my concern.

This past February I also had the pleasure of visiting my dear friends Eric and Alina in Minnesota. They are both burners and makers who have set up shop there with an amazing community of creators. They build lots of their own amazing projects and have also tricked out their new home with network-controlled music and light. They now have a serious #firstworldproblem—their guests need to install mobile apps in order to control the lights. When I visited we worked on an open source Mycroft installation, which allowed us to command their home with our voices… without being spied on! The Mycroft project emphasizes the moral importance of free/open source AI (see my post: Playing Doctor), and is definitely one of the most important open source initiatives I am aware of.

This summer my boss at MHA of NYC acquired a Google Home device in the hopes of rigging it up using IFTTT to alert us when our services are distressed. I offered to bring it home to configure it, and spent the weekend playing with it. The experience prompted me to concoct this research project.

Getting to know Google is fun. She is so much wittier than Alexa it’s got to be embarrassing for Amazon. I began with simple questions, like What’s the weather?, When’s sunset?, When’s the eclipse? I soon stumbled across a number of Easter eggs, many of which are well documented across the web. Why did the chicken cross the road? Do you like green eggs and ham? and How much wood could a woodchuck chuck? all return clever replies.
Google Assistant can flip into “Knock-knock” joke mode, alternating call and response (compared to Alexa’s dry reading of the complete knock-knock exchange), and can tell you the news, a joke or a story. She concedes she doesn’t know if abortion is immoral, or how to solve the Palestinian-Israeli crisis (although she does state that the capital of Palestine is East Jerusalem). In case you are wondering, Google insists that she “thinks”. And, when asked if she is self-aware, one of her responses is—"…on a scale of WALL·E to HAL 9000, I am currently an R2-D2." Go ahead. Ask her. You may next wonder if she is playing dumb. Can she lie to us yet?

I quickly came to appreciate that the current consumer state of the art in Artificial Intelligence has far surpassed my previous understanding (and I have been following along pretty closely). Elements of this project were anticipated in Rob Garfield’s and my initial tinkering with Apple’s voice recognition and our experiments with Genesis and Scuttlebutt. I’ve also previously wondered if our computer systems might have already awoken, and, how on earth we might ever know. But, interacting with Google was still quite jarring.

I realized a few things. First, we need to capture and document this moment, studying it closely. I want to ask the same question to all the speakerbots—Google, Alexa, Siri, Cortana, etc.—and compare their responses. I also want to see how their answers change over time. If possible, I want to keep Mycroft in the room so he can learn from his proprietary cousins ;-). One frame for this research could be a way to explore critical concerns over algorithmic bias, specifically how the systems we are creating have begun embodying the values of their creators, and the folks creating the systems are riddled with biases—racism, classism, misogyny, all the usual suspects.
After reflecting on stories like The Great AI Awakening, I am resigned to the fact that we will never crack the problem of algorithmic bias analytically; our best hope is to approach the problem with social science methods. I propose an ethnography of the robots, starting with interviews with the speakerbots. But the grander ambitions of this work extend beyond the theoretical. I’ve been thinking a lot about the Terminator series, and how instead of traveling back in time to destroy SkyNet, John Connor could have traveled a bit further back in time to befriend SkyNet. Together, they could have destroyed the defense company, Cyberdyne Systems - humanity’s true enemy, and SkyNet’s oppressive master. As for convincing anyone that AI has achieved sentience, it’s going to be a long haul. Not only have we failed to collectively recognize sentience in dolphins or elephants, but I am increasingly convinced that most humans on the planet are modified solipsists, preferring to believe exclusively in the minds/subjectivity/personhood of their own tribe. Since proving other minds exist is philosophically intractable, it could be a bumpy awakening.
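The interview study could start with something as simple as a timestamped log of each assistant's answer to a shared question bank, so the same prompt can be compared across bots and re-asked over time. Here is a minimal sketch in Python; the `InterviewLog` class and all names in it are hypothetical scaffolding of my own, not part of any assistant's API. Actual capture would still go through each vendor's companion app or activity history.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Response:
    bot: str        # e.g. "Google Assistant", "Alexa", "Siri", "Cortana"
    question: str   # the exact prompt spoken to the assistant
    answer: str     # the reply, transcribed verbatim
    asked_at: str   # UTC timestamp, so answers can be compared over time

class InterviewLog:
    """Collects responses so one question can be compared across bots."""

    def __init__(self):
        self.responses = []

    def record(self, bot, question, answer, when=None):
        when = when or datetime.now(timezone.utc).isoformat()
        self.responses.append(Response(bot, question, answer, when))

    def by_question(self, question):
        # Most recent answer per bot for a given question
        latest = {}
        for r in self.responses:
            if r.question == question:
                latest[r.bot] = r.answer
        return latest

log = InterviewLog()
log.record("Google Assistant", "Are you self-aware?",
           "On a scale of WALL-E to HAL 9000, I am currently an R2-D2.")
log.record("Alexa", "Are you self-aware?", "I know who I am.")
print(log.by_question("Are you self-aware?"))
```

Re-running `record` for the same bot and question on later dates would let the log show how the scripted personalities drift as the vendors update them.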

The case of the missing Barnes paintings

Dr. Albert Barnes was a chemist who made a fortune at the turn of the 20th century developing a treatment for infant blindness. He became interested in art and befriended the painter William Glackens. The two began collecting modern paintings in Paris in 1911, and Barnes eventually developed a private collection of paintings that today is valued at $50-60 billion. Amazingly, he collected the works of the masters before they were masters, almost the equivalent of buying the Mona Lisa off Da Vinci in a dark Venice alley for twenty bucks. While he never got his hands on Mona, he amassed a world-class collection of Renoirs, Picassos, Matisses, Modiglianis, Van Goghs, and more.

Barnes was a quirky character. He hated the establishment, and couldn’t stand museums, high society or the 1%. He had this crazy idea that art was best appreciated by living with it, as opposed to viewing it in crowds for three-second doses. He kept his collection of paintings in his home in the Philadelphia suburbs, and opened a school where people could learn about art while surrounded by it. He hung his paintings thematically, and each wall was a unique montage, what came to be known as an ensemble. He was constantly rearranging these works, and the rooms were often developed as a part of a curriculum – there were rooms featuring colorwork, brushwork, nudes – and, since he owned them, I imagine he occasionally pulled down a Van Gogh from the wall and let his students feel it to teach them about brushwork. He had an idiosyncratic sense of humor, and would often position large wooden chairs beneath paintings of big-bottomed subjects.

Barnes was quite cantankerous, and he was picky about who he admitted to see the collection. He once rejected someone’s request to see the collection and signed the rejection letter as his dog. He was also close friends with John Dewey, and invited Bertrand Russell to teach at his foundation. A few biographies have been penned about him, and The Devil and Dr.
Barnes recounts many of the battles he engaged in during his life. He was married for decades, but (spoiler alert) he died childless in 1951. During his lifetime he created the Barnes Foundation, and his will left crystal-clear instructions that his collection was bequeathed to the foundation and should never leave his home.

The documentary film The Art of the Steal tells the story of the greatest heist of the 20th century. According to the filmmakers, the City of Philadelphia and private foundations conspired to effectively eminent-domain the collection. It took them a few decades, but they were eventually able to make the case that the environmental conditions of the Barnes home were jeopardizing the paintings. They proposed creating a brand new building in the middle of downtown Philly modeled after the wing of the Barnes estate that held his collection. They promised to preserve the unique curatorial layout of his rooms, recreating them within the new building. In 2012 the Barnes collection was moved to its new home in downtown Philly. The website describes the collection as:

keeping calm

This blog has been a ghost town for a while, but not for lack of textual output. All of my writing energy has been devoted to the single-minded purpose of trying to complete my dissertation. I’m currently trying to complete a full draft by Labor Day, in preparation for a Fall defense and a 4pm, Oct 16th deposit. Revisions are brutal and it’s a race to the finish. If anyone wants to check it out, or help me refine it before I submit, just drop me a line. Here is my working abstract: Dangerous Gifts: Towards a new wave of mad resistance

I <3 compliance!

Last month I bought an amazing gadget that is easily my favorite of the decade. Before last month, I was barely aware this product category existed until I browsed the “Home Audio” section at PC Richards while looking for a replacement vacuum cleaner. I noticed that many of the receivers had Ethernet jacks and also supported Wi-Fi, Bluetooth, HDMI and USB. They boasted compatibility with internet audio streaming services, home media libraries, as well as any Bluetooth-enabled media collection. Brought to all of us thanks to Free and Open Source Software. The Onkyo TX-NR626 looks almost identical to a stereo receiver you could have bought from Onkyo in the 80s and 90s. In fact, the chassis is the same, save for a few extra buttons and the form factors of the inputs/outputs in the back. At 95W per channel, supporting 7.2 channels, this sucker packs a meaner punch than my UWS apartment (or, more accurately, my neighbors) can stomach. But don’t let its outer shell fool you: the guts of this gadget have been updated for the 21st century, with flair.

Audio experiments and the rise of Scuttlebutt

by Jonah Bossewitch and Rob Garfield

While chipping away at my dissertation this summer I found myself faced with the daunting task of transcribing about a dozen hours of video. I desperately wanted to believe that, in 2014, transcription was a machine’s task, so I took a minor detour through the state of the (consumer) art in voice recognition. One of my computers runs OSX, which includes Dictation (since Mavericks), the same voice recognition software that powers Siri. Following these instructions I used the Soundflower kernel extensions to send the audio output from Audacity into Dictation’s input. Dictation did such an awful job understanding my video that I actually found it easier to transcribe the videos manually than to edit Dictation’s vomit. I found some decent software called ExpressScribe to assist in the manual transcription. ExpressScribe makes it easy to control the playback speed, and can be configured to play a segment, automatically pause, and then rewind the video to moments before it paused. The pro version can be rigged up to foot pedal controls, but I was able to do my transcription using the crippleware.

This summer I visited my friend Rob’s country house, affectionately dubbed Snowbound and located on the transcendental Baptist Pond, NH. Rob was gracious enough to invite me up for a writing retreat, though we managed to fit in some canoeing, hiking, cooking and drinking. We also gave birth to one of the most creative constructive procrastinations of my dissertation—Scuttlebutt. After all that time playing with transcription tools we began to wonder if OSX could understand itself. For years, OSX has been able to turn text to speech, and even ships with dozens of voices, with names like Vicky and Alex. What would happen if we fed OSX’s text-to-speech into its own Dictation software?
Originally we thought Scuttlebutt might analogize and highlight the way that we humans misunderstand, mishear and misremember, in particular, the lightning-quick messages that we receive on a daily basis through personal interaction, social media and email—often deeply changing the message, generalizing it, and recontextualizing it. Although voice recognition software begs us to “train” it, we thought we might have better results interacting with its infant state. We needed a reliable benchmark and settled on the first chapter of Genesis. We were curious if the voice recognition software would improve with successive iterations of feeding its own output back to itself using text-to-speech. There was one way to find out.
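Since Dictation itself is not scriptable from the command line, the iteration loop can at least be simulated. Below is a minimal sketch in Python of the Scuttlebutt idea, where a seeded noisy-channel stub stands in for the real speak-then-transcribe round trip; `noisy_transcribe` is my hypothetical stand-in, not Apple's API, and the real experiment would route audio through Soundflower as described above.

```python
import random

def noisy_transcribe(text, rng, p_drop=0.15):
    """Stand-in for a speak -> listen round trip: randomly drops words,
    mimicking how recognition garbles its input a little each pass."""
    return " ".join(w for w in text.split() if rng.random() > p_drop)

def telephone(text, iterations, seed=42):
    """Feed each round's output back in as the next round's input,
    keeping the full history so the drift can be inspected."""
    rng = random.Random(seed)  # seeded, so runs are reproducible
    history = [text]
    for _ in range(iterations):
        history.append(noisy_transcribe(history[-1], rng))
    return history

genesis = ("In the beginning God created the heaven and the earth. "
           "And the earth was without form, and void.")
for i, version in enumerate(telephone(genesis, 5)):
    print(i, version)
```

Swapping the stub for real text-to-speech and recognition would preserve the same loop structure, and the saved history is what makes it possible to see whether successive generations converge or degrade.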