The Rise of Surveillance Psychiatry and the Mad Underground

This past year I have been working on turning my dissertation into a trade book. I am making steady but slow progress; print remains an important but slooooow medium.

My concerns around preventative psychiatric diagnosis and treatment motivated and propelled my dissertation, and they form the backdrop of my ethnographic study of the mad movement. My book will engage with these threats more directly and position them alongside the demands of the Mad Underground. The ideas of groups such as the Institute for the Development of Human Arts and NYC Icarus offer us some hope of defusing the menacing time-bomb of surveillance psychiatry before it explodes.

In the past few weeks, a few stories broke that I feel compelled to write about in the context of my research:

  • Facebook announced (described in more technical detail here) the deployment of AI tools designed for the “proactive detection” of, and intervention protocols for, users deemed to be at risk of suicide.
  • Researchers published a paper in Nature claiming that, based on brain scans, they could distinguish people who think about suicide from those who don’t (and identify those who have acted on their thoughts).
  • The FDA approved the first drug embedded with sensors that digitally monitor whether you have taken it: the antipsychotic Abilify.

These stories should be understood as part of a bigger pattern that is emerging around diagnosis and treatment.  Large, centralized, digital social networks and data-gathering platforms have come to dominate our economy and our culture, and technology is being shaped by those in power to magnify their dominance.  In the domain of mental health, huge pools of data are being used to train algorithms to identify signs of mental illness. I call this practice “surveillance psychiatry.”

Electronic health records, the data mining of social networks, and even the algorithmic classification of video surveillance will significantly amplify this approach. Researchers are claiming they can diagnose depression based on the color and saturation of the photos in your Instagram feed and predict manic episodes based on your Facebook status updates. Corporations and governments are salivating at the prospect of identifying vulnerability and dissent.
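
To make concrete how thin these “diagnoses” can be, here is a minimal, purely illustrative sketch of that kind of pipeline: reduce each photo to a few color statistics, then fit an off-the-shelf classifier against self-reported labels. The images and labels below are synthetic and the feature choices are my own guess, not the researchers’ actual method.

```python
# A toy illustration of the kind of pipeline described above: reduce each
# photo to a handful of color statistics, then fit an off-the-shelf
# classifier against self-reported labels. The images and labels below are
# synthetic and the feature choices are my own guess, not the researchers'
# actual method.

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def color_features(img):
    """Mean hue, saturation, and brightness of an image (0-255 each)."""
    hsv = np.asarray(img.convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

def fake_feed(n_photos):
    """Stand-in for a user's photo feed: random 64x64 RGB images."""
    return [
        Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8), "RGB")
        for _ in range(n_photos)
    ]

feeds = [fake_feed(20) for _ in range(40)]   # 40 hypothetical users
X = np.array([np.mean([color_features(p) for p in feed], axis=0) for feed in feeds])
y = rng.integers(0, 2, size=len(feeds))      # made-up "depressed or not" labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("flagged:", clf.predict(X[:5]))        # the model now "diagnoses" from color stats
```

The unsettling part is how little such a system needs before it starts sorting people: a few averaged pixel values and a stock classifier.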

The emphasis on treating risk rather than disease predates the arrival of big data, but together they are now ushering in an era of algorithmic diagnosis based on the data mining of our social media and other digital trails. Although proponents will carefully use the language of suicide and violence prevention, the lines between politics and harm reduction are not so clear. When algorithms are interpreting our tweets to determine who is crazy, it will become increasingly difficult to avoid a diagnosis, even if we carefully watch what we say. This environment will severely inhibit people’s willingness to seek support, and it is creating an atmosphere where people are conditioned to report behaviors that appear different or abnormal.

Reducing suicide is a good thing, but remember that this same infrastructure will be able to police normality, proactively detecting all forms of deviance, dissent, and protest. A nuanced critique, informed by people with lived experience, needs to shape the development of these systems, since context is everything. We also need to spend more resources understanding how and why people become suicidal, and the long-term consequences of treatment by our health care systems, alongside the focus on short-term interventions.

*  *  *

Zuck’s Folly

Zuckerberg proudly claims this program as proof that AI can be used for good, but Facebook needs to do much more to improve the chances that its “proactive detection” tools don’t backfire.

First, the company needs to publicly commit not to use these tools to target manipulative advertising at psychologically vulnerable users, as it was caught doing in Australia.

Next, this Mashable post captures how important it is to pressure Facebook to be more transparent about how this algorithm works, to share evidence of its effectiveness, and to let people know if they have been flagged and correct any mistakes. There are definitely deep theoretical challenges around getting AI to explain itself, but this system will undoubtedly need to be extensively calibrated, as there are real, negative repercussions to these interventions (stigma, violent coercion, etc.) that need to be acknowledged in this calculus.
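
To illustrate why calibration is not a purely technical question, here is a small, hypothetical sketch (all numbers invented, and not anything Facebook has published) of the trade-off: every threshold choice implicitly prices a false alarm, with its stigma and potential police contact, against a missed case.

```python
# A hedged sketch of the calibration question: what score threshold should
# trigger an intervention when a false alarm carries real costs (stigma,
# potential police contact) and a missed case carries others? Every number
# below is invented; the point is that the "right" threshold is a value
# judgment, not something the model can settle on its own.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical risk scores from a detector: low-ish for the not-at-risk
# majority, somewhat higher for a small at-risk group.
scores_negative = rng.beta(2, 8, size=9_900)   # 9,900 people not at risk
scores_positive = rng.beta(5, 5, size=100)     # 100 people at risk

COST_FALSE_ALARM = 1.0    # one unwanted, potentially coercive intervention
COST_MISSED_CASE = 20.0   # invented relative weight for a missed person

for threshold in (0.3, 0.5, 0.7):
    false_alarms = int((scores_negative >= threshold).sum())
    missed = int((scores_positive < threshold).sum())
    total_cost = false_alarms * COST_FALSE_ALARM + missed * COST_MISSED_CASE
    print(f"threshold {threshold:.1f}: {false_alarms} false alarms, "
          f"{missed} missed, weighted cost {total_cost:.0f}")
```

Whoever sets those cost weights is making a moral and political decision about whose harm counts, which is exactly why people with lived experience need to be in the room.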

Facebook also needs to address the potential harms this program can cause. People I have interviewed, many with lived experience, are more than creeped out by these developments. There is a real risk that tools and practices like this may severely chill people’s willingness to share their feelings, openly and honestly. If people don’t feel safe, many won’t reach out for help, bottling up their emotions and compounding their loneliness and pain. If you believe that talking and sharing are therapeutic, we need to make sure safe spaces exist where people can express themselves without fear of triggering a response from authorities. Admittedly, those spaces probably aren’t on Facebook, but I hope anonymous support will not be the sole purview of the technical elite.

That people tend to appreciate a human (a friend or family member) asking them about their suicidal thoughts does not mean they will appreciate an inhuman apparatus asking them (an AI, a corporation, a stranger, or anything that feels canned or cold). Some of this is a matter of protocol: I know people who received emails from Facebook inquiring about their mood, and the intrusion was almost enough to drive them off a ledge. Some of it is also a matter of calibration, as is often the case when balancing compassion and concern. So, for example, prodding people’s friends to help support the person in crisis should be emphasized over involving law enforcement.

*  *  *

Emotional Mind Reading

The Nature research, covered in this NPR story – Brain patterns may predict people at risk of suicide – is also startling. My first reaction was that it sounded eerily similar to Blade Runner’s Voight-Kampff test, especially the Baseline test in Blade Runner 2049. I was shocked to learn about the real-life Emotional Stroop Test.

The study makes some sweeping claims for a pretty small sample size (79 subjects), and it tries to usher in a new era where mind reading based on brain scans counts as a valid method of behavioral prediction. I am unaware of any other domain outside of mental health where this technique is accepted. While the published study does not make any policy recommendations, this kind of work opens huge concerns around freedom, agency, and thought crimes.
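
To put the sample size in perspective, here is a back-of-the-envelope calculation. The hit rate below is hypothetical (I am not quoting the paper’s figures); the point is simply that any accuracy estimated from 79 subjects comes with a wide margin of uncertainty.

```python
# Back-of-the-envelope check on what 79 subjects can support. The hit rate
# below is hypothetical (not the paper's reported figure); the point is that
# any accuracy estimated from n = 79 comes with a wide margin of error.

import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

n_subjects = 79
correct = 72  # invented: suppose the classifier got 72 of 79 right
low, high = wilson_interval(correct, n_subjects)
print(f"observed accuracy {correct / n_subjects:.0%}, "
      f"95% CI roughly {low:.0%} to {high:.0%}")
```

That is a long way from the kind of certainty you would want before anyone’s thoughts become actionable.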

Did Philip K. Dick get anything wrong?!?

*  *  *

Sensor-Enabled Pills

Even Dr. Jeffrey Lieberman, the Chair of Columbia’s Department of Psychiatry and former president of the American Psychiatric Association, noted:

There’s an irony in it being given to people with mental disorders that can include delusions. It’s like a biomedical Big Brother.

Perhaps Jeffrey should spend more time reflecting on why so many patients don’t want to take these toxic antipsychotics. Maybe he should take this course offered by Mad in America on their short- and long-term effects for continuing education credits. Maybe he should try a dose of Abilify himself.

Really, I don’t think I need to say too much about this one. I just wanted to remind people that Colbert may have said it best:

because nothing is more reassuring to a schizophrenic than a corporation inserting sensors into your body and beaming that information to people watching your every move
