Mindful Occupation: Part II

In a previous post, I described my initial involvement with #occupymentalhealth and the birth of our forthcoming zine, Mindful Occupation: Rising Up Without Burning Out.

I alluded to the heated debates that emerged around our work on this zine and my direct participation in the local NYC ‘Support’ working group. It was through these deliberative processes and exchanges that I rediscovered the promise of Occupy’s discursive ‘public space’.

As a researcher of the radical mental health movement, I recognized a unique opportunity in Liberty Park to explore the rhetoric around mental health, in context. I was hopeful that the activists involved in supporting the health and safety of the #OWS community would be critical of mainstream corporate medical models, and would be very receptive to alternative perspectives and language. The discussions that ensued were provocative and transformative, and  the experiences have helped me crystallize future directions in my research.

As the occupiers settled into Liberty Park, the task of self-governance grew in scale, with complexity that rivaled running a small town. Dozens of working groups sprang up to meet the challenge of non-hierarchical self-governance — many committed to modeling the kind of society they dreamt of living in, rather than replicating existing broken forms. The working groups took responsibility for the protesters’ basic human needs – food, shelter, sanitation, safety, spirituality – as well as organizing, maintaining, and sustaining the occupation over the short, medium, and long term.

A number of working groups took up the challenge of maintaining the health and well-being of the protesters, and in New York City these groups organized themselves into the Safety Cluster. The Safety Cluster included people committed to mediation, non-violent communication, security and de-escalation, as well as people committed to anti-oppression and reducing sexual harassment (the Safer Spaces working group). Additionally, there was a working group calling itself ‘Support’ that had been operating as a subgroup of the Medic working group. The Support group was composed primarily of mental health professionals – social workers, chaplains, psychiatrists, and a few non-traditional emotional support practitioners. Together, the Safety Cluster developed protocols for handling interpersonal conflicts in the park and organized nightly “community watch” shifts, during which members of the community worked to support protesters and to identify and defuse conflict.

While some of my fellow collaborators on the Mindful Occupation zine felt more comfortable working with the Safer Spaces working group, I realized that the best education  happens outside of our comfort zones. Tension and conflict are inherent properties of activism, as activists attempt to question and dislodge accepted norms.

Initially, I thought that this particular group of mental health professionals would be very receptive to questioning psychiatry’s mainstream medical models. After all, these individuals were volunteering their time and energy at #OWS. As it turned out, although I found many sympathizers and allies among the Support group, I was stunned by the systemic efforts to silence and marginalize voices from outside the mainstream. While many of the Support volunteers were fully engaged in critiquing social and economic injustice in the world at large, few seemed prepared to apply a self-reflective critique to their own entrenched beliefs and professional norms.

Through countless interminable meetings and mailings, I witnessed efforts to exclude the voices of those without formal expertise and training. Voices outside of the mainstream had difficulty getting their issues on the meeting agenda and were actively excluded from some events and conversations. I remained committed to working with the Support group, although I did not always feel welcome.

Within the Support group, proposals were raised for the “community watch” volunteers to wear identifying badges listing their profession (e.g., social worker, chaplain, psychiatrist) and license number, and for the active recruitment of more psychiatrists to patrol Liberty Park. Some of the medics insisted on “clearing” all of their patients medically before turning them over to social and emotional support. This sounds reasonable until you begin to question what counts as medical and, more importantly, what doesn’t. A head trauma might be medical, but what about a chemical imbalance? If all conditions are ‘medical’, then all authority around health and well-being has been effectively ceded to a narrow range of medical specialists.

In subtler ways, I believe that some of the work in this group contributed to an atmosphere of fear and control in the park. Support’s role-plays often focused on the most violent scenarios, invoking the stereotype of the knife-wielding psychotic and priming those on community watch to carry this anxiety with them throughout their encounters in the park. While the violence and sexual harassment in the park were unfortunately very real, some of the efforts to prevent these behaviors may have exacerbated them.

I witnessed that the providers of mental health services, with rare exceptions, found it incredibly difficult to listen to the recipients of their services: to ask for and solicit opinions and stories, and to incorporate that experience and judgment into the congress of their decision making.

I developed fresh insights into radical mental health through these encounters, which opened my eyes to much of what I had grown to take for granted. I learned that radical mental health has less to do with any particular dogmatic position — around hospitalization, medication, coercion, or diagnoses — and everything to do with authority and knowledge production. I learned that it is hard to find a proposition more radical than the disability rights mantra – Nothing about us without us!

#OccupyAuthority

Mobility Shifts: teaching & learning w/ video

Michael Preston and I have co-authored a chapter — Teaching and Learning with Video Annotations — for the recently released anthology, Learning Through Digital Media: Experiments in Technology and Pedagogy. The chapter recapitulates the history of multimedia annotation projects at CCNMTL, focusing especially on the pedagogies and learning outcomes that have motivated much of my work there over the years. We discuss the curricular activities that stimulated the development of our VITAL and MediaThread multimedia analysis environments.

Learning Through Digital Media was edited by New School Professor Trebor Scholz in preparation for the upcoming Mobility Shifts: An International Future of Learning Summit (Call for Workshops: submissions due by July 1). The peer-reviewed book contains a series of practical applications of digital media to formal and informal learning situations, with a focus on teaching techniques across a range of services and tools. The “ambition of this collection is to discover how to use digital media for learning on campus and off. It offers a rich selection of methodologies, social practices, and hands-on assignments by leading educators who acknowledge the opportunities created by the confluence of mobile technologies, the World Wide Web, film, video games, TV, comics, and software while also acknowledging recurring challenges.”

Trebor throws a great conference. Mobility Shifts is part of a biennial conference series on Digital Politics. The conference topic in ’09 was digital labor, and in ’13 it will be digital activism. Trebor is truly a performance artist when it comes to organizing conferences. He works really hard to get people talking to each other before the conference starts, so that when people arrive they are already in the middle of a conversation. For the Internet as Playground and Factory he produced a series of short videos introducing participants to each other (mine is here). This year he published a peer-reviewed anthology, available in a variety of formats, including hardcopy, PDF, ebook, and a web-based edition.

Learning Through Digital Media was published in March 2011 by the Institute of Distributed Creativity under a Creative Commons license (CC BY).

Pick a corpus, any corpus

A few weeks ago I participated in a brainstorming session exploring the kinds of academic research projects the WikiLeaks archives might generate. Beyond the substantive specifics of the leaked cables, the media coverage of Cablegate, and their impact on geopolitics, a central concern we recognised was the challenge of transforming torrents of qualitative data into narratives, arguments, and evidence.

The impact that technology is having on what’s knowable and how we go about knowing is a theme I have been chewing on for years – one that goes well beyond journalism and cuts across the social sciences, law, education, etc. There is an urgency to this problem, since the tools and techniques involved in these analyses are unevenly distributed. High-end corporate law firms, marketing agencies, and political parties are all embracing new approaches to making sense of petabytes. Unfortunately, impact law firms, social scientists, and journalists often don’t even know these tools exist, never mind how to use them. This is part of what I call the organizational digital divide.

During our brainstorming I formulated a new twist on a possible research agenda. I realized how daunting it has become to evaluate and calibrate the emerging suites of digital instruments. Many new tools can be used to analyze large troves of data, but it is difficult to determine what each tool is best at, and whether it does its job well.

One good way to benchmark our digital instruments is to select a standard corpus and spend lots of time researching and studying it until it is fairly well understood. Much as the Brown Corpus served computational linguistics, data miners need a training ground where we can test, hone, and sharpen our digital implements. If we bring a new tool to bear on a well-understood archive, we can evaluate its performance relative to our prior understanding.
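
To make that concrete, here is a minimal, purely illustrative sketch of the idea. The tiny corpus, the “gold” annotations, and the extract_topics() stand-in are all hypothetical; the point is simply that a new tool’s output can be scored against annotations we already trust.

```python
# A toy benchmark: score a new tool's output against prior, well-understood
# annotations of the same documents. Everything here is a hypothetical stand-in.

def extract_topics(document: str) -> set[str]:
    # Placeholder for whatever new digital instrument is being evaluated.
    return {word.lower().strip(".,") for word in document.split() if word.istitle()}

def precision_recall(predicted: set[str], gold: set[str]) -> tuple[float, float]:
    """Compare one document's output with the trusted annotations."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Documents we have already studied closely, paired with the topics we know
# they contain (our "prior understanding" of the corpus).
benchmark = [
    ("The Embassy in Baghdad cabled Washington about oil revenues.",
     {"baghdad", "washington"}),
    ("Officials in Kabul discussed reconstruction funding.",
     {"kabul"}),
]

for text, gold in benchmark:
    p, r = precision_recall(extract_topics(text), gold)
    print(f"precision={p:.2f} recall={r:.2f} :: {text[:40]}...")
```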

Currently Wikipedia serves as the de facto benchmark for many digital tools, though, since it’s a moving target, it is probably not the best choice for calibration. In many respects the selection of this kind of corpus can be arbitrary, though it needs to be adequately sophisticated, and we might as well pick something that is meaningful and interesting.

The WikiLeaks documents are an excellent contender for training the next generation of digital instruments and data miners. The AP is hard at work on new approaches for visualizing the Iraq War logs, and just last week there was a meetup for hacks and hackers working on the WikiLeaks documents: Data Science & Data Journalism. It is easy to see how Knight-funded projects like DocumentCloud converge on this problem as well. Ultimately, I think these efforts should move in the direction of interactive storytelling, not merely a passive extraction of meaning. We need tools that enable collaborative meaning-making around conceptual space, similar to what Ushahidi has done for geographic space.

That way madness lies

Bossewitch, J. (2010). Pediatric Bipolar and the Media of Madness. Ethical Human Psychology and Psychiatry, 12(3), 254-268. doi: 10.1891/1559-4343.12.3.254

I am finally published in a peer-reviewed journal! Ethical Human Psychology and Psychiatry (available for purchase here – but my cut is exactly 0%). I wasn’t expecting much, and it’s mildly anti-climactic, but I worked really hard to polish up this paper, and I have already heard from a few people I never would have communicated with otherwise. Anyway, now it’s traditionally citable, which still means something (for the next few years, at least).

This paper was at least two years in the making. It began when Rasmus Nielsen forwarded me a call for papers about drugs as a form of media for NCA ’09, and I participated in a panel organised by Robert MacDougall (my slides). Around the same time as NCA, I also attended ICSPP and had the pleasure of meeting James Tucker and Peter Breggin. This meeting eventually led to my submission to EHPP – a journal that typically publishes articles by and for psychologists, psychiatrists, and social workers. I was thrilled to help bring a dash of media and communications theory/research to that audience. Special thanks to Annie Robinson, Sascha Scatter, Bonfire Madigan, Brad Lewis, Biella Coleman, Philip Dawdy, Nicholas Mirzoeff, Julia Sonnevend, Ben Peters, and the Icarus Project for ideas, inspiration, and edits.

I have also reworked the main arguments of this essay into a chapter for the upcoming Drugs & Media: New Perspectives on Communication, Consumption and Consciousness (edited by Robert C. MacDougall). I even worked on a McLuhanesque tetrad around prodromal diagnoses (a.k.a. Psychotic Risk Syndrome).

Unfortunately, I was unable to convince Springer to go open access with my paper, but I tried, and I was able to deposit an open-access pre-print in Columbia’s institutional repository; a pre-print is also available here. If enough people make noise about open access, I hope the editors and publishers will eventually start to get the idea.

The issues raised in this paper are beginning to percolate into the mainstream. Last month Harper’s published a (flawed) long piece on predictive diagnoses: Which way madness lies: Can psychosis be prevented? Wired just ran a great piece on the backlash against the DSM-5, especially Psychotic Risk Syndrome, by one of the DSM-IV contributors: Inside the Battle to Define Mental Illness. A good friend of mine from the journalism school also produced an investigative short documentary on antipsychotic use among foster children, which just aired this weekend on PBS: The Watch List: The medication of foster children.

Finally, Crooked Beauty is coming to town next month for the 3rd annual ReelAbilities Film Fest – c’mon out to the launch party or one of the screenings:

Thursday 02/03/2011 1:00pm JCC of Mid-Westchester
Friday 02/04/2011 1:30pm Bellevue Hospital Center
Friday 02/04/2011 6:00pm New York City College of Technology
Saturday 02/05/2011 7:00pm The JCC in Manhattan
Monday 02/07/2011 6:30pm Solomon R. Guggenheim Museum
Tuesday 02/08/2011 7:00pm JCC of Staten Island

It’s going to be a great year.

Now Playing: Nothing but the whole truth

I recently learned about a fascinating trend in litigation that is quietly transforming courtroom testimony and is spreading fast and far – video depositions.

I talked with a consultant who helps attorneys process video depositions. In the courtroom, attorneys are juxtaposing live testimony with segments from depositions.  Video clips of witnesses reinforcing (or contradicting) themselves are far more powerful than merely reading back the transcript. The courtroom has always been about performance, but these videos have taken this to a new level, as savvy lawyers manipulate appearances and emotions. Increasingly all depositions are being recorded, just as they are transcribed.

Apart from the ways that courtroom proceedings are being transformed, I am also intrigued by the software that is undoubtedly in development to support these operations. In addition to conventional A/V support, working effectively with hundreds of hours of video involves archiving, indexing, distributing, editing, and clipping.  At about a day or two of testimony per witness, and dozens of witnesses per trial, the numbers add up pretty quickly.

As cases accumulate, and multiple associates begin working with and analyzing video, law firms will quickly recognize the desirability of networked, collaborative video annotation environments. Some large firms (and their vendors) may have already begun developing solutions. However, the consultant I spoke with was storing video locally on a laptop hard drive and tracking it with an Access database, so opportunities are knocking. Without a doubt, many of the tools that will be highlighted at the upcoming Open Video Conference (OpenCast, Kaltura, and CCNMTL’s Mediathread come to mind) have overlapping features and requirements.

Once again the organizational digital divide looms, and I am deeply concerned that only the high-end corporate law firms will be able to invest in the competencies and capacities to make this work. Meanwhile, the impact law firms (along with journalists and social scientists) will be playing catch-up, handicapped by this powerful new differential.

I wonder how quickly this practice will spread.

Oyez, Oyez, Oyez!

Reconstruction time again

This week the j-school was abuzz with the conversation successfully provoked by the publication of a detailed, comprehensive report, complete with recommendations, on how to save the endangered species of professional journalists.

One of the report’s two primary authors is my professor Michael Schudson, a thoughtful scholar and a great teacher who is eminently approachable for advice. My friend Dr. C.W. Anderson was the research assistant on the project, and I know he worked pretty hard to make this happen, though he didn’t go on a world tour with the authors.

The report was solid, and it managed to gain a lot of attention and stir up a bit of a ruckus. The recommendations seemed reasonable to me, though not quite as radical as I would have hoped…

I have been involved in quite a few conversations about the future of journalism this year, and while there has been a great deal of discussion about how the forms of organization behind journalistic production are changing, there has been very little talk about how what’s being produced is changing too.

I am reminded of Bob Stein’s predictions about the Future of the Book. One of his central riffs is his epiphany that the digital book is much less about ebooks and multimedia, and much more about a shift away from the book as a static, finished, complete object. He imagines a new emergent form in perpetual beta, with multiple authors, around which revisions, annotations, and communities form. Any of his talks from the last few years probably picks up on this theme.

While many journalists are talking about producing articles using new media forms, the discussions remind me a bit of the early days of cinema, when they used to film plays.

I’m imagining a shift in journalism towards interactive storytelling, cumulative aggregation, and distributed collaboration. We have begun to see hints of experiments along these lines in projects like Times Topics, Swiftapp, and Mapping Controversies, but this NPR project profiled last year in CJR really hits the mark: So Cool: How an economic weather map changed the climate. I think these strategies might easily apply to prose, not just to data, interactive graphics, and maps.

Comparing journalism with education: will journalism only use new media to create the equivalent of a jazzed-up, one-way lecture? What does interactive storytelling even look like? How will we teach the next generation of journalists to create works that are designed to be picked up, re-appropriated, and remixed?

With these ideas in mind, I would have loved to see some recommendations in this report designed in anticipation of this future, not merely to prop up yesterday’s decaying models. The patchwork of the future can best be supported by encouraging greater transparency, open licensing, and a culture of collaboration. What about attaching open-licensing mandates to this foundation support? Mandating the sharing of primary sources? Teaching journalists of the future to share, and to learn from their readers? These aren’t all policy recommendations, but I think they need to be thought through and woven into this conversation.

PS – While the future of journalism may be difficult to discern, the future of the newspaper suddenly seems pretty clear 😉

The Interdisciplinary Kissing Problem

Last week I participated in the architecture school’s visualization seminar and was treated to a mind-blowing presentation by Tony Jebara, a Columbia computer scientist. Jebara is a young associate professor who researches machine learning, graphs, and visualizations, and is also the chief scientist at CitySense.com. His lab “develops novel algorithms that use data to model complex real-world phenomena and to make accurate predictions about them.” They also work on improving the readability of massive volumes of multi-dimensional data, and are currently focusing on making sense of networks of people and places (take a wild guess who else is interested in their work).

CitySense is an application that runs on mobile devices and from their location data…

…builds a network of real places (like the World Wide Web) and a network of real people (like FaceBook) in order to apply online techniques like smart search, recommendation, collaborative filtering, marketing, user clustering and prediction. Today, Sense Networks tracks almost 10,000,000 mobile phones and location devices through partnerships with carriers, device manufacturers, application developers, taxi fleets, and its own subscribers.

The system attempts to discern friendships, as well as an establishment’s clientele, based on calculations of users’ physical proximity to each other, combined with a growing semantic understanding of places (e.g., all the Starbucks in the city probably have more in common with each other, in some sense, than they do with their geographical neighbours). According to Jebara, the system is already able to infer the types of people who frequent certain nightclubs—students, hipsters, professionals, bankers, tourists—based purely on an analysis of these overlapping networks of millions of human and non-human actors. The cell phone carriers have also used this kind of analysis to reduce “churn”—if one person leaves the network, the carriers respond rapidly by offering promotions to the defector’s close network to seal the leak before the damage spreads.
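
For a feel of the mechanics, here is a deliberately simplified sketch of that general idea. The co-location counts, the names, and the closest_contacts() helper are hypothetical illustrations of my own, not Sense Networks’ actual pipeline:

```python
# A toy version of the idea: infer "closeness" from co-location counts, then,
# when a subscriber defects, target their nearest neighbors in that graph.
from collections import defaultdict

# Hypothetical co-location observations: (person, person, times seen together).
observations = [
    ("ana", "ben", 14),
    ("ana", "cai", 2),
    ("ben", "cai", 9),
    ("ben", "dev", 11),
]

# Build a weighted, undirected "proximity" graph.
graph = defaultdict(dict)
for a, b, count in observations:
    graph[a][b] = count
    graph[b][a] = count

def closest_contacts(person, k=2):
    """Rank a person's contacts by how often they were observed together."""
    neighbors = graph.get(person, {})
    return sorted(neighbors, key=neighbors.get, reverse=True)[:k]

# If "ben" cancels his contract, a carrier following this logic would aim
# retention offers at his strongest proximity ties.
print(closest_contacts("ben"))  # ['ana', 'dev']
```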

Incidentally, the matching algorithms that Jebara’s lab uses are the same family of algorithms that “power” Google AdWords, so at least for the time being, despite the captivating sacred geometry, I am sticking with human recommendations 😉

I left the talk with many questions and reflections:

  • I understand that calculating similarity is the special sauce in any recommendation implementation, but I am somewhat troubled by the conflation of proximity with similarity. Consider the Upper East Side nanny, or the parole officer and the parolee (although Hollywood continues to try to convince us of the identity relation between detectives and criminals) – is there something essential being reduced or discarded when we reduce similarity to proximity?

    Is there any room in this model for the “kind” of relationship that motivates the physical proximity? Perhaps something similar to Yahoo/Berkeley TagMaps? Their work requires explicit human input, and is already anticipating feedback effects and users “gaming” the system. Perhaps the network effects CitySense models will disregard much of this “noise,” but, at what social/cultural/political expense?

  • Similarly, I don’t really understand how this algorithm justifies the “distance” calculation. The multi-dimensional spaces they are flattening are incommensurate. Unlike the dimensions of space-time, the spaces described are composed of values, possibly expressing orders of worth in different units that cannot be transformed into one another. What is the Pythagorean theorem really measuring here? (A toy illustration follows this list.)
  • How can we do a better job of offering users more than a 2D scalar representation? Can we provide them with more dials and knobs so they can customize and tune their view? How can we craft engaging environments that encourage interactive storytelling and help users carve narratives out of databases?
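
On the distance question, here is a toy illustration, with hypothetical venues and made-up numbers (not CitySense’s actual features), of why a plain Pythagorean distance over incommensurate dimensions is a modeling choice rather than a neutral measurement:

```python
# Toy example: the "closest" venue depends entirely on how we make units commensurate.
import math

def euclidean(a, b):
    """Pythagorean distance in whatever raw units the features happen to use."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Each venue described by (visits per week, average spend in dollars).
venue_a = (14, 5)
venue_b = (2, 6)    # very different habits, similar spend
venue_c = (13, 40)  # similar habits, pricier

# In raw units the dollar gap dominates, so venue_a looks closer to venue_b.
print(euclidean(venue_a, venue_b))  # ~12.0
print(euclidean(venue_a, venue_c))  # ~35.0

# Rescale each axis by an (arbitrary) assumed range and the ordering flips:
# now venue_a is closer to venue_c.
def rescale(v, ranges=(20, 100)):
    return tuple(x / r for x, r in zip(v, ranges))

print(euclidean(rescale(venue_a), rescale(venue_b)))  # ~0.60
print(euclidean(rescale(venue_a), rescale(venue_c)))  # ~0.35
```

Whichever axis happens to carry the biggest raw numbers dominates the geometry, and any rescaling is itself a judgment about what counts as similar.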

I think it is really important to understand these assumptions, and possibly to play with them. Would it make a difference if users had access to their own raw data and understood the digital footprints they were leaving? It seems crucial that users understand the inner workings of these systems so they can take full advantage of them, and also avoid being exploited by them.

As Computer Science takes on more of the characteristics of Architecture and Urban Planning, we need to turn to questions of desire, intent, and purpose.

What impacts do we hope that CitySense will have on social life? Not in a naive deterministic sense, but in terms of catalyzing, facilitating, and favoring certain social interactions over others. What hypotheses can we form about the kinds of impacts a system like this might have on an urban space? Amazingly, these hypotheses might be testable. How might this system be tuned to increase the odds of these outcomes, once we articulate the mission/purpose of the design?

Finally, what about the promise a tool like this might have for improving the public good? At work we are partnering with the Earth Institute, the School of Public Health, and others on third-world development projects that might benefit greatly from the application of these visualizations (RapidSMS, a project tracking malnutrition in Africa, comes to mind).

How can we work on redressing the organizational digital divide, and get these kinds of tools – the server side, not the client side – beyond the hands of corporations and letter agencies and into the hands of social scientists, architects, journalists, and educators?

Can we mobilize the student labor to benefit the public good?

Interview: Christopher Mackie on Knight’s Hyperlocal Gambit

Last week I reflected on the Everyblock.com acquisition. Since then, Knight’s journalism program director has blogged about their perspective on the sale, and some great conversations have continued. I have also had a wonderful opportunity to discuss the purchase with Christopher Mackie, a program officer at the Mellon Foundation. Chris is the Associate Program Officer in the Research in Information Technology program and is closely involved in Mellon-funded software initiatives.

Here are some excerpts from our conversation:

JB: Thanks so much for taking the time to share some of your thoughts on the recent purchase of Everyblock. As you know, Everyblock is a foundation-sponsored, open-source journalism startup that was recently acquired by msnbc.com. Even though the Knight Foundation mandated that all the software they funded be released under an open (GPLv3) license, the future openness of this application is now uncertain. As an important funder of many valuable open-source software projects, I am wondering if you could share your reactions to this news. How do you feel about the outcome? Did the deal take you by surprise?

CM: Hi Jonah – good to talk with you! Before we start, let me be clear about a couple of things. First, I don’t speak for the Mellon Foundation on this, so all I can share are my own views. Second, I’m by no means the most knowledgeable person around when it comes to intellectual property issues. In fact, I can find several people who know more than I do without even leaving the building at Mellon. What I do have is a particular perspective on IP issues that has been developed in large part from my work with our information technology program. I hope that my perspective is useful, but I wouldn’t want anyone confusing it with either an official Mellon perspective or some sort of consensus view among experts. As far as I can tell, consensus only exists among IP experts on issues that no one cares about.

That said, as I follow the conversation, what appears to be happening with Everyblock is that a number of people are seeing for the first time some issues that have been seen before in other parts of the software space. In the process of thinking through the implications of those developments, they’re reinventing old arguments, most of which are insufficiently nuanced to be valid. Eventually, they’ll work it out, but right now, many people are still looking for too-simplistic answers.

JB: This moment is a great opportunity to teach grantmakers and journalists some really important lessons about intellectual property and the complexities of open-source software, community, and culture – is there anything specific you think we can learn from this transaction?

CM: Rather than try to parse the many issues individually, let me just suggest a couple of basic principles that I use when I’m trying to advise projects on licensing issues:

First, “the context is more important than the license.” The debate over BSD/GPL tends to take place at a very abstract, ideological level. This is the wrong level: when it comes to licensing, I believe that you really need to get down and grub in the dirt. Licensing decisions are almost always made better when they’re made in a carefully contextualized fashion.

The single most important contextual dimension I know concerns the “organizational complexity” of the product. That’s my own, made-up term to describe the need to integrate your project with other organizational systems, human and software. Organizationally complex software requires significant adaptation or customization in most installations – which implies the need for significant vendor involvement in many installations. A good example of an organizationally complex system is something like a financial system, which tends to have to connect to all sorts of other software and to interact with all sorts of human workflows. Good examples of organizationally simple software are things like a Web browser or a word processor, which ought to work out-of-the-box without any customization or integration.

If you have an organizationally complex product, BSD licenses tend to work better than GPL. Why? BSD licenses don’t scare off the vendors who have to poke around the insides of the product in order to support it, and who worry that their private IP may be compromised by an accidental contact with a GPL’d product’s innards. I’ve seen the arguments about whether this is actually a valid concern, by the way, and I’m not particularly invested in learning the right answer, if there even is one. As long as vendors believe or fear it to be true – and many do – then it might as well be true. Without vendors, it’s hard for an organizationally complex project to thrive, so BSD tends to win out in those sorts of projects.

A second dimension concerns the degree of “market power” held by the users. Market power depends on the ability of users to recognize themselves as having shared interests and then to act on those shared interests. A user community that has market power can issue a credible threat to punish a misbehaving vendor; one lacking market power, cannot. This often isn’t a simple determination; for instance, consider Mozilla. At the core of the Mozilla community, as with most open source communities, is an intense, dedicated group that sees itself as having shared interests and clearly has the will to punish someone who attempts to misuse the Mozilla IP. But do they have the ability? After all, they’re only a tiny fraction of all Mozilla users. The rest are a widely distributed, diffuse group that would never imagine themselves as having much in the way of common purpose, beyond the desire to have a free Web browser. Which constituency matters more in calculating market power? It almost certainly depends on the context.

Some people object to the phrase “market power,” preferring terms like “strength of community” or “trust.” I’m not too worried about what one calls it, but I will say this: once you get past the rhetoric, it mostly boils down to the community’s ability to deliver a credible threat to punish a malfeasant vendor. If the user community ceases to value the project enough to want to defend it against vendor malfeasance, or ceases to be able to act together effectively to deliver that defense, then, however much they value the project individually, it is unlikely to stay open no matter the license.

There are other dimensions to think about, too; for instance, a project having multiple vendors is safer than one with only a single vendor, or none, because non-colluding vendors tend to act in ways that keep each other well-behaved. But those are the biggest two, in my experience so far.

Earlier, you brought up the Sakai and OpenCast projects, both of which have been funded by us (and by other foundations, such as the William and Flora Hewlett Foundation, as well). I believe that these two characteristics are why Sakai and OpenCast, as well as other community source projects, are able to use BSD-style licenses (they actually use the Educational Community License, or ECL, which is almost-but-not-quite the Apache license). Community source software projects produce organizationally complex products deployed by a coherent community of institutions willing and able to exercise market power if needed. For instance, the community of higher education institutions seems to have no trouble understanding their common interest in keeping Sakai’s IP open, even if they’re not Sakai users themselves–and as a group, they seem to have the will and ability to punish vendors that attempt to misbehave. Most vendors sell more than one product into these institutions, so they stand to lose more than they can gain from bad behavior on any single project like Sakai. The result: there is virtually no evidence of significant vendor malfeasance in any of the community source projects, despite the use of a license that in theory allows any vendor to close the code at any time. The closest you can find is the Blackboard patent dispute—which is a challenge to the ownership of the IP, not its licensing, and in which Blackboard has been careful to steer clear of any direct threat to the Sakai community. But would every vendor’s good behavior continue if the community stopped caring about Sakai? I seriously doubt it.

On the other hand, if you have a product which is organizationally simple, as well as having a relatively powerless user community, then get thee to the GPL, because the temptations to steal and close the code just become too great for some vendors to resist. We’ve seen some examples of that, recently, too. Still, don’t believe that the GPL will protect you if your community cannot or will not. If the community is weak enough, nothing can really protect you.

Second, “IP ownership trumps IP licensing.” Some of the commentators on Everyblock that I have read so far are circling around this point, but none has yet followed the logic all the way. All the debate over licensing tends to obscure the reality that final power lies in ownership, not licensing. For a surprising number of situations, licensing is little more than a red herring.

If I own the code, I can issue you a GPL, someone else a BSD, and yet another license to a third party–take a look at the Mozilla licensing scheme sometime, for an example. If I’m also responsible for updating the code, I can change the license to all of you at any time simply by issuing a new version. Sure, you can still use the old version under the old license, but if I really want to make it tough for you to keep using the old version, there are ways. Finally, as you’re seeing with Everyblock, when someone owns the code privately, there’s nothing that prevents someone else from buying the code – often by buying the firm itself – and changing the licensing terms.

I have no insight into MSNBC’s plans for Everyblock. Maybe they’ll close the code; maybe not. Maybe they’ll keep something open but close the commercial services they build on top of it – I don’t know. As your commentators have noted, no one seems to know – and that’s part of the problem with privately owned but open-licensed code. You just never know.

That’s one reason why I tend to be wary about the “commercial OSS” model, no matter what license it uses. In many commercial OSS projects that I’ve seen, even the GPL is effectively just a cover for what is to all intents and purposes a closed code-base, because the owner/vendor is the only entity on earth that has any realistic likelihood of supporting or extending or developing the code further. Ask someone in the MySQL community how protected they feel by their license – or ask the people using Zimbra how they expected to fare if Microsoft bought Yahoo. It’s not about whether the current owner is good, bad, or ugly; it’s about the fact that you can never know whether it will be the same owner tomorrow. That’s a lot of uncertainty on which to base a mission-critical technology choice.

JB: So, given the diverse range of contexts you describe, what specific strategies have you deployed to mitigate these risks?

CM: Good question – and it’s important to emphasize the word “mitigate,” because there are no guarantees and there’s no such thing as absolute effectiveness. One thing we do in our program is to use IP agreements (a contract with the owner of the code to be developed) that require any transfer of ownership to be to an entity which must also agree to the terms of our IP agreement. In a sense, we make the ownership viral, whether or not the license is viral. That’s not a perfect solution, but it appears to be working for us so far.

It also helps that we make our grants to non-profit organizations, which can’t be bought the same way you can buy a private or publicly held firm. When for-profits are involved in our grants, which sometimes happens when grantees decide to contract with for-profit developers, my program (Mellon’s Program in Research in Information Technology) has always required that the non-profit be the IP owner. We are not alone in this; for instance, when several major technology corporations—all for-profits—decided to share and protect some of their own intellectual property in an open environment, they didn’t trust it to a for-profit, but instead created the Eclipse Foundation, a non-profit that owns the Eclipse Project IP. Ditto the Mozilla Foundation.

Still, it bears repeating that just putting your IP into a non-profit mindlessly doesn’t eliminate the risk, because it matters how the non-profit is structured and governed: nothing says a non-profit can’t be malfeasant, too, if in somewhat different ways.

JB: Do you think that the Knight Foundation was swindled? Did they get outfoxed by msnbc.com, or do you think they are happy with this outcome?

CM: I have no knowledge about what the Knight Foundation intended – has anybody bothered to ask them? [ed note: this conversation took place before Knight made a public statement] I think it would be foolish simply to assume that the grant makers have been outfoxed by this development: it may have been exactly what they wanted, or just a risk they decided beforehand that it was worthwhile to run. Keep in mind, too, that MSNBC hasn’t said or done anything about closing the code so far. Even if the Knight Foundation did want perpetual openness and the strategy wasn’t perfect, there’s still a chance that they’ll get what they wanted.

All that’s really happened here is that the sense of security held by at least some members of the Everyblock community has been shaken by the purchase news. But it was always a false sense of security; at this moment, as far as I can tell, nothing objective about the openness of the project has actually changed.

JB: Do you have any closing thoughts about this deal, or what you think grantmakers and open source advocates can learn from it?

CM: If Everyblock serves to help some members of the openness community to get past their ideological blinders and recognize that IP ownership and licensing decisions are subtle challenges with relatively few simple, definitive answers, it will have done some good. After all, even the best source code is relatively ephemeral, but we can hope that such wisdom will last forever.

JB: Thanks so much for your time and wisdom. I know a lot of people who were quite surprised by this turn of events, and it feels like we all need a crash course in IP law /and/ sociology to navigate the intricacies of this political economy. Even veteran lawyers and free software evangelists are often confused by many of these complexities. I really hope that this case and your analysis will better inform future work of this type. Good luck keeping it open (and real)!

CM: Thanks very much. I hope what I had to say is useful.

Disorganized thinking

As I’ve claimed previously, Big Pharma’s crimes and cover-ups will soon make Big Tobacco’s scandals look like jaywalking.

AstraZeneca’s Seroquel trial began last week, and the industry’s criminal antics surrounding anti-psychotics are coming into better focus.  Documents introduced as evidence are confirming that, like Eli Lilly with Zyprexa(Kills), AstraZeneca knowingly downplayed the fatal side-effects of their toxic pills. They covered up the fact that Seroquel causes diabetes and massive weight gain, and have been gaming the drug approval process to expand the diagnostic reach of their drugs.

In a move which hits new lows, even for Pharma, documents introduced into evidence reveal sex scandals and conflicts of interest in the approval of Seroquel for treating depression, the burying of unfavourable studies, and deeper insight into the pathological cognitive dissonance underlying Pharma’s logic. Get ’em while they’re hot!

43_Exhibit 15.pdf

There may be a rationale to explain why acutely psychotic patients may gain weight in the short term, following effective therapy. The relief of negative symptoms, apathy, etc, disorganized thinking, may result in return to normal activities like having regular meals.

I see. Blame the weight gain on the crazy people. Gotta love it. I am reminded of the current economic situation, where corporations privatize the profits and socialize the risks/losses. All the good is caused by the drugs, and the patients/victims take all the responsibility and blame for the bad.

Meanwhile, this week yielded a few more alternate hypotheses on behavioural issues in children:

Are Bad Sleeping Habits Driving Us Mad?

The 3 R’s? A Fourth Is Crucial, Too: Recess

adding to the growing list (nutritional issues, boredom, and increased stress) of plausible explanations for children’s irritability, restlessness, and erratic behaviour.

I thought the scientific method was supposed to be about systematically exploring causal possibility spaces, and iteratively refining our narrative understanding based on critical observation. Pharma’s scientists have seriously lost their way. They have betrayed the sceptical stance at the foundation of scientific knowledge production.

That’s some abstract, theoretical jargon, but the threat here is quite concrete and real.  Just ask these horrifically abused elderly patients. People who have never manifested psychotic symptoms are no longer safe!

The future of psychiatry is quite disturbed. The field might actually beat omniscient surveillance to the punch on absolute control over the populace. But heaven help us if/when Big Brother forges an alliance with Big Pharma.

Hot off the Collaborative Digital Press

At long last! Wiki Writing: Collaborative Learning in the College Classroom has finally been published. The book is an anthology of peer-reviewed essays on teaching and learning with wikis, and its first two chapters were written by me, my coworkers, and my friends. Mark Phillipson contributed “Wikis in the Classroom: A Taxonomy,” and John Frankfurt, Alex Gail Shermansong, and I teamed up with Professor Robin Kelley, our faculty partner on the Social Justice Wiki, to write “Wiki Justice, Social Ergonomics, and Ethical Collaborations.”

More than three years after the call for papers, and following a long and arduous review process, the hard copy of this book is now available for purchase from the University of Michigan Press and at Amazon, and it will soon be available to explore free of charge at the Digital Culture Books website. I think they may have grown the trees before killing them for the paper.

The half-life of the subject matter certainly warranted a more rapid turnaround, but I guess that’s the sound of dying media letting out its last wheeze. I am also disappointed that the hard copy includes the wrong, older version of my diagram. So, for my first erratum, here is the figure that should have been printed: Social Software Value Space.

Gripe, gripe, gripe. Actually, I am thrilled this came together, and think the book looks great and will stand the test of time. I’m also happy the digital version of the book will be available for free, though I am not certain the book made it out under a Creative Commons license. A huge thanks to our editors (Robert E. Cummings and Matt Barton, whom I have yet to meet in person) for persevering and making this happen.
