Now Playing: Nothing but the whole truth

I recently learned about a fascinating trend in litigation that is quietly transforming courtroom testimony, and spreading fast and far – video depositions.

I talked with a consultant who helps attorneys process video depositions. In the courtroom, attorneys are juxtaposing live testimony with segments from depositions. Video clips of witnesses reinforcing (or contradicting) themselves are far more powerful than a transcript read back aloud. The courtroom has always been about performance, but these videos have taken the performance to a new level, as savvy lawyers manipulate appearances and emotions. Increasingly, depositions are recorded as a matter of course, just as they are transcribed.

Apart from the ways that courtroom proceedings are being transformed, I am also intrigued by the software that is undoubtedly in development to support these operations. In addition to conventional A/V support, working effectively with hundreds of hours of video involves archiving, indexing, distributing, editing, and clipping. At about a day or two of testimony per witness, and dozens of witnesses per trial, the numbers add up pretty quickly.
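
A quick back-of-the-envelope sketch (in Python, with numbers that are purely illustrative guesses on my part) shows how fast the volume grows:

```python
# Illustrative, made-up assumptions -- not figures from any actual trial.
HOURS_PER_DAY = 7          # one day of recorded testimony
DAYS_PER_WITNESS = 1.5     # "a day or two" per witness
WITNESSES = 36             # "dozens" of witnesses per trial
GB_PER_HOUR = 2            # a modest standard-definition bitrate

hours = HOURS_PER_DAY * DAYS_PER_WITNESS * WITNESSES
print(f"~{hours:.0f} hours of video, roughly {hours * GB_PER_HOUR:.0f} GB per trial")
# ~378 hours of video, roughly 756 GB per trial
```

Even with generous assumptions, a single trial quickly outgrows ad hoc storage.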

As cases accumulate, and multiple associates begin working with and analyzing video, law firms will quickly recognize the desirability of networked, collaborative video annotation environments. Some large firms (and their vendors) may have already begun developing solutions. However, the consultant I spoke with was storing video locally on a laptop hard drive and tracking it with an Access database, so opportunities are knocking. Without a doubt, many of the tools that will be highlighted at the upcoming Open Video Conference (OpenCast, Kaltura, and CCNMTL's Mediathread come to mind) have overlapping features and requirements.

Once again the organizational digital divide looms, and I am deeply concerned that only the high-end corporate law firms will be able to invest in the competencies and capacities to make this work. Meanwhile, impact law firms (along with journalists and social scientists) will be playing catch-up, handicapped by this powerful new differential.

I wonder how quickly this practice will spread?

Oyez, Oyez, Oyez!

Pick a world… any world…

Last week I attended the second half of the US Social Forum – not exactly a conference, but more of a convergence, or a process, where 20,000 people gathered in Detroit to build coalitions, alliances, and movements. The World Social Forum began as a response to the World Economic Forum – why should the power elite be the only ones planning humanity's future?!?

The USSF web site and the People's Media Center (made possible by some righteous radical techies, the Design Action Collective, riseup.net, and May First/People Link) should give you a flavor of what the event was all about. But be aware that the streaming video and social media barely scratch the surface of the experience.

The forum was organized around 2-hour workshops and over 100 4-hour People's Movement Assemblies. The sessions were in-depth and quite intensive, and the format was designed to encourage small-group interactions, so people could connect and get to know each other.

The assemblies were geared toward crafting resolutions and actions – I attended parts of the transformative justice and healing PMA, and it was really well facilitated. During the closing ceremony the assemblies synthesized their resolutions, scheduled actions, and asked for commitments of solidarity around their issues. I don't think this forum represents the Left's answer to the Tea Party, but I did gain a much better appreciation for the scope of issues comprising The Agenda(s). And, considering that anyone passionate about an issue was welcome to participate, the assemblies offered an authentic glimpse into everyone's priorities. It felt like a determined effort to take things into account, and to put them in order.

Here are some of the resolutions that emerged from the Progressive Techie Congress Principles and the Transformative Justice and Healing assembly.

Collective Liberation and Radical Mental Health

The main draw of the conference for me was the Icarus Project workshops and the in-person convergence of Icaristas. We took over and transformed a house in a Detroit suburb, and mad dreaming and plotting ensued. The place quickly became a safe space for people to brilliantly navigate the madness of the forums, and it was quite amazing to spend quality time, face to face, with friends and allies. I gravitated to the health tracks, taking up issues of self-care, mutual aid, and wellness. I also caught some great music, ate some amazing homemade food (and not bombs), visited some incredible collective living spaces, and was pretty inspired by everyone who cared and showed up.

The Icarus workshop I attended (there was another that I missed, plus a screening of Crooked Beauty) was eagerly anticipated and well attended – the participants were open and receptive to the core messages, and there was a palpable desire to embrace these issues locally. The session leaders shared their personal stories and modeled peer support as we broke into groups (photos and a highlight reel to be posted shortly). People shared details of their individual and organizational neuro-diversity, and how dysfunctional feedback loops undermine many organizing efforts. The relationship between personal and collective liberation emerged from the workshop and will travel far beyond Detroit's (shrinking) city limits.

Detroit is pretty beat up – we stayed two blocks away from a refinery that belched flames into the night sky – but there are some wonderful people and projects that were really cool to experience. It’s also the only city I have ever been to that has a monument to organized labor.

"If I can't dance, I don't want to be part of your revolution." – Emma Goldman, Radical Feminist

Humane Communications over Human Networks

Today I attended a barcamp-style CrisisCamp in NYC where volunteers from around the world gathered physically and virtually to brainstorm, organize, coordinate, and work to help alleviate the suffering in Haiti (CNN CrisisCamp coverage). When people talk about crowdsourcing relief to this disaster, CrisisCamps around the country helped assemble the sources (and faces) in these mysterious crowds.

Self-Organized Collaborative Production and Action

It was amazing to see these strangers converge, congregating around the familiar communication modalities of wikis, mailing lists, irc, and now twitter and google wave. While these torrential rivers of information are overwhelming, some subcultures are developing strategies for managing and synthesizing these flows. A main organizing hub is http://crisiscommons.org/ , and the hashtags #cchaiti and #haiti are being used to ‘tag’ disparate social media around these efforts.

Today's NYC event drew over a dozen people: techies, community organizers, students, Haitians, UN reps, librarians, union workers, journalists, and beyond. I have been closely following the Ushahidi/Swiftapp project, and their http://haiti.ushahidi.com collaborative filtering curation strategy is in full swing. OpenStreetMap is proving to be an essential piece of infrastructure for mapping data, and the New York Public Library has rescheduled the launch of their amazing new map rectifying tool to help make sense of Haitian geography – shockingly, there are very few maps of Haiti, and their collection might significantly help when overlaid on satellite imagery. This can assist relief workers who need to know what neighborhoods are called, which buildings were where, etc. If you are familiar with Haitian geography, you can help rectify maps here.

The Sahana Disaster Management Project is also looking for python developers to help scale their software.

Strategic Communication Flows

Strategically, I was struck by the asymmetry of the information flows. Many of the efforts seemed to focus on collecting Haitian data and presenting it to Americans and to NGOs working on the ground in Haiti. But not too many Haitians have iPhones…

There seems to be very little focus on creating flows of information back into Haiti – information from the outside world directed to Haitians – or on creating infrastructure for Haitians to communicate with each other. Beyond that, I am not aware of any coordinated efforts to establish non-corporate-mediated, two-or-more-way channels of information between Haitians at home and Haitians in the diaspora.

I was reminded of the recent Iranian uprising – a wonderful moment of microblogging glory, although few Americans appreciated how Iranians were able to receive lifelines of information from outside Iran (like where to find proxy servers), while also using the platform to communicate with each other, within Iran.

I was struck by what an important role traditional mass broadcast media might play in a crisis situation. People on the ground desperately need information. They need to know which symbols indicate that a house has already been searched, where the next food/water/medicine drop will be, and that the biscuits are good, not expired. They also need entertainment and news – à la Good Morning Vietnam. And messages of consolation, emotional support, solidarity, and even song and laughter. Maybe even Bryant Park style movie nights.

Hybrid Networks

Electricity and ISPs are largely down. There are trickles of bandwidth available, and some Haitians have made it onto Facebook and cell phones.

So, what could a hybrid, analog-digital network look like?  Low-power FM? High-speed copy machines? Blackboards?

It's actually not that hard to imagine a hybrid network, composed of people, FM radio, blackboards, printing presses, portable video projectors, cell phones, SMS, and the Internet. Really, whatever is available.

The Earth Institute and UNICEF Innovation have been deploying RapidSMS on the ground in Africa, and they are working in villages where a single cell phone operator brokers vital information to a blackboard in the town square, transforming a cell phone into a mass broadcast device. Reminiscent of the wall newspapers in communist Russia.

And if there were a low-power FM radio station set up, the DJ could presumably retransmit messages coming in over the Internet or the cell phones (kinda the reverse of the activist who retransmitted police scanner transmissions over Twitter at the G20 summit protests).

Haitians would know that if they needed to get a message out to a loved one in Haiti, they could get to the radio station and it might be transmitted back into the local community. Messages would travel over human and technological networks, routed intelligently by humans where technology leaves off.
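
To make that routing idea concrete, here is a minimal sketch of the store-and-forward pattern such a hybrid network implies – all of the names and channels are hypothetical, and the last mile is always human:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipient_area: str  # a neighborhood or town, not a street address
    body: str

@dataclass
class Channel:
    name: str  # e.g. "FM radio", "SMS", "blackboard", "courier"
    up: bool   # infrastructure may be down at any moment

def route(msg: Message, channels: list) -> list:
    """Hand the message to every channel that happens to be up.
    Delivery is finished by people: a DJ reads it on the air,
    a neighbor copies it onto the blackboard."""
    available = [c.name for c in channels if c.up]
    # If nothing electronic is up, someone physically carries it.
    return available or ["courier"]

channels = [Channel("FM radio", True), Channel("SMS", False), Channel("blackboard", True)]
print(route(Message("Brooklyn", "Port-au-Prince", "We are safe."), channels))
# -> ['FM radio', 'blackboard']
```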

What would the programming on this radio station look like? They could have hourly news and announcements, read out community messages submitted by listeners, convey messages of condolence and support from the outside world, play music, pray, host talk radio and "call-in" shows – anything, really. Most importantly, this radio would be locally produced, with the local community deciding what to play. There is a precedent for local radio: KAMP, in the Astrodome after Katrina. The station was set up with the help of the fantastic Prometheus Radio Project volunteers, though authorities tried to shut down the "pirate" lifeline.

Turning Messages in Bottles into Skywriting

Today I met someone who is working with local Haitian communities in NYC. We are both very concerned with how CNN has dominated the coverage, frittering away its 24/7 news hole on looping segments and circling like a vulture waiting for violence to erupt. We have to understand the danger of a single story.

We were both very interested in creating alternate channels of communication for Haitians to speak for themselves and engage in dialogue with their relatives in the diaspora.

Here is one project we could run over the kind of hybrid analog-digital/human-machine sneakernet described above.

Haitians could send video messages in a bottle. The community here could gather to watch and reply to those videos. Say the videos and the replies were limited to 3 minutes each. The original message and the reply could be bundled and sent back to Haiti – not unlike sending a letter before the postal service: you would give it to someone heading to the recipient's town.

Initially, a few Flip cameras on the ground in Haiti, with the video transmitted home over the Internet, or even carried back to the States by sending the memory cards home with a courier. Eventually, when bandwidth begins to open up, we might be able to imagine a live, synchronous stream. But before then, we can imagine asynchronous video messages being sent back and forth between Haiti and Haitian communities in the diaspora.
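
As a sketch, the exchange format itself could be tiny – the 3-minute cap and the bundling of message and reply come from the proposal above, while the field names are mine and purely hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

MAX_SECONDS = 180  # the proposed 3-minute limit per message

@dataclass
class VideoPostcard:
    origin: str        # e.g. "Port-au-Prince"
    destination: str   # e.g. "Brooklyn"
    video_file: str    # a file on a memory card, or a modest upload
    duration_s: int
    reply: Optional["VideoPostcard"] = None  # bundled for the return trip

    def bundle_reply(self, reply: "VideoPostcard") -> None:
        """Attach the reply so both travel back together, like a letter
        handed to whoever is heading to the recipient's town."""
        if reply.duration_s > MAX_SECONDS:
            raise ValueError("replies are capped at 3 minutes")
        self.reply = reply
```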

On the Haitian end, the replies could be projected and played back to groups gathered around projectors at night. On our end, distribution is trivial, and the message might easily reach the precise person it was intended for through community social networks. A Haitian could send a video message in a bottle to Brooklyn, and it would not take long for their relatives to know they were safe. Replies could include messages of hope, compassion, and support.

Most importantly, independent lines of communication could be opened. As a secondary benefit, if the messages were disseminated publicly (say, on YouTube), secondary waves of helpers could create journalistic highlights, extract crucial data to feed the informatics systems (sourced to the originating testimony), and we could start hearing each other's voices.

At the moment, our aid feels like we are tossing a homeless person a few dollars while averting our gaze, when what they really need is for us to look them in the eye, recognize their humanity, and have a conversation with them. We are electronically strip searching the people of Haiti, when (forgive the Avatar reference) we need to see each other.

Theory and Practice

A few closing thoughts to this already rambling post.

I attended the event for many reasons, including:

  • My research interests in the politics of memory, information flux, distributed cognition, collaborative production, and collective action.
  • A seminar I am participating in this Spring that is taking up the themes of collective memory, pedagogy, digital media, and trauma (using the 9/11 Project Rebirth as a point of departure, but conceptualizing responses to collective trauma ranging from Katrina, to environmental refugees, and beyond).
  • Because the situation is horrifying and desperate, and I have the sinking feeling that no one has a handle on how to help the Haitians. Worse, I fear that many are already beginning to view this event as a rhetorical chip, angling to advance their own agendas on the wave of this shock.

The importance of mass media in creating a sense of (imagined) community is well theorized in communications studies.  Haiti’s physical infrastructure is shattered, but we can very quickly reconstruct its communications infrastructure and help them reconstitute their sense of identity and community.

Cultural theorists have criticized the pacifying power of mass media – but the UN is forecasting a sharp increase in violence, riots, and rape – if ever there was a time to distract and pacify the populace – or should I say, to provide people with a constructive channel to express and vent their energies?

If we want to turn this disaster porn on its head, we should just give Haitians the IP rights to all the images pouring out of their country right now. The profits would be enough to rebuild the country ten times over.

The life-saving importance of information should not be underestimated – the only thing more important than food, water, or medicine is hope.

Update: This brain[storm/dump] has now been transformed into an actual project proposal at the Crisis Commons wiki – the Open Solace Haiti Project, whose first priority is the Haitian Video Postcard Exchange Network.

[Special thanks to Mar Cabra and Rasmus Nielsen for being a sounding board for some of these scattered ideas; John Durham Peters, whose brilliant thought broadcasts on Broadcasting and Schizophrenia induced my thinking; and Levanah and Stan Tenen and the work of the Meru Foundation, whose spiritual teachings helped shape these ideas.]

When Lessig was in Disneyland…

I had a fun idea for a new Free Culture campaign last spring, but I haven't gotten around to blogging about it until now.

LET MY CULTURE GO!

  • Walt Disney: Let my cartoons go!
  • Jack Valenti: Let my movies go!
  • Rupert Murdoch: Let my news go!
  • Steve Jobs: Let my iPhone go!
  • Jeff Bezos: Let my Kindle go!
  • etc, etc.

I know it would be more consistent to substitute 'our' for 'my', but I really want to evoke the biblical/mythological imagery around freedom and liberation, while simultaneously calling these CEOs out for the pharaohs/slavemasters that they are (we used to have another term for 360 deals…). The campaign simultaneously inverts the framing of copying as piracy and takes up the mantle of liberators.

As Nina Paley rigorously demonstrates, there are many parallels between the struggles against Human Property and Intellectual Property. Just as we once thought it was morally acceptable to own humans, can we imagine a future where the ownership of ideas is viewed with similar disgust and incredulity? What are the best ways to remind people that Copying is Not Theft?

Anyway, the signal-to-noise ratio is quite high, and it will definitely fit on bumper stickers and T-shirts…

Any graphic designers want to donate some skillz?

Selling shovels to News diggers

I had a fun idea tonight (patent pending) that occurred to me after reading about newspapers' accelerating collapse, Talking Points Memo's membership experiment, and the recent report on reconstructing journalism.

I can't recall ever reading about or debating my new journalistic business model, and I'm not sure if it's crazy, brilliant, or evil.

Has anyone ever thought about charging newsreaders to express themselves?

Micropayments for comments, not content?

Seriously, how wild would that be? Pay to comment. Maybe pay to vote, rate, or like/dislike. You could even sell different-priced foods for people to throw at the journalists (and at other users), provoking food fights in the newsroom. People would pay to Mad Men themselves, if you let them customize their avatars so they could rant in style.

Now, I recognize it might sound like a step backwards, or slightly anti-democratic, but not long ago there was no commenting at all. And folks can pick themselves up and have a conversation anywhere on the Internet if they want to. But you are offering the readers the spotlight of attention… kinda like advertising! The dating sites have finely tuned the market dynamics of charging users to communicate. Would these comment stamps reduce or increase spam?

Maybe the scales are all wrong – it's probably something like 1% of readers who ever participate, but if fashion (and Flickr, and Second Life) is any indication, people dispose of plenty of their income expressing themselves in public.

So, Mr. Murdoch, tear down this paywall. Everyone knows the real money comes from the souvenir and concession stands. It's better than free.

Reconstruction time again

This week the j-school was abuzz with the conversation successfully provoked by the publication of a detailed, comprehensive report, complete with recommendations, on how to save the endangered species of professional journalists.

One of the report’s two primary authors is my professor Michael Schudson, a thoughtful scholar and a great teacher who is eminently approachable for advice. My friend Dr. C.W. Anderson was the research assistant on the project, and I know he worked pretty hard to make this happen, though he didn’t go on a world tour with the authors.

The report was solid, and it managed to gain a lot of attention and stir up a bit of a ruckus. The recommendations seemed reasonable to me, though not quite as radical as I would have hoped…

I have been involved in quite a few conversations around the future of journalism this year, and while there has been a great deal of conversation around how the forms of organization around journalistic production are changing, there has been very little talk about how what’s being produced is changing too.

I am reminded of Bob Stein's predictions about the Future of the Book. One of his central riffs is his epiphany that the digital book is much less about ebooks and multimedia, and much more about a shift away from the book as a static, finished, complete object. He imagines a new emergent form in perpetual beta, with multiple authors, around which revisions, annotations, and communities form. Any of his talks from the last few years probably picks up on this theme.

While many journalists are talking about producing articles using new media forms, the discussions remind me a bit of the early days of cinema, when they used to film plays.

I'm imagining a shift in journalism towards interactive storytelling, cumulative aggregation, and distributed collaboration. We have begun to see hints of experiments along these lines in projects like Times Topics, Swiftapp, and Mapping Controversies, but this NPR project profiled last year in CJR really hits the mark: So Cool: How an economic weather map changed the climate. I think these strategies might easily apply to prose, not just to data, interactive graphics, and maps.

Comparing journalism with education, will journalism only use new media to create the equivalent of a jazzed-up, one-way lecture? What does interactive storytelling even look like? How will we teach the next generation of journalists to create works that are designed to be picked up, re-appropriated, and re-mixed?

With these ideas in mind, I would have loved to see some recommendations in this report designed in anticipation of this future, not merely to prop up yesterday's decaying models. The patchwork of the future can best be supported by encouraging greater transparency, open licensing, and a culture of collaboration. What about attaching open licensing mandates to this foundation support? Mandating the sharing of primary sources? Teaching the journalists of the future to share, and to learn from their readers? These aren't all policy recommendations, but I think they need to be thought through and woven into this conversation.

PS – While the future of journalism may be difficult to discern, the future of the newspaper suddenly seems pretty clear 😉

The Interdisciplinary Kissing Problem

Last week I participated in the architecture school's visualization seminar and was treated to a mind-blowing presentation by Tony Jebara, a Columbia computer scientist. Jebara is a young associate professor who researches machine learning, graphs, and visualization, and is also the chief scientist at CitySense.com. His lab "develops novel algorithms that use data to model complex real-world phenomena and to make accurate predictions about them." They also work on improving the readability of massive volumes of multi-dimensional data, and are currently focusing on making sense of networks of people and places (take a wild guess who else is interested in their work).

CitySense is an application that runs on mobile devices and from their location data…

…builds a network of real places (like the World Wide Web) and a network of real people (like FaceBook) in order to apply online techniques like smart search, recommendation, collaborative filtering, marketing, user clustering and prediction. Today, Sense Networks tracks almost 10,000,000 mobile phones and location devices through partnerships with carriers, device manufacturers, application developers, taxi fleets, and its own subscribers.

The system attempts to discern friendships, as well as an establishment's clientele, based on calculations of users' physical proximity to each other, combined with a growing semantic understanding of places (e.g. all the Starbucks in the city probably have more in common with each other, in some sense, than they do with their geographical neighbors). According to Jebara, the system is already able to infer the types of people who frequent certain nightclubs – students, hipsters, professionals, bankers, tourists – based purely on an analysis of these overlapping networks of millions of human and non-human actors. The cell phone carriers have also used this kind of analysis to reduce "churn" – if one person leaves the network, the carriers respond rapidly by offering promotions to the defector's close network, to seal the leak before the damage spreads.
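
I can only guess at the internals, but the intuition is easy to sketch: treat each person as the bag of places they visit, and score pairs by overlap. A toy version in Python (hypothetical data, and ignoring the time-of-visit dimension the real system presumably weighs heavily):

```python
from collections import Counter

# Hypothetical location logs: person -> places visited (with repeats).
visits = {
    "ana":  ["cafe_9th", "club_x", "cafe_9th", "library"],
    "ben":  ["club_x", "cafe_9th", "cafe_9th", "gym"],
    "carl": ["gym", "office", "office"],
}

def cooccurrence(a: str, b: str) -> int:
    """Count overlapping place-visits - a crude proximity score."""
    ca, cb = Counter(visits[a]), Counter(visits[b])
    return sum(min(ca[p], cb[p]) for p in ca.keys() & cb.keys())

print(cooccurrence("ana", "ben"))   # 3 -> plausibly "friends" by this metric
print(cooccurrence("ana", "carl"))  # 0
```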

Incidentally, the matching algorithms that Jebara's lab uses are from the same family of algorithms that "power" Google AdWords, so at least for the time being, despite the captivating sacred geometry, I am sticking with human recommendations 😉

I left the talk with many questions and reflections:

  • I understand that calculating similarity is the special sauce in any recommendation implementation, but I am somewhat troubled by the conflation of proximity with similarity. Consider the Upper East Side nanny, or the parole officer and the parolee (although Hollywood continues to try to convince us of the identity relation between detectives and criminals) – is something essential being reduced or discarded when we reduce similarity to proximity?

    Is there any room in this model for the “kind” of relationship that motivates the physical proximity? Perhaps something similar to Yahoo/Berkeley TagMaps? Their work requires explicit human input, and is already anticipating feedback effects and users “gaming” the system. Perhaps the network effects CitySense models will disregard much of this “noise,” but, at what social/cultural/political expense?

  • Similarly, I don't really understand how this algorithm justifies the "distance" calculation. The multi-dimensional spaces they are flattening are incommensurate. Unlike the dimensions of space-time, the spaces described are composed of values, possibly expressing orders of worth in units that cannot be converted into one another. What is the Pythagorean theorem really measuring here? (See the sketch just after this list.)
  • How can we do a better job of offering users more than a 2D scalar representation? Provide them with more dials and knobs so they can customize and tune their view? How can we craft engaging environments that encourage interactive storytelling and help users carve narratives out of databases?
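
A toy illustration of the worry in the second bullet above: with incommensurate units, whichever dimension carries the biggest raw numbers silently dominates a Euclidean distance. The features here are invented purely for the example:

```python
import math

# Two made-up dimensions in incommensurate units: (income in $, visits/week).
a = (52_000, 3)
b = (51_000, 9)
c = (52_000, 4)

def euclid(p, q):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

print(euclid(a, b))  # ~1000.0 -> entirely driven by the dollar axis
print(euclid(a, c))  # 1.0     -> the visit axis barely registers
```

Rescaling (z-scores, say) fixes the arithmetic, but not the deeper question of whether the dimensions were ever commensurable to begin with.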

I think it is really important to understand these assumptions, and possibly to play with them. Would it make a difference if users had access to their own raw data, and understood the digital footprints they were leaving? It seems quite important that users understand the inner workings of these systems so they can take full advantage of them, and also not be exploited by them.

As Computer Science takes on more of the characteristics of Architecture and Urban Planning, we need to turn to questions of desire, intent, and purpose.

What impacts do we hope that CitySense will have on social life? Not in a naive deterministic sense, but in terms of catalyzing, facilitating, and favoring certain social interactions over others. What hypotheses can we form about the kinds of impacts a system like this might have on an urban space? Amazingly, these hypotheses might be testable. How might this system be tuned to increase the odds of those outcomes, once we articulate the mission/purpose of the design?

Finally, what about the promise a tool like this might have for improving the public good? At work we are partnering with the Earth Institute, the School of Public Health, and others on third-world development projects that might benefit greatly from the application of these visualizations (RapidSMS, a project tracking malnutrition in Africa, comes to mind).

How can we work on bridging the organizational digital divide, and get these kinds of tools – the server side, not the client side – beyond the hands of corporations and letter agencies, and into the hands of social scientists, architects, journalists, and educators?

Can we mobilize the student labor to benefit the public good?

Interview: Christopher Mackie on Knight’s Hyperlocal Gambit

Last week I reflected on the Everyblock.com acquisition. Since then, Knight's journalism program director has blogged about their perspective on the sale, and some great conversations have continued. I have also had a wonderful opportunity to discuss the purchase with Christopher Mackie, a program officer at the Mellon Foundation. Chris is the Associate Program Officer in the Research in Information Technology program and is closely involved in Mellon-funded software initiatives.

Here are some excerpts from our conversation:

JB: Thanks so much for taking the time to share some of your thoughts on the recent purchase of Everyblock. As you know, Everyblock is a foundation-sponsored, open-source journalism startup that was recently acquired by msnbc.com. Even though the Knight Foundation mandated that all the software it funded be released under an open (GPLv3) license, the future openness of this application is now uncertain. As an important funder of many valuable open source software projects, I am wondering if you could share your reactions to this news. How do you feel about the outcome? Did the deal take you by surprise?

CM: Hi Jonah – good to talk with you! Before we start, let me be clear about a couple of things. First, I don’t speak for the Mellon Foundation on this, so all I can share are my own views. Second, I’m by no means the most knowledgeable person around when it comes to intellectual property issues. In fact, I can find several people who know more than I do without even leaving the building at Mellon. What I do have is a particular perspective on IP issues that has been developed in large part from my work with our information technology program. I hope that my perspective is useful, but I wouldn’t want anyone confusing it with either an official Mellon perspective or some sort of consensus view among experts. As far as I can tell, consensus only exists among IP experts on issues that no one cares about.

That said, as I follow the conversation, what appears to be happening with Everyblock is that a number of people are seeing for the first time some issues that have been seen before in other parts of the software space. In the process of thinking through the implications of those developments, they’re reinventing old arguments, most of which are insufficiently nuanced to be valid. Eventually, they’ll work it out, but right now, many people are still looking for too-simplistic answers.

JB: This moment is such a great learning opportunity to teach grantmakers and journalists some really important lessons about Intellectual Property, and the complexities of Open Source software, community, and culture – is there anything specific you think we can learn from this transaction?

CM: Rather than try to parse the many issues individually, let me just suggest a couple of basic principles that I use when I’m trying to advise projects on licensing issues:

First, “the context is more important than the license.” The debate over BSD/GPL tends to take place at a very abstract, ideological level. This is the wrong level: when it comes to licensing, I believe that you really need to get down and grub in the dirt. Licensing decisions are almost always made better when they’re made in a carefully contextualized fashion.

The single most important contextual dimension I know concerns the “organizational complexity” of the product. That’s my own, made-up term to describe the need to integrate your project with other organizational systems, human and software. Organizationally complex software requires significant adaptation or customization in most installations – which implies the need for significant vendor involvement in many installations. A good example of an organizationally complex system is something like a financial system, which tends to have to connect to all sorts of other software and to interact with all sorts of human workflows. Good examples of organizationally simple software are things like a Web browser or a word processor, which ought to work out-of-the-box without any customization or integration.

If you have an organizationally complex product, BSD licenses tend to work better than GPL. Why? BSD licenses don’t scare off the vendors who have to poke around the insides of the product in order to support it, and who worry that their private IP may be compromised by an accidental contact with a GPL’d product’s innards. I’ve seen the arguments about whether this is actually a valid concern, by the way, and I’m not particularly invested in learning the right answer, if there even is one. As long as vendors believe or fear it to be true – and many do – then it might as well be true. Without vendors, it’s hard for an organizationally complex project to thrive, so BSD tends to win out in those sorts of projects.

A second dimension concerns the degree of “market power” held by the users. Market power depends on the ability of users to recognize themselves as having shared interests and then to act on those shared interests. A user community that has market power can issue a credible threat to punish a misbehaving vendor; one lacking market power, cannot. This often isn’t a simple determination; for instance, consider Mozilla. At the core of the Mozilla community, as with most open source communities, is an intense, dedicated group that sees itself as having shared interests and clearly has the will to punish someone who attempts to misuse the Mozilla IP. But do they have the ability? After all, they’re only a tiny fraction of all Mozilla users. The rest are a widely distributed, diffuse group that would never imagine themselves as having much in the way of common purpose, beyond the desire to have a free Web browser. Which constituency matters more in calculating market power? It almost certainly depends on the context.

Some people object to the phrase “market power,” preferring terms like “strength of community” or “trust.” I’m not too worried about what one calls it, but I will say this: once you get past the rhetoric, it mostly boils down to the community’s ability to deliver a credible threat to punish a malfeasant vendor. If the user community ceases to value the project enough to want to defend it against vendor malfeasance, or ceases to be able to act together effectively to deliver that defense, then, however much they value the project individually, it is unlikely to stay open no matter the license.

There are other dimensions to think about, too; for instance, a project having multiple vendors is safer than one with only a single vendor, or none, because non-colluding vendors tend to act in ways that keep each other well-behaved. But those are the biggest two, in my experience so far.

Earlier, you brought up the Sakai and OpenCast projects, both of which have been funded by us (and by other foundations, such as the William and Flora Hewlett Foundation, as well). I believe that these two characteristics are why Sakai and OpenCast, as well as other community source projects, are able to use BSD-style licenses (they actually use the Educational Community License, or ECL, which is almost-but-not-quite the Apache license). Community source software projects produce organizationally complex products deployed by a coherent community of institutions willing and able to exercise market power if needed. For instance, the community of higher education institutions seems to have no trouble understanding their common interest in keeping Sakai’s IP open, even if they’re not Sakai users themselves–and as a group, they seem to have the will and ability to punish vendors that attempt to misbehave. Most vendors sell more than one product into these institutions, so they stand to lose more than they can gain from bad behavior on any single project like Sakai. The result: there is virtually no evidence of significant vendor malfeasance in any of the community source projects, despite the use of a license that in theory allows any vendor to close the code at any time. The closest you can find is the Blackboard patent dispute—which is a challenge to the ownership of the IP, not its licensing, and in which Blackboard has been careful to steer clear of any direct threat to the Sakai community. But would every vendor’s good behavior continue if the community stopped caring about Sakai? I seriously doubt it.

On the other hand, if you have a product which is organizationally simple, as well as having a relatively powerless user community, then get thee to the GPL, because the temptations to steal and close the code just become too great for some vendors to resist. We’ve seen some examples of that, recently, too. Still, don’t believe that the GPL will protect you if your community cannot or will not. If the community is weak enough, nothing can really protect you.

Second, “IP ownership trumps IP licensing.” Some of the commentators on Everyblock that I have read so far are circling around this point, but none has yet followed the logic all the way. All the debate over licensing tends to obscure the reality that final power lies in ownership, not licensing. For a surprising number of situations, licensing is little more than a red herring.

If I own the code, I can issue you a GPL, someone else a BSD, and yet another license to a third party–take a look at the Mozilla licensing scheme sometime, for an example. If I’m also responsible for updating the code, I can change the license to all of you at any time simply by issuing a new version. Sure, you can still use the old version under the old license, but if I really want to make it tough for you to keep using the old version, there are ways. Finally, as you’re seeing with Everyblock, when someone owns the code privately, there’s nothing that prevents someone else from buying the code – often by buying the firm itself – and changing the licensing terms.

I have no insight into MSNBC’s plans for Everyblock. Maybe they’ll close the code; maybe not. Maybe they’ll keep something open but close the commercial services they build on top of it – I don’t know. As your commentators have noted, no one seems to know – and that’s part of the problem with privately owned but open-licensed code. You just never know.

That’s one reason why I tend to be wary about the “commercial OSS” model, no matter what license it uses. In many commercial OSS projects that I’ve seen, even the GPL is effectively just a cover for what is to all intents and purposes a closed code-base, because the owner/vendor is the only entity on earth that has any realistic likelihood of supporting or extending or developing the code further. Ask someone in the MySQL community how protected they feel by their license – or ask the people using Zimbra how they expected to fare if Microsoft bought Yahoo. It’s not about whether the current owner is good, bad, or ugly; it’s about the fact that you can never know whether it will be the same owner tomorrow. That’s a lot of uncertainty on which to base a mission-critical technology choice.

JB: So, given the diverse range of contexts you describe, what specific strategies have you deployed to mitigate these risks?

CM: Good question – and it’s important to emphasize the word “mitigate,” because there are no guarantees and there’s no such thing as absolute effectiveness. One thing we do in our program is to use IP agreements (a contract with the owner of the code to be developed) that require any transfer of ownership to be to an entity which must also agree to the terms of our IP agreement. In a sense, we make the ownership viral, whether or not the license is viral. That’s not a perfect solution, but it appears to be working for us so far.

It also helps that we make our grants to non-profit organizations, which can’t be bought the same way you can buy a private or publicly held firm. When for-profits are involved in our grants, which sometimes happens when grantees decide to contract with for-profit developers, my program (Mellon’s Program in Research in Information Technology) has always required that the non-profit be the IP owner. We are not alone in this; for instance, when several major technology corporations—all for-profits—decided to share and protect some of their own intellectual property in an open environment, they didn’t trust it to a for-profit, but instead created the Eclipse Foundation, a non-profit that owns the Eclipse Project IP. Ditto the Mozilla Foundation.

Still, it bears repeating that just putting your IP into a non-profit mindlessly doesn’t eliminate the risk, because it matters how the non-profit is structured and governed: nothing says a non-profit can’t be malfeasant, too, if in somewhat different ways.

JB: Do you think that the Knight Foundation was swindled? Did they get outfoxed by msnbc.com, or do you think they are happy with this outcome?

CM: I have no knowledge about what the Knight Foundation intended – has anybody bothered to ask them? [ed note: this conversation took place before Knight made a public statement] I think it would be foolish simply to assume that the grant makers have been outfoxed by this development: it may have been exactly what they wanted, or just a risk they decided beforehand that it was worthwhile to run. Keep in mind, too, that MSNBC hasn’t said or done anything about closing the code so far. Even if the Knight Foundation did want perpetual openness and the strategy wasn’t perfect, there’s still a chance that they’ll get what they wanted.

All that’s really happened here is that the sense of security held by at least some members of the Everyblock community has been shaken by the purchase news. But it was always a false sense of security; at this moment, as far as I can tell, nothing objective about the openness of the project has actually changed.

JB: Do you have any closing thoughts about this deal, or what you think grantmakers and open source advocates can learn from it?

CM: If Everyblock serves to help some members of the openness community to get past their ideological blinders and recognize that IP ownership and licensing decisions are subtle challenges with relatively few simple, definitive answers, it will have done some good. After all, even the best source code is relatively ephemeral, but we can hope that such wisdom will last forever.

JB: Thanks so much for your time and wisdom. I know a lot of people who were quite surprised by this turn of events, and it feels like we all need a crash course in IP law /and/ sociology to navigate the intricacies of this political economy. Even veteran lawyers and free software evangelists are often confused by many of these complexities. I really hope that this case and your analysis will better inform future work of this type. Good luck keeping it open (and real)!

CM: Thanks very much. I hope what I had to say is useful.

Freedom of the (hyperlocal) Press?

Heh. I enjoyed a nice long weekend off, and a few of my worlds collided while I was away…

This weekend msnbc.com snatched up the Knight Foundation-funded everyblock.com project, and now a bunch of people I know – from journalism, free software, law, and software development – are all talking about the ethics and implications of choosing different Free/Open Source licenses for grant-funded projects and experiments in sustainable journalism 😉

The Knight Foundation has been funding innovation in technology and journalism for a few years, and lately has been mandating open licenses for all the code and content it sponsors. Knight is not alone: Mellon, Hewlett, OSI, NSF, NIH, and other grantmakers have all begun to encourage that the IP they fund be as open as possible (to varying degrees). Seems obvious. If you want to maximize your philanthropic ROI, make sure the future can extract the full potential of the work you fund – not be shackled, stifled, or duped by the misapplication of intellectual property.

I continue to be hopeful that pressure from funders might represent a tipping point for openness. Many organizations need bunches of carrots to overcome their knee-jerk institutional momentum to hoard – even if sharing costs them nothing (in dollars, labor, or resources, although sometimes transparency can take its toll on egos).

But is all openness created equal? No way am I going to attempt to recreate the great BSD-GPL wars in this post, but I will say that it stings every time I hear someone accuse the GPL of being viral (are vaccines viral?). I also wince every time I see a vibrant open source community make an argument against the GPL – I have seen this happen around Sakai, OpenCast, and even lately around Plone and its plugins.

[From my perspective, it's the purportedly unencumbered communities that are really viral, as they continue to ratchet down GPL communities to lowest-common-denominator licenses by whining about how they can't use GPL code (which they can, provided they share alike). But don't take my word for it – ask Zed why he (A/L)GPLs.]

To me, first and foremost, the GPL signals trust. As I understand it, this legal instrument has helped enable institutions and individuals, large and small, to trust each other without fear of being stabbed in the back or taken for a sucker. In the end, the GPL is just a license, and while it has increasingly been taken seriously, enforcement is never fun (except for lawyers, I guess).

Eben Moglen is the founder of the Software Freedom Law Center and also an author of the GPL, but the firm can't officially shill for the GPL. They care enough about freedom to continue to help any open software community in need, but I sometimes wonder how they manage to bite their tongues and not scream "We told you so" or "We warned you." Some of the same communities that have scorned the GPL have had to turn to the SFLC to bail them out when they were attacked by patent sharks. Perhaps the Everyblock story will serve as a cautionary tale, and people will learn to start taking the SFLC's legal advice seriously. I believe that history will show that it was the GPL that ultimately averted Microsoft's monopoly – no license could have accomplished this without the boundless energy and will of the open source developers, but the GPL was the pentagram restraining a very bad actor.

But not everyone sees the world this way, and there are other valid perspectives. In conversations I have had with Jacob Kaplan-Moss (who co-founded Django, alongside Everyblock's Adrian Holovaty), he voiced a strong conviction that transparency, openness, and sharing are better ways to develop software, and that those values ought/need not be legally mandated. He prefers to participate in a community where those values are understood and shared. Some might call his perspective slightly naive (while others might trace these attitudes to the roots of Django and the proprietary journalistic corporation that birthed it), but James Vasile makes a very similar point:

It might be disappointing that MSNBC.com can close-source Everyblock, but we still have the code. If the code is valuable to the community, we can take the last published version and use it as we want. If MSNBC.com trades a healthy free software project for a proprietary development cycle, we've lost nothing, and MSNBC.com has thrown away the most important asset they had – the community behind and around the code.

As for the future of Everyblock, I am still hopeful that rationality will prevail. Everyblock runs on an incredibly sophisticated stack of open software – Python, Postgres (with GIS extensions), and Django (or something very, very similar) – and msnbc.com will not get very far with this software without engaging these communities. In the 21st century, owning code is a liability, not an asset. Sure, they can try to leech and poison the well, but they will meet pretty staunch resistance – trouble hiring programmers, getting their patches accepted, maintaining and upgrading – good luck going it alone. They will end up with the IE of hyperlocal news websites.

I also don't think it's necessarily evil for a corporation to participate in this ecology, or for funders to seed new user interfaces or patterns and then hand off the innovation to capital. Sustainability is really quite complicated, especially as we embark on hybrid economies. And on the open side, it can be difficult for funders to keep software honest.

If Everyblock has a real value right now, it's in the relationships they have forged with the data providers, and the effort they have put into scraping and formatting this data. What we want from them now isn't just an open platform, it's also open APIs, to get at the data they are collecting and harvesting. Code is only one corner of freedom's jigsaw puzzle. Never forget about the data. And I am not really sure what Knight could have done to better protect the future openness or integrity of that data.

O.V. High

I have to thank my friend and colleague Clayfox for comparing (positively) the vibe at this weekend's fabulous Open Video Conference to high school. The optimism, diversity, and composition of the crowd were really inspiring.

In some ways, this conference might as well have been called the "Independent Media" conference – but of course, if it had been, the right people wouldn't have attended. Somehow the organizers managed to attract people involved with every layer of the stack needed to create independent media. Subcultures representing hardware, HTML5, metadata, content, law, production, funders, and more were all there.

To make independent new media, you either need to understand all of these details, or know someone who does.  I don’t think I have ever been in a room with this particular blend of expertise and interests before.

The networking was great, and my office was closely involved in making the education stuff at this conference happen (I have a great job). At the conference we announced the liberation of a great piece of software – VITAL is free! Run, VITAL, Run.

The highlight of the talks had to be Amy Goodman's inspiring speech. I had seen her introduce Chomsky last week, and was left a little bummed out by his talk, since it was blow after blow of what's broken in the world, with very little vision and no call to action. You don't hear too many female preachers, but Goodman has really mastered a hypnotic cadence – speeding up to fit in a lot of ideas, but slowing down for emphasis. Her soundbites are eminently tweetable (Twitter essentially replaced IRC at this conference, and there was an incredibly active backchannel around the #openvideo tag/frequency/channel).

Benkler also opened with fresh material – he has clearly been thinking about journalism in the wake of this year's collapses (and maybe even our CDPC conference?). It is amusing to think that, between Benkler and Moglen (and his metaphorical corollary to Faraday's law), it might be the sociologically inclined lawyers who arrive at a theory of creativity (instead of the cognitive scientists). And Zittrain covered for the missing Clay Shirky, and pulled off a funny and intelligent talk.

There were many other highlights, which I hope to curate once the video is all posted and I have a chance to decompress. I know I should have gone to more talks that I didn't belong at, but I kept getting pulled into great conversations…

Kudos to the organizers for pulling off a small miracle. I've been to many conferences that cost hundreds of dollars to attend and don't even offer lunch. They managed to pull off a beautiful space, food, and even video DJs and an open bar.

I wonder to what degree freeculture's networked proximity to techies and lawyers simplifies some of the logistical nightmares that often plague organizers. It just seems like they are able to organize with relative ease, as the communications media and social capital are intuitive and readily available. Good thing for everyone that they are using their super-powers for the greater good 😉

In terms of the longer term, they were consciously trying to create something bigger than a one-time event. I was impressed by the purposeful scaffolding of the infrastructure meant to sustain this conversation now that the conference is over. Many gatherings only figure out at the event itself that they want to keep talking afterwards. The OVC crew did a great job of setting up and using a wiki, and some sensibly divided mailing lists, to seed a healthy after-party.
