Parabolic Intentions

Mystical traditions depict a singularity in consciousness occurring when all of humanity is united in the same state of mind. Our choices will determine whether we arrive at this state by achieving global peace, or take a detour through another World War. In the limit, our shared reflective awareness is a possible consequence of globalization and has been linked to the promise of world peace.

Meanwhile, Princeton University’s all-but-unheard-of Noosphere project has begun tracking meaningful correlations in random data that suggest an awakening of global consciousness. The project has distributed physical networked “eggs” which generate a steady stream of random numbers. Upon the occurrence of events of global significance, the streams suddenly become a lot less random (actually, immediately before these events, but that’s another mystery). Unprepared even to postulate a mechanism for the correlations they have established, the project minimally suggests that our collective intentions and emotions have the power to influence and affect our physical reality.
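For what it’s worth, “less random” is a quantifiable claim. Here is a minimal, hypothetical sketch (not the project’s actual code, and the egg counts and trial sizes are made up) of how a network of random bit streams could be scored against a fair-coin null hypothesis:

```python
import math
import random

def trial_zscore(bits):
    """Z-score of a block of bits against a fair-coin null (mean n/2, variance n/4)."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / math.sqrt(n / 4)

def network_zscore(egg_trials):
    """Stouffer-style combination of per-egg z-scores into one network-wide score."""
    zs = [trial_zscore(bits) for bits in egg_trials]
    return sum(zs) / math.sqrt(len(zs))

random.seed(1)
# Simulate 10 "eggs", each producing a 200-bit trial during the same second.
eggs = [[random.randint(0, 1) for _ in range(200)] for _ in range(10)]
z = network_zscore(eggs)
# Under the null hypothesis, |z| > 1.96 should occur only about 5% of the time;
# a sustained run of large |z| values is what would count as "less random".
```

A truly random network keeps this score hovering near zero; the project’s claim amounts to saying the score drifts away from zero around significant events.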

A wise mentor of mine thinks we might be able to accelerate this transformation if we all took the simple step of pausing, contemplating, and reflecting every day at noon. Similar to the Play As Being practice I sampled a while back, the personal potency of such a discipline is dramatic. Noon is a convenient time to sync up, but the coarseness of time zones introduces a margin of error. Imagine if large numbers of people welcomed the sun every morning – a wave of transcendence would (en)circle the globe. Some kind of psychic beacon?

The idea that our technologies mirror our realities is common, though contemplating our reflection within these mirrors is less so. Our global communications system is not only the planet’s nervous system, but through computation and representation, it is becoming a 2-way mirror into our collective psyche.

In the past I have appreciated how distributed research has given rise to tools which help aggregate many snowflakes of data into a meaningful snowbank. Flickr and Delicious taught us how to conduct distributed research on photos and hyperlinks, but Twitter has helped popularize aggregation around arbitrary structured data. We are monitoring elections, and each other’s sexual habits. And the data doesn’t even need to be particularly well structured, as this research on the pulse of the nation’s mood demonstrates.

Now that we have glimpsed our own collective moods, can we design the biofeedback loops for us to become collectively aware (in addition to self-aware)? To put this another way, could we learn to actually control the coordinated output of the Noosphere eggs, instead of merely tracking their correlations with our global state?

If we could collectively broadcast one syllable into the universe, what would it be?

Pick a world… any world…

Last week I attended the second half of the US Social Forum – not exactly a conference, but more of a convergence or a process, where 20,000 people gathered in Detroit to build coalitions, alliances, and movements. The World Social Forum began as a response to the World Economic Forum – Why should the power elite be the only ones planning humanity’s future?!?

The USSF web site and the People’s Media Center (made possible by some righteous radical techies, the Design Action Collective, riseup.net, and May First/People Link) should give you a flavor of what the event was all about. But be aware that the streaming video and social media barely scratch the surface of the experience.

The forum is organized around 2-hour-long workshops and over 100 four-hour-long People’s Movement Assemblies. The sessions were in-depth and quite intensive. The format is designed to encourage small-group interactions and for people to connect and get to know each other.

The assemblies were geared around crafting resolutions and actions – I attended parts of the transformative justice and healing PMA, and it was really well facilitated. During the closing ceremony the assemblies synthesized their resolutions, scheduled actions, and asked for commitments of solidarity around their issues.  I don’t think that this forum represents the Left’s answer to the Tea Party, but I did gain a much better appreciation for the scope of issues comprising The Agenda(s). And, considering that anyone passionate about an issue was welcome to participate, the assemblies offered an authentic glimpse into everyone’s priorities. It felt like a determined effort to take things into account, and put them in order.

Here are some of the resolutions that emerged from the Progressive Techie Congress Principles and the Transformative Justice and Healing assembly.

Collective Liberation and Radical Mental Health

The main draw for me to the conference was the Icarus Project workshops and the convergence of Icaristas, in person. We took over and transformed a house in a Detroit suburb, and mad dreaming and plotting ensued. The place was quickly transformed into a safe space for people to brilliantly navigate the madness of the forums, and it was quite amazing to spend quality time, face to face, with friends and allies. I gravitated to the health tracks, taking up issues of self-care, mutual aid, and wellness. I also caught some great music, ate some amazing homemade food (and not bombs), visited some incredible collective living spaces, and was pretty inspired by everyone who cared and showed up.

The Icarus workshop I attended (there was another that I missed, plus a screening of Crooked Beauty) was eagerly anticipated and well attended – the participants were open and receptive to the core messages, and there was a palpable desire to embrace these issues locally. The session leaders shared their personal stories and modeled peer support as we broke into groups (photos and a highlight reel to be posted shortly). People shared details of their individual and organizational neurodiversity and how dysfunctional feedback loops undermine many organizing efforts. The relationship between personal and collective liberation emerged from the workshop and will travel far beyond Detroit’s (shrinking) city limits.

Detroit is pretty beat up – we stayed two blocks away from a refinery that belched flames into the night sky – but there are some wonderful people and projects that were really cool to experience. It’s also the only city I have ever been to that has a monument to organized labor.

“If I can’t dance, I don’t want to be part of your revolution.” – Emma Goldman, Radical Feminist

The Case of the Missing See-Saws

[ed: The past few months I was commissioned to explore a series of rabbit/fox/worm holes, collecting inter-dimensional tales along the way. Now that I have returned home, some typing is long overdue.]

A few months ago I started to wonder when and why children’s playgrounds became so darn safe. It’s no secret that litigation (both the fear and the reality) has slowly been transforming playgrounds into rubber rooms for decades.

In his analysis of Junk Playgrounds, Roy Koslovsky has advanced the argument that the activities children are immersed in are models of the kinds of citizens we want them to become (see Adventure Playground and Postwar Reconstructions in Designing Modern Childhoods). What might children learn from (supervised) danger, and what are they missing when they are excessively insulated and protected?

Without exposure to some risk, how are children supposed to learn to evaluate and take chances, to understand the consequences of their actions, and to grasp the Golden Rule – what goes around comes around? If we don’t provide them with the space to develop and exert their agency and will, are these lessons lost? Can they be adequately taught through simulation?

Against this backdrop, I followed up a lead from a reliable informant (my Dad) and began visiting local playgrounds. I first ventured out on a snow day back in February. The playgrounds were appropriately locked down that day, since apparently the last place we want kids playing in the snow is under controlled supervision. But children weren’t the only thing missing from the playgrounds… I also noticed something else – or, more accurately, didn’t notice something else. I visited half a dozen playgrounds and I didn’t see a single see-saw!

Since then I have been informally asking around, and I am pretty sure the last public see-saw on the island of Manhattan is in a park at 84th and Riverside. There are still a few see-saws left in the South Bronx and the suburbs, but in NYC they are an endangered species.

This got me wondering – what do children learn from see-saws? Without conducting any formal research, but after a few good conversations, I hypothesized this answer. On the physical plane: balance, gravity, and equilibrium. On the social plane: cooperation, friendship, and trust. Heck, the see-saw is the only activity in the playground where kids are necessarily looking each other in the eyes. If you betray someone on the see-saw, playground rules apply. You will learn that what goes around comes around even without the merry-go-round (those disappeared before my time – now that toy was dangerous). And if you don’t eventually learn your lesson on the see-saw, you might find yourself without friends within a few years.

What kinds of effects might we expect from restricting children to hamster tubes overlooking simulated danger? Perhaps none. Or perhaps these attitudes are contributing to the fear, anxiety, restlessness, and behavioral disorders being reported and diagnosed in children at alarming rates.

They came first for the merry-go-rounds, then they came for the see-saws, and soon they’ll come for the swings! If only we could figure out who the capital ‘T’ They are…

http://rutgerspress.rutgers.edu/acatalog/designing_modern_childhoods.html

Humane Communications over Human Networks

Today I attended a barcamp-style CrisisCamp in NYC where volunteers from around the world gathered physically and virtually to brainstorm, organize, coordinate, and work to help alleviate the suffering in Haiti (CNN CrisisCamp coverage). When people talk about crowdsourcing relief to this disaster, CrisisCamps around the country helped assemble the sources (and faces) in these mysterious crowds.

Self-Organized Collaborative Production and Action

It was amazing to see these strangers converge, congregating around the familiar communication modalities of wikis, mailing lists, IRC, and now Twitter and Google Wave. While these torrential rivers of information are overwhelming, some subcultures are developing strategies for managing and synthesizing these flows. A main organizing hub is http://crisiscommons.org/, and the hashtags #cchaiti and #haiti are being used to ‘tag’ disparate social media around these efforts.

Today’s NYC event drew over a dozen people: techies, community organizers, students, Haitians, UN reps, librarians, union workers, journalists, and beyond. I have been closely following the Ushahidi/SwiftApp project, and their http://haiti.ushahidi.com collaborative filtering curation strategy is in full swing. OpenStreetMap is proving to be an essential piece of infrastructure for mapping data, and the New York Public Library has rescheduled the launch of their amazing new map-rectifying tool to help make sense of Haitian geography – shockingly, there are very few maps of Haiti, and their collection might significantly help when overlaid on satellite imagery. This can assist relief workers who need to know what neighborhoods are called, which buildings were where, etc. If you are familiar with Haitian geography, you can help rectify maps here.

The Sahana Disaster Management Project is also looking for python developers to help scale their software.

Strategic Communication Flows

Strategically, I was struck by the asymmetry of information flows. Many of the efforts seemed focused on collecting Haitian data and representing it to Americans and NGOs working on the ground in Haiti. But not too many Haitians have iPhones…

There seems to be very little focus on creating flows of information back into Haiti – information from the outside world directed to Haitians, or infrastructure for Haitians to communicate with each other. Beyond that, I am not aware of any coordinated efforts to establish non-corporate-mediated, two-or-more-way channels of information between Haitians and Haitians in the diaspora.

I was reminded of the recent Iranian uprising – a wonderful moment of microblogging glory, although few Americans appreciated how the Iranians were able to receive lifelines of information from outside of Iran (like where to find proxy servers), and were also using the platform to communicate with each other, within Iran.

I was struck by what an important role traditional mass broadcast media might play in a crisis situation. People on the ground desperately need information. They need to know which symbols indicate that a house has already been searched, where the next food/water/medicine drop will be, and that the biscuits are good, not expired. They also need entertainment and news – à la Good Morning Vietnam. And messages of consolation, emotional support, solidarity, and even song and laughter. Maybe even Bryant Park-style movie nights.

Hybrid Networks

Electricity and ISPs are largely down. There are trickles of bandwidth available, and some Haitians have made it onto Facebook and cellphones.

So, what could a hybrid, analog-digital network look like?  Low-power FM? High-speed copy machines? Blackboards?

It’s actually not that hard to imagine a hybrid network composed of people, FM radio, blackboards, printing presses, portable video projectors, cell phones, SMS, and the Internet. Really, whatever is available.

The Earth Institute and UNICEF Innovation have been deploying RapidSMS on the ground in Africa, and they are working in villages where a single cell phone operator brokers vital information to a blackboard in the town square, transforming a cell phone into a mass broadcast device. Reminiscent of the Wall Newspapers in communist Russia.

And if a low-power FM radio station were set up, the DJ could presumably retransmit messages coming in over the Internet or the cell phones (kinda the reverse of the activist who retransmitted police scanner transmissions over Twitter at the G20 summit protests).

Haitians would know that if they needed to get a message out to a loved one in Haiti, they could get it to the radio station and it might be transmitted back into the local community. Messages would travel over human and technological networks, routed intelligently by humans where technology leaves off.

What would the programming on this radio station look like? They could have hourly news and announcements, read out community messages submitted by listeners, convey messages of condolence and support from the outside world, play music, pray, host talk radio and “call-in” shows – anything, really. Most importantly, this radio would be locally produced, with the local community deciding what to play. There was a precedent for local radio, KAMP, in the Astrodome after Katrina. The station was set up with the help of the fantastic Prometheus Radio Project volunteers, though authorities tried to shut down the “pirate” lifeline.

Turning Messages in Bottles into Skywriting

Today I met someone who is working with local Haitian communities in NYC. We are both very concerned that CNN dominates the coverage, frittering away its 24/7 news coverage on looping segments, and circling like a vulture waiting for violence to erupt. We have to understand the danger of a single story.

We were both very interested in creating alternate channels of communication for Haitians to speak for themselves and engage in dialogue with their relatives in the diaspora.

Here is one project we could run over the kind of hybrid analog-digital/human-machine sneakernet described above.

Haitians could send video messages in a bottle. The community here could gather to watch and reply to those videos. Say the videos and the replies were limited to 3 minutes each. The original message and the reply could be bundled and sent back to Haiti – not unlike sending a letter before the postal service: you would give it to someone heading to the recipient’s town.

Initially, a few Flip cameras on the ground in Haiti, with the video transmitted home over the Internet, or even back to the states by sending the memory cards home with a courier. Eventually, when bandwidth begins to open up, we might be able to imagine a live, synchronous stream. But before then, we can imagine asynchronous video messages being sent back and forth between Haiti and Haitian communities in the diaspora.

On the Haitian end, the replies could be projected and played back to groups gathered around projectors at night. On our end, distribution is trivial, and the message might easily get to the precise person it was intended for through community social networks. A Haitian could send a video message in a bottle to Brooklyn, and it would not take long for their relatives to know they were safe. Replies could include messages of hope, compassion, and support.

Most importantly, independent lines of communication could be opened. As a secondary benefit, if the messages were disseminated publicly (say, on YouTube), secondary waves of help could create journalistic highlights, extract crucial data to feed the informatics systems (sourced to the originating testimony), and we could start hearing each other’s voices.

At the moment, our aid feels like we are tossing a homeless person a few dollars while averting our gaze, when what they really need is for us to look them in the eye, recognize their humanity, and have a conversation with them. We are electronically strip-searching the people of Haiti, when (forgive the Avatar reference) we need to see each other.

Theory and Practice

A few closing thoughts to this already rambling post.

I attended the event for many reasons including:

  • My research interests in the politics of memory, information flux, distributed cognition, collaborative production, and collective action.
  • A seminar I am participating in this Spring that is taking up the themes of collective memory, pedagogy, digital media, and trauma (using the 9/11 Project Rebirth as a point of departure, but conceptualizing responses to collective trauma ranging from Katrina to environmental refugees and beyond).
  • Because the situation is horrifying and desperate, and I have the sinking feeling that no one has a handle on how to help the Haitians. Worse, I fear that many are already beginning to view this event as a rhetorical chip, angling to advance their own agendas on the wave of this shock.

The importance of mass media in creating a sense of (imagined) community is well theorized in communications studies. Haiti’s physical infrastructure is shattered, but we can very quickly reconstruct its communications infrastructure and help its people reconstitute their sense of identity and community.

Cultural theorists have criticized the pacifying power of mass media – but the UN is forecasting a sharp increase in violence, riots, and rape. If ever there was a time to distract and pacify the populace – or should I say, provide people with a constructive channel to express and vent their energies – this is it.

If we want to turn this disaster porn on its head, we should just give Haitians the IP rights to all the images pouring out of their country right now. The profits would be enough to rebuild the country ten times over.

The life saving importance of information should not be underestimated – The only thing more important than food, water, or medicine is hope.

Update: This brain[storm/dump] has now been transformed into an actual project proposal at the Crisis Commons wiki – The Open Solace Haiti Project, whose first priority is the Haitian Video Postcard Exchange Network.

[Special thanks to Mar Cabra and Rasmus Nielen for being a sounding board for some of these scattered ideas, John Durham Peters, whose brilliant thought broadcasts on Broadcasting and Schizophrenia induced my thinking, and Levanah and Stan Tenen and the work of the Meru Foundation whose spiritual teachings helped shape these ideas.]

When Lessig was in Disneyland…

I had a fun idea for a new Free Culture campaign last spring, but I haven’t gotten around to blogging about it until now.

LET MY CULTURE GO!

  • Walt Disney: Let my cartoons go!
  • Jack Valenti: Let my movies go!
  • Rupert Murdoch: Let my news go!
  • Steve Jobs: Let my iPhone go!
  • Jeff Bezos: Let my Kindle go!
  • etc, etc.

I know it would be more consistent to substitute ‘our’ for ‘my’, but I really want to evoke the biblical/mythological imagery around freedom and liberation, while simultaneously calling these CEOs out for the pharaohs/slavemasters that they are (we used to have another term for 360 deals…). The campaign simultaneously inverts the framing of copying as piracy, and takes up the mantle of liberators.

As Nina Paley rigorously demonstrates, there are many parallels between the struggles against Human Property and Intellectual Property. Just as we once thought it was morally acceptable to own humans, can we imagine a future where the ownership of ideas is viewed with similar disgust and incredulity? What are the best ways to remind people that Copying is Not Theft?

Anyway, the signal-to-noise ratio is quite high, and it will definitely fit on bumper stickers and T-shirts…

Any graphic designers want to donate some skillz?

Selling shovels to News diggers

Mad Scientist's UnionI had a fun idea tonight (patent pending) that occurred to me after reading about the Newspaper’s accelerating collapse, the Talking Point Memo’s membership experiment, and the recent report on reconstructing journalism.

I can’t recall ever reading about or debating my new journalistic business model, and I’m not sure if it’s crazy, brilliant, or evil.

Has anyone ever thought about charging newsreaders to express themselves?

Micropayments for comments, not content?

Seriously, how wild would that be? Pay to comment. Maybe pay to vote, rate, like/dislike. You could even sell different-priced foods for people to throw at the journalists (and at other users), provoking food fights in the newsroom. People would pay to Mad Men themselves, if you allowed them to customize their avatars so they could rant in style.

Now, I recognize it might sound like a step backwards, or slightly anti-democratic, but not long ago there was no commenting at all. And folks can pick themselves up and have a conversation anywhere on the Internet if they want to. But you are offering the readers the spotlight of attention… kinda like advertising! The dating sites have finely tuned the market dynamics of charging users to communicate. Would these comment stamps reduce or increase the spam?

Maybe the scales are all wrong – it’s probably something like 1% of readers that ever participate. But if fashion (and Flickr and Second Life) is any indication, people spend plenty of their disposable income expressing themselves in public.

So, Mr. Murdoch, tear down this firewall.  Everyone knows the real money comes from the souvenir and concession stands. It’s better than free.

Reconstruction time again

This week the j-school was abuzz with the conversation successfully provoked by the publication of a detailed, comprehensive report, complete with recommendations, on how to save the endangered species of professional journalists.

One of the report’s two primary authors is my professor Michael Schudson, a thoughtful scholar and a great teacher who is eminently approachable for advice. My friend Dr. C.W. Anderson was the research assistant on the project, and I know he worked pretty hard to make this happen, though he didn’t go on a world tour with the authors.

The report was solid, and it managed to gain a lot of attention and stir up a bit of a ruckus. The recommendations seemed reasonable to me, though not quite as radical as I would have hoped…

I have been involved in quite a few conversations around the future of journalism this year, and while there has been a great deal of discussion about how the forms of organization behind journalistic production are changing, there has been very little talk about how what’s being produced is changing too.

I am reminded of Bob Stein’s predictions about the Future of the Book. One of his central riffs is his epiphany that the digital book is much less about ebooks and multimedia, and much more about a shift away from the book as a static, finished, complete object. He imagines a new emergent form in perpetual beta, with multiple authors, around which revisions, annotations, and communities form. Any of his talks from the last few years probably picks up on this theme.

While many journalists are talking about producing articles using new media forms, the discussions remind me a bit of the early days of cinema, when they used to film plays.

I’m imagining a shift in journalism towards interactive storytelling, cumulative aggregation, and distributed collaboration. We have begun to see hints of experiments along these lines in projects like Times Topics, SwiftApp, and Mapping Controversies, but this NPR project profiled last year in CJR really hits the mark: So Cool: How an economic weather map changed the climate. I think these strategies might easily apply to prose, not just data, interactive graphics, and maps.

Comparing journalism with education, will journalism only use new media to create the equivalent of a jazzed-up, one-way lecture? What does interactive storytelling even look like? How will we teach the next generation of journalists to create works that are designed to be picked up, re-appropriated, and remixed?

With these ideas in mind, I would have loved to see some recommendations in this report designed in anticipation of this future, not merely to prop up yesterday’s decaying models. The patchwork of the future can best be supported by encouraging greater transparency, open licensing, and a culture of collaboration. What about attaching open licensing mandates to this foundation support? Mandating the sharing of primary sources? Teaching journalists of the future to share, and to learn from their readers? These aren’t all policy recommendations, but I think they need to be thought through and woven into this conversation.

PS – While the future of journalism may be difficult to discern, the future of newspapers suddenly seems pretty clear 😉

Wonderful, Wonderful Copenhagen?

In honor of Blog Action Day I’m posting a roundup of my favorite posts relating to climate change and sustainable development.

At work, we are also working closely with the Earth Institute, including setting up the learning environment used in the new master’s program in Development Practice. I have been collecting some fun links on the program’s community site.

tck, tck, tck….

The Interdisciplinary Kissing Problem

Last week I participated in the architecture school’s visualization seminar and was treated to a mind-blowing presentation by Tony Jebara, a Columbia computer scientist. Jebara is a young associate professor who researches machine learning, graphs, and visualizations, and is also the chief scientist at CitySense.com. His lab “develops novel algorithms that use data to model complex real-world phenomena and to make accurate predictions about them.” They also work on improving the readability of massive volumes of multi-dimensional data, and are currently focusing on making sense of networks of people and places (take a wild guess who else is interested in their work).

CitySense is an application that runs on mobile devices and from their location data…

…builds a network of real places (like the World Wide Web) and a network of real people (like FaceBook) in order to apply online techniques like smart search, recommendation, collaborative filtering, marketing, user clustering and prediction. Today, Sense Networks tracks almost 10,000,000 mobile phones and location devices through partnerships with carriers, device manufacturers, application developers, taxi fleets, and its own subscribers.

The system attempts to discern friendships, as well as an establishment’s clientele, based on calculations of users’ physical proximity to each other, combined with a growing semantic understanding of places (e.g. all the Starbucks in the city probably have more in common with each other, in some sense, than they do with their geographical neighbours). According to Jebara, the system is already able to infer the types of people who frequent certain nightclubs—students, hipsters, professionals, bankers, tourists—based purely on an analysis of these overlapping networks of millions of human and non-human actors. The cell phone carriers have also used this kind of analysis to reduce “churn”—if one person leaves the network, the carriers respond rapidly by offering promotions to the defector’s close network to seal the leak before the damage spreads.

Incidentally, the matching algorithms that Jebara’s lab uses are from the same family of algorithms that “power” Google AdWords, so at least for the time being, despite the captivating sacred geometry, I am sticking with human recommendations 😉

I left the talk with many questions and reflections:

  • I understand that calculating similarity is the special sauce in any recommendation implementation, but I am somewhat troubled by the conflation of proximity with similarity. Consider the Upper East Side nanny, or the parole officer and the parolee (although Hollywood continues to try to convince us of the identity relation between detectives and criminals) – is there something essential being reduced or discarded when we reduce similarity to proximity?

    Is there any room in this model for the “kind” of relationship that motivates the physical proximity? Perhaps something similar to Yahoo/Berkeley TagMaps? Their work requires explicit human input, and is already anticipating feedback effects and users “gaming” the system. Perhaps the network effects CitySense models will disregard much of this “noise,” but, at what social/cultural/political expense?

  • Similarly, I don’t really understand how this algorithm justifies the “distance” calculation. The multi-dimensional spaces they are flattening are incommensurate. Unlike the dimensions of space-time, the spaces described are composed of values, possibly expressing orders of worth, in different units that cannot be converted into one another. What is the Pythagorean theorem really measuring here?
  • How can we do a better job representing more than a 2D scalar representation to users? Provide them with more dials and knobs so they can customize and tune their view? How can we craft engaging environments that encourage interactive storytelling and help users carve narratives out of databases?
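The incommensurability worry above can be made concrete with a toy sketch (hypothetical people, features, and units – nothing here is CitySense’s actual model). A Euclidean “distance” over mixed-unit dimensions is an artifact of unit choice: rescaling one axis reverses who counts as similar.

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance over tuples of coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Three hypothetical people described by (café visits per week,
# average distance of the café from home, in meters).
alice = (3, 500)
bob   = (3, 900)
carol = (9, 500)

# With meters on the second axis, the spatial dimension dominates,
# so Alice looks far more "similar" to Carol than to Bob.
assert euclidean(alice, carol) < euclidean(alice, bob)

# Re-expressing the same data with kilometers on the second axis
# flips the ranking: now Bob is Alice's nearest neighbor.
alice_km, bob_km, carol_km = (3, 0.5), (3, 0.9), (9, 0.5)
assert euclidean(alice_km, bob_km) < euclidean(alice_km, carol_km)
```

The usual engineering answer is to normalize each axis before measuring, but normalization is itself a judgment about how much each dimension “counts” – which is exactly the question the Pythagorean theorem quietly papers over.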

I think it is really important to understand these assumptions and possibly play with them. Would it make a difference if users had access to their own raw data, and understood the digital footprints they were leaving? It seems quite important that users understand the inner workings of these systems so they can take full advantage of them, and also not be exploited by them.

As Computer Science takes on more of the characteristics of Architecture and Urban Planning, we need to turn to questions of desire, intent, and purpose.

What impacts do we hope that CitySense will have on social life? Not in a naive deterministic sense, but in terms of catalyzing, facilitating, and favoring certain social interactions over others. What kinds of hypotheses can we form about the impacts a system like this might have on an urban space? Amazingly, these hypotheses might be testable. How might the system be tuned to increase the odds of these outcomes, once we articulate the mission/purpose of the design?

Finally, what about the promise a tool like this might have for improving the public good? At work we are partnering with the Earth Institute, the School of Public Health, and others on third world development projects that might benefit greatly from the application of these visualizations (RapidSMS, a project tracking malnutrition in Africa, comes to mind).

How can we work on balancing the organizational digital divide, and get these kinds of tools – the server side, not the client side – beyond the hands of corporations and letter agencies and into the hands of social scientists, architects, journalists, and educators?

Can we mobilize student labor to benefit the public good?

Interview: Christopher Mackie on Knight’s Hyperlocal Gambit

Last week I reflected on the Everyblock.com acquisition. Since then, Knight’s journalism program director has blogged about their perspective on the sale, and some great conversations have continued. I have also had a wonderful opportunity to discuss the purchase with Christopher Mackie, Associate Program Officer in the Mellon Foundation’s Research in Information Technology program, who is closely involved in Mellon-funded software initiatives.

Here are some excerpts from our conversation:

JB: Thanks so much for taking the time to share some of your thoughts on the recent purchase of Everyblock. As you know, Everyblock is a foundation-sponsored, open-source journalism startup that was recently acquired by msnbc.com. Even though the Knight Foundation mandated that all the software they funded be released under an open (GPLv3) license, the future openness of this application is now uncertain. As an important funder of many valuable open source software projects, I am wondering if you could share your reactions to this news? How do you feel about the outcome? Did the deal take you by surprise?

CM: Hi Jonah – good to talk with you! Before we start, let me be clear about a couple of things. First, I don’t speak for the Mellon Foundation on this, so all I can share are my own views. Second, I’m by no means the most knowledgeable person around when it comes to intellectual property issues. In fact, I can find several people who know more than I do without even leaving the building at Mellon. What I do have is a particular perspective on IP issues that has been developed in large part from my work with our information technology program. I hope that my perspective is useful, but I wouldn’t want anyone confusing it with either an official Mellon perspective or some sort of consensus view among experts. As far as I can tell, consensus only exists among IP experts on issues that no one cares about.

That said, as I follow the conversation, what appears to be happening with Everyblock is that a number of people are seeing for the first time some issues that have been seen before in other parts of the software space. In the process of thinking through the implications of those developments, they’re reinventing old arguments, most of which are insufficiently nuanced to be valid. Eventually, they’ll work it out, but right now, many people are still looking for too-simplistic answers.

JB: This moment is a great opportunity to teach grantmakers and journalists some really important lessons about intellectual property, and the complexities of open source software, community, and culture – is there anything specific you think we can learn from this transaction?

CM: Rather than try to parse the many issues individually, let me just suggest a couple of basic principles that I use when I’m trying to advise projects on licensing issues:

First, “the context is more important than the license.” The debate over BSD/GPL tends to take place at a very abstract, ideological level. This is the wrong level: when it comes to licensing, I believe that you really need to get down and grub in the dirt. Licensing decisions are almost always made better when they’re made in a carefully contextualized fashion.

The single most important contextual dimension I know concerns the “organizational complexity” of the product. That’s my own, made-up term to describe the need to integrate your project with other organizational systems, human and software. Organizationally complex software requires significant adaptation or customization in most installations – which implies the need for significant vendor involvement in many installations. A good example of an organizationally complex system is something like a financial system, which tends to have to connect to all sorts of other software and to interact with all sorts of human workflows. Good examples of organizationally simple software are things like a Web browser or a word processor, which ought to work out-of-the-box without any customization or integration.

If you have an organizationally complex product, BSD licenses tend to work better than GPL. Why? BSD licenses don’t scare off the vendors who have to poke around the insides of the product in order to support it, and who worry that their private IP may be compromised by an accidental contact with a GPL’d product’s innards. I’ve seen the arguments about whether this is actually a valid concern, by the way, and I’m not particularly invested in learning the right answer, if there even is one. As long as vendors believe or fear it to be true – and many do – then it might as well be true. Without vendors, it’s hard for an organizationally complex project to thrive, so BSD tends to win out in those sorts of projects.

A second dimension concerns the degree of “market power” held by the users. Market power depends on the ability of users to recognize themselves as having shared interests and then to act on those shared interests. A user community that has market power can issue a credible threat to punish a misbehaving vendor; one lacking market power, cannot. This often isn’t a simple determination; for instance, consider Mozilla. At the core of the Mozilla community, as with most open source communities, is an intense, dedicated group that sees itself as having shared interests and clearly has the will to punish someone who attempts to misuse the Mozilla IP. But do they have the ability? After all, they’re only a tiny fraction of all Mozilla users. The rest are a widely distributed, diffuse group that would never imagine themselves as having much in the way of common purpose, beyond the desire to have a free Web browser. Which constituency matters more in calculating market power? It almost certainly depends on the context.

Some people object to the phrase “market power,” preferring terms like “strength of community” or “trust.” I’m not too worried about what one calls it, but I will say this: once you get past the rhetoric, it mostly boils down to the community’s ability to deliver a credible threat to punish a malfeasant vendor. If the user community ceases to value the project enough to want to defend it against vendor malfeasance, or ceases to be able to act together effectively to deliver that defense, then, however much they value the project individually, it is unlikely to stay open no matter the license.

There are other dimensions to think about, too; for instance, a project having multiple vendors is safer than one with only a single vendor, or none, because non-colluding vendors tend to act in ways that keep each other well-behaved. But those are the biggest two, in my experience so far.

Earlier, you brought up the Sakai and OpenCast projects, both of which have been funded by us (and by other foundations, such as the William and Flora Hewlett Foundation, as well). I believe that these two characteristics are why Sakai and OpenCast, as well as other community source projects, are able to use BSD-style licenses (they actually use the Educational Community License, or ECL, which is almost-but-not-quite the Apache license). Community source software projects produce organizationally complex products deployed by a coherent community of institutions willing and able to exercise market power if needed. For instance, the community of higher education institutions seems to have no trouble understanding their common interest in keeping Sakai’s IP open, even if they’re not Sakai users themselves–and as a group, they seem to have the will and ability to punish vendors that attempt to misbehave. Most vendors sell more than one product into these institutions, so they stand to lose more than they can gain from bad behavior on any single project like Sakai. The result: there is virtually no evidence of significant vendor malfeasance in any of the community source projects, despite the use of a license that in theory allows any vendor to close the code at any time. The closest you can find is the Blackboard patent dispute—which is a challenge to the ownership of the IP, not its licensing, and in which Blackboard has been careful to steer clear of any direct threat to the Sakai community. But would every vendor’s good behavior continue if the community stopped caring about Sakai? I seriously doubt it.

On the other hand, if you have a product which is organizationally simple, as well as having a relatively powerless user community, then get thee to the GPL, because the temptations to steal and close the code just become too great for some vendors to resist. We’ve seen some examples of that, recently, too. Still, don’t believe that the GPL will protect you if your community cannot or will not. If the community is weak enough, nothing can really protect you.

Second, “IP ownership trumps IP licensing.” Some of the commentators on Everyblock that I have read so far are circling around this point, but none has yet followed the logic all the way. All the debate over licensing tends to obscure the reality that final power lies in ownership, not licensing. For a surprising number of situations, licensing is little more than a red herring.

If I own the code, I can issue you a GPL, someone else a BSD, and yet another license to a third party–take a look at the Mozilla licensing scheme sometime, for an example. If I’m also responsible for updating the code, I can change the license to all of you at any time simply by issuing a new version. Sure, you can still use the old version under the old license, but if I really want to make it tough for you to keep using the old version, there are ways. Finally, as you’re seeing with Everyblock, when someone owns the code privately, there’s nothing that prevents someone else from buying the code – often by buying the firm itself – and changing the licensing terms.

I have no insight into MSNBC’s plans for Everyblock. Maybe they’ll close the code; maybe not. Maybe they’ll keep something open but close the commercial services they build on top of it – I don’t know. As your commentators have noted, no one seems to know – and that’s part of the problem with privately owned but open-licensed code. You just never know.

That’s one reason why I tend to be wary about the “commercial OSS” model, no matter what license it uses. In many commercial OSS projects that I’ve seen, even the GPL is effectively just a cover for what is to all intents and purposes a closed code-base, because the owner/vendor is the only entity on earth that has any realistic likelihood of supporting or extending or developing the code further. Ask someone in the MySQL community how protected they feel by their license – or ask the people using Zimbra how they expected to fare if Microsoft bought Yahoo. It’s not about whether the current owner is good, bad, or ugly; it’s about the fact that you can never know whether it will be the same owner tomorrow. That’s a lot of uncertainty on which to base a mission-critical technology choice.

JB: So, given the diverse range of contexts you describe, what specific strategies have you deployed to mitigate these risks?

CM: Good question – and it’s important to emphasize the word “mitigate,” because there are no guarantees and there’s no such thing as absolute effectiveness. One thing we do in our program is to use IP agreements (a contract with the owner of the code to be developed) that require any transfer of ownership to be to an entity which must also agree to the terms of our IP agreement. In a sense, we make the ownership viral, whether or not the license is viral. That’s not a perfect solution, but it appears to be working for us so far.

It also helps that we make our grants to non-profit organizations, which can’t be bought the same way you can buy a private or publicly held firm. When for-profits are involved in our grants, which sometimes happens when grantees decide to contract with for-profit developers, my program (Mellon’s Program in Research in Information Technology) has always required that the non-profit be the IP owner. We are not alone in this; for instance, when several major technology corporations—all for-profits—decided to share and protect some of their own intellectual property in an open environment, they didn’t trust it to a for-profit, but instead created the Eclipse Foundation, a non-profit that owns the Eclipse Project IP. Ditto the Mozilla Foundation.

Still, it bears repeating that mindlessly putting your IP into a non-profit doesn’t eliminate the risk, because it matters how the non-profit is structured and governed: nothing says a non-profit can’t be malfeasant, too, if in somewhat different ways.

JB: Do you think that the Knight Foundation was swindled? Did they get outfoxed by msnbc.com, or do you think they are happy with this outcome?

CM: I have no knowledge about what the Knight Foundation intended – has anybody bothered to ask them? [ed note: this conversation took place before Knight made a public statement] I think it would be foolish simply to assume that the grant makers have been outfoxed by this development: it may have been exactly what they wanted, or just a risk they decided beforehand that it was worthwhile to run. Keep in mind, too, that MSNBC hasn’t said or done anything about closing the code so far. Even if the Knight Foundation did want perpetual openness and the strategy wasn’t perfect, there’s still a chance that they’ll get what they wanted.

All that’s really happened here is that the sense of security held by at least some members of the Everyblock community has been shaken by the purchase news. But it was always a false sense of security; at this moment, as far as I can tell, nothing objective about the openness of the project has actually changed.

JB: Do you have any closing thoughts about this deal, or what you think grantmakers and open source advocates can learn from it?

CM: If Everyblock serves to help some members of the openness community to get past their ideological blinders and recognize that IP ownership and licensing decisions are subtle challenges with relatively few simple, definitive answers, it will have done some good. After all, even the best source code is relatively ephemeral, but we can hope that such wisdom will last forever.

JB: Thanks so much for your time and wisdom. I know a lot of people who were quite surprised by this turn of events, and it feels like we all need a crash course in IP law /and/ sociology to navigate the intricacies of this political economy. Even veteran lawyers and free software evangelists are often confused by many of these complexities. I really hope that this case and your analysis will better inform future work of this type. Good luck keeping it open (and real)!

CM: Thanks very much. I hope what I had to say is useful.
