Humane Communications over Human Networks

Today I attended a barcamp-style CrisisCamp in NYC where volunteers from around the world gathered physically and virtually to brainstorm, organize, coordinate, and work to help alleviate the suffering in Haiti (CNN CrisisCamp coverage). When people talk about crowdsourcing relief for this disaster, CrisisCamps around the country are helping assemble the sources (and faces) behind these mysterious crowds.

Self-Organized Collaborative Production and Action

It was amazing to see these strangers converge, congregating around the familiar communication modalities of wikis, mailing lists, IRC, and now Twitter and Google Wave. While these torrential rivers of information are overwhelming, some subcultures are developing strategies for managing and synthesizing these flows. A main organizing hub is the CrisisCommons wiki, and the hashtags #cchaiti and #haiti are being used to ‘tag’ disparate social media around these efforts.

Today’s NYC event drew over a dozen people: techies, community organizers, students, Haitians, UN reps, librarians, union workers, journalists, and beyond. I have been closely following the Ushahidi/SwiftApp project, and their collaborative filtering curation strategy is in full swing. OpenStreetMap is proving to be an essential piece of infrastructure for mapping data, and the New York Public Library has rescheduled the launch of their amazing new map rectifying tool to help make sense of Haitian geography. Shockingly, there are very few maps of Haiti, and their collection might significantly help when overlaid on satellite imagery. This can assist relief workers who need to know what neighborhoods are called, which buildings were where, and so on. If you are familiar with Haitian geography, you can help rectify maps here.

The Sahana Disaster Management Project is also looking for Python developers to help scale their software.

Strategic Communication Flows

Strategically, I was struck by the asymmetry of information flows. Many of the efforts seemed to be focused on collecting Haitian data and representing it to Americans and NGOs working on the ground in Haiti. But not too many Haitians have iPhones…

There seems to be very little focus on creating flows of information back into Haiti – information from the outside world directed to Haitians – or on creating infrastructure for Haitians to communicate with each other. Beyond that, I am not aware of any coordinated efforts to establish non-corporate-mediated, two-or-more-way channels of information between Haitians and Haitians in the diaspora.

I was reminded of the recent Iranian uprising. It was a wonderful moment of microblogging glory, although few Americans appreciated how the Iranians were able to receive lifelines of information from outside of Iran (like where to find proxy servers), and were also using the platform to communicate with each other, within Iran.

I was struck by what an important role traditional mass broadcast media might play in a crisis situation. People on the ground desperately need information. They need to know which symbols indicate that a house has already been searched, where the next food/water/medicine drop will be, and that the biscuits are good, not expired. They also need entertainment and news – à la Good Morning Vietnam. And messages of consolation, emotional support, solidarity, and even song and laughter. Maybe even Bryant Park style movie nights.

Hybrid Networks

Electricity and ISPs are largely down. There are trickles of bandwidth available, and some Haitians have made it onto Facebook and cell phones.

So, what could a hybrid, analog-digital network look like?  Low-power FM? High-speed copy machines? Blackboards?

It’s actually not that hard to imagine a hybrid network, composed of people, FM radio, blackboards, printing presses, portable video projectors, cell phones, SMS, and the Internet. Really, whatever is available.

The Earth Institute and UNICEF Innovation have been deploying RapidSMS on the ground in Africa, working in villages where a single cell phone operator brokers vital information to a blackboard in the town square – transforming a cell phone into a mass broadcast device, reminiscent of the wall newspapers of communist Russia.
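The cell-phone-to-blackboard pattern is simple enough to sketch in code. This is a hypothetical illustration in Python (not the actual RapidSMS API): incoming SMS reports are queued, deduplicated, and rendered as a short numbered digest the operator can copy onto the board.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    sender: str        # phone number of the reporting villager
    text: str
    received: datetime

class BlackboardBroker:
    """Hypothetical broker: turns an SMS inbox into a blackboard digest."""

    def __init__(self, max_lines: int = 10):
        self.max_lines = max_lines   # the blackboard only fits so many lines
        self.inbox = []

    def receive(self, sender: str, text: str) -> None:
        """Queue an incoming SMS; skip exact duplicate reports."""
        text = text.strip()
        if any(m.text == text for m in self.inbox):
            return
        self.inbox.append(Message(sender, text, datetime.now()))

    def digest(self) -> str:
        """Render the newest messages as a numbered digest for the board."""
        recent = self.inbox[-self.max_lines:]
        return "\n".join(f"{i}. {m.text}" for i, m in enumerate(recent, 1))

broker = BlackboardBroker()
broker.receive("+509555001", "Water drop at the church, 3pm")
broker.receive("+509555002", "Clinic has reopened on Rue Capois")
broker.receive("+509555003", "Water drop at the church, 3pm")  # duplicate, ignored
print(broker.digest())
```

The point of the sketch is the asymmetry it exploits: one phone with a trickle of connectivity, plus chalk, becomes a broadcast medium for a whole square.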

And if a low-power FM radio station were set up, the DJ could presumably retransmit messages coming in over the Internet or the cell phones (kind of the reverse of the activist who retransmitted police scanner transmissions over Twitter at the G20 summit protests).

Haitians would know that if they needed to get a message to a loved one in Haiti, they could get it to the radio station and it might be transmitted back into the local community. Messages would travel over human and technological networks, routed intelligently by humans where technology leaves off.
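That routing idea can be sketched too. Here is a minimal, hypothetical Python illustration of falling back from high-tech channels to low-tech ones; the channel names are my own illustrative assumptions, not any real system:

```python
# Try channels from highest-tech to lowest-tech; where technology
# leaves off, a human physically carries the message.
FALLBACK_ORDER = ["internet", "sms", "fm_radio"]

def route(message, available):
    """Return (channel, annotated message) for the first usable channel."""
    for channel in FALLBACK_ORDER:
        if channel in available:
            return channel, f"[{channel}] {message}"
    # Last resort: hand delivery by a person heading that way.
    return "hand_delivery", f"[hand_delivery] {message}"

channel, note = route("Marie is safe, staying near the church",
                      available={"fm_radio"})
print(note)  # [fm_radio] Marie is safe, staying near the church
```

The design choice is that no channel is privileged: the Internet, a DJ, and a courier are interchangeable hops in the same network.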

What would the programming on this radio station look like? They could have hourly news and announcements, read out community messages submitted by listeners, convey messages of condolence and support from the outside world, play music, prayer, talk radio, “call in” shows – anything, really. Most importantly, this radio would be locally produced, with the local community deciding what to play. There is a precedent for local radio in a disaster: KAMP, set up in the Astrodome after Katrina with the help of the fantastic Prometheus Radio Project volunteers, though authorities tried to shut down the “pirate” lifeline.

Turning Messages in Bottles into Skywriting

Today I met someone who is working with local Haitian communities in NYC. We are both very concerned by how CNN has dominated the coverage, frittering away its 24/7 news coverage on looping segments and circling like a vulture waiting for violence to erupt. We have to understand the danger of a single story.

We were both very interested in creating alternate channels of communication for Haitians to speak for themselves, and to engage in dialogue with their relatives in the diaspora.

Here is one project we could run over the kind of hybrid analog-digital/human-machine sneakernet described above.

Haitians could send video messages in a bottle. The community here could gather to watch and reply to those videos. Say the videos and the replies were limited to 3 minutes each. The original message and the reply could be bundled and sent back to Haiti – not unlike sending a letter before the postal service existed, when you would give it to someone heading to the recipient’s town.

Initially, a few Flip cameras on the ground in Haiti could capture messages, with the video transmitted home over the Internet, or even sent back to the States on memory cards with a courier. Eventually, when bandwidth begins to open up, we might be able to imagine a live, synchronous stream. But before then, we can imagine asynchronous video messages being sent back and forth between Haiti and Haitian communities in the diaspora.

On the Haitian end, the replies could be projected and played back to groups gathered around projectors at night. On our end, distribution is trivial, and the message might easily get to the precise person it was intended for through community social networks. A Haitian could send a video message in a bottle to Brooklyn, and it would not take long for their relatives to know they were safe. Replies could include messages of hope, compassion, and support.

Most importantly, independent lines of communication could be opened. As a secondary benefit, if the messages were disseminated publicly (say, on YouTube), secondary waves of help could create journalistic highlights, extract crucial data to feed the informatics systems (sourced to the originating testimony), and we could start hearing each other’s voices.

At the moment, our aid feels like tossing a homeless person a few dollars while averting our gaze, when what they really need is for us to look them in the eye, recognize their humanity, and have a conversation with them. We are electronically strip-searching the people of Haiti, when (forgive the Avatar reference) we need to see each other.

Theory and Practice

A few closing thoughts to this already rambling post.

I attended the event for many reasons including:

  • My research interests in the politics of memory, information flux, distributed cognition, collaborative production, and collective action.
  • A seminar I am participating in this Spring that takes up the themes of collective memory, pedagogy, digital media, and trauma (using the 9/11 Project Rebirth as a point of departure, but conceptualizing responses to collective trauma ranging from Katrina to environmental refugees, and beyond).
  • Because the situation is horrifying and desperate, and I have the sinking feeling that no one has a handle on how to help the Haitians. Worse, I fear that many are already beginning to view this event as a rhetorical chip, angling to advance their own agendas on the wave of this shock.

The importance of mass media in creating a sense of (imagined) community is well theorized in communications studies.  Haiti’s physical infrastructure is shattered, but we can very quickly reconstruct its communications infrastructure and help them reconstitute their sense of identity and community.

Cultural theorists have criticized the pacifying power of mass media – but the UN is forecasting a sharp increase in violence, riots, and rape. If ever there was a time to distract and pacify the populace – or should I say, to provide a constructive channel for people to express and vent their energies – this is it.

If we want to turn this disaster porn on its head, we should just give Haitians the IP rights to all the images pouring out of their country right now. The profits would be enough to rebuild the country ten times over.

The life-saving importance of information should not be underestimated – the only thing more important than food, water, or medicine is hope.

Update: This brain[storm/dump] has now been transformed into an actual project proposal at the Crisis Commons wiki – The Open Solace Haiti Project, whose first priority is the Haitian Video Postcard Exchange Network.

[Special thanks to Mar Cabra and Rasmus Nielen for being a sounding board for some of these scattered ideas, John Durham Peters, whose brilliant thought broadcasts on Broadcasting and Schizophrenia induced my thinking, and Levanah and Stan Tenen and the work of the Meru Foundation whose spiritual teachings helped shape these ideas.]

Reconstruction time again

This week the j-school was abuzz with the conversation successfully provoked by the publication of a detailed, comprehensive report, complete with recommendations, on how to save the endangered species of professional journalists.

One of the report’s two primary authors is my professor Michael Schudson, a thoughtful scholar and a great teacher who is eminently approachable for advice. My friend Dr. C.W. Anderson was the research assistant on the project, and I know he worked pretty hard to make this happen, though he didn’t go on a world tour with the authors.

The report was solid, and it managed to gain a lot of attention and stir up a bit of a ruckus. The recommendations seemed reasonable to me, though not quite as radical as I would have hoped…

I have been involved in quite a few conversations around the future of journalism this year, and while there has been a great deal of conversation around how the forms of organization around journalistic production are changing, there has been very little talk about how what’s being produced is changing too.

I am reminded of Bob Stein’s predictions about the Future of the Book. One of his central riffs is his epiphany that the digital book is much less about ebooks and multimedia, and much more about a shift away from the book as a static, finished, complete object. He imagines a new emergent form in perpetual beta, with multiple authors, around which revisions, annotations, and communities form. Any of his talks from the last few years probably picks up on this theme.

While many journalists are talking about producing articles using new media forms, the discussions remind me a bit of the early days of cinema, when they used to film plays.

I’m imagining a shift in journalism towards interactive storytelling, cumulative aggregation, and distributed collaboration. We have begun to see hints of experiments along these lines in projects like Times Topics, SwiftApp, and Mapping Controversies, but this NPR project profiled last year in CJR really hits the mark: So Cool: How an economic weather map changed the climate. I think these strategies might easily apply to prose, not just data, interactive graphics, and maps.

Comparing journalism with education: will journalism use new media only to create the equivalent of a jazzed-up, one-way lecture? What does interactive storytelling even look like? How will we teach the next generation of journalists to create works that are designed to be picked up, re-appropriated, and remixed?

With these ideas in mind, I would have loved to see some recommendations in this report designed in anticipation of this future, not merely to prop up yesterday’s decaying models. The patchwork of the future can best be supported by encouraging greater transparency, open licensing, and a culture of collaboration. What about attaching open licensing mandates to this foundation support? Mandating the sharing of primary sources? Teaching journalists of the future to share, and to learn from their readers? These aren’t all policy recommendations, but I think they need to be thought through and woven into this conversation.

PS – While the future of journalism may be difficult to discern, the future of the newspaper suddenly seems pretty clear 😉

Interview: Christopher Mackie on Knight’s Hyperlocal Gambit

Last week I reflected on the Everyblock acquisition. Since then, Knight’s journalism program director has blogged about their perspective on the sale, and some great conversations have continued. I have also had a wonderful opportunity to discuss the purchase with Christopher Mackie, Associate Program Officer in the Mellon Foundation’s Research in Information Technology program, who is closely involved in Mellon-funded software initiatives.

Here are some excerpts from our conversation:

JB: Thanks so much for taking the time to share some of your thoughts on the recent purchase of Everyblock. As you know, Everyblock is a foundation-sponsored, open-source journalism startup that was recently acquired by MSNBC. Even though the Knight Foundation mandated that all the software they funded be released under an open (GPLv3) license, the future openness of this application is now uncertain. As an important funder of many valuable open source software projects, I am wondering if you could share your reactions to this news? How do you feel about the outcome? Did the deal take you by surprise?

CM: Hi Jonah – good to talk with you! Before we start, let me be clear about a couple of things. First, I don’t speak for the Mellon Foundation on this, so all I can share are my own views. Second, I’m by no means the most knowledgeable person around when it comes to intellectual property issues. In fact, I can find several people who know more than I do without even leaving the building at Mellon. What I do have is a particular perspective on IP issues that has been developed in large part from my work with our information technology program. I hope that my perspective is useful, but I wouldn’t want anyone confusing it with either an official Mellon perspective or some sort of consensus view among experts. As far as I can tell, consensus only exists among IP experts on issues that no one cares about.

That said, as I follow the conversation, what appears to be happening with Everyblock is that a number of people are seeing for the first time some issues that have been seen before in other parts of the software space. In the process of thinking through the implications of those developments, they’re reinventing old arguments, most of which are insufficiently nuanced to be valid. Eventually, they’ll work it out, but right now, many people are still looking for too-simplistic answers.

JB: This moment is such a great learning opportunity to teach grantmakers and journalists some really important lessons about Intellectual Property, and the complexities of Open Source software, community, and culture – is there anything specific you think we can learn from this transaction?

CM: Rather than try to parse the many issues individually, let me just suggest a couple of basic principles that I use when I’m trying to advise projects on licensing issues:

First, “the context is more important than the license.” The debate over BSD/GPL tends to take place at a very abstract, ideological level. This is the wrong level: when it comes to licensing, I believe that you really need to get down and grub in the dirt. Licensing decisions are almost always made better when they’re made in a carefully contextualized fashion.

The single most important contextual dimension I know concerns the “organizational complexity” of the product. That’s my own, made-up term to describe the need to integrate your project with other organizational systems, human and software. Organizationally complex software requires significant adaptation or customization in most installations – which implies the need for significant vendor involvement in many installations. A good example of an organizationally complex system is something like a financial system, which tends to have to connect to all sorts of other software and to interact with all sorts of human workflows. Good examples of organizationally simple software are things like a Web browser or a word processor, which ought to work out-of-the-box without any customization or integration.

If you have an organizationally complex product, BSD licenses tend to work better than GPL. Why? BSD licenses don’t scare off the vendors who have to poke around the insides of the product in order to support it, and who worry that their private IP may be compromised by an accidental contact with a GPL’d product’s innards. I’ve seen the arguments about whether this is actually a valid concern, by the way, and I’m not particularly invested in learning the right answer, if there even is one. As long as vendors believe or fear it to be true – and many do – then it might as well be true. Without vendors, it’s hard for an organizationally complex project to thrive, so BSD tends to win out in those sorts of projects.

A second dimension concerns the degree of “market power” held by the users. Market power depends on the ability of users to recognize themselves as having shared interests and then to act on those shared interests. A user community that has market power can issue a credible threat to punish a misbehaving vendor; one lacking market power, cannot. This often isn’t a simple determination; for instance, consider Mozilla. At the core of the Mozilla community, as with most open source communities, is an intense, dedicated group that sees itself as having shared interests and clearly has the will to punish someone who attempts to misuse the Mozilla IP. But do they have the ability? After all, they’re only a tiny fraction of all Mozilla users. The rest are a widely distributed, diffuse group that would never imagine themselves as having much in the way of common purpose, beyond the desire to have a free Web browser. Which constituency matters more in calculating market power? It almost certainly depends on the context.

Some people object to the phrase “market power,” preferring terms like “strength of community” or “trust.” I’m not too worried about what one calls it, but I will say this: once you get past the rhetoric, it mostly boils down to the community’s ability to deliver a credible threat to punish a malfeasant vendor. If the user community ceases to value the project enough to want to defend it against vendor malfeasance, or ceases to be able to act together effectively to deliver that defense, then, however much they value the project individually, it is unlikely to stay open no matter the license.

There are other dimensions to think about, too; for instance, a project having multiple vendors is safer than one with only a single vendor, or none, because non-colluding vendors tend to act in ways that keep each other well-behaved. But those are the biggest two, in my experience so far.

Earlier, you brought up the Sakai and OpenCast projects, both of which have been funded by us (and by other foundations, such as the William and Flora Hewlett Foundation, as well). I believe that these two characteristics are why Sakai and OpenCast, as well as other community source projects, are able to use BSD-style licenses (they actually use the Educational Community License, or ECL, which is almost-but-not-quite the Apache license). Community source software projects produce organizationally complex products deployed by a coherent community of institutions willing and able to exercise market power if needed. For instance, the community of higher education institutions seems to have no trouble understanding their common interest in keeping Sakai’s IP open, even if they’re not Sakai users themselves–and as a group, they seem to have the will and ability to punish vendors that attempt to misbehave. Most vendors sell more than one product into these institutions, so they stand to lose more than they can gain from bad behavior on any single project like Sakai. The result: there is virtually no evidence of significant vendor malfeasance in any of the community source projects, despite the use of a license that in theory allows any vendor to close the code at any time. The closest you can find is the Blackboard patent dispute—which is a challenge to the ownership of the IP, not its licensing, and in which Blackboard has been careful to steer clear of any direct threat to the Sakai community. But would every vendor’s good behavior continue if the community stopped caring about Sakai? I seriously doubt it.

On the other hand, if you have a product which is organizationally simple, as well as having a relatively powerless user community, then get thee to the GPL, because the temptations to steal and close the code just become too great for some vendors to resist. We’ve seen some examples of that, recently, too. Still, don’t believe that the GPL will protect you if your community cannot or will not. If the community is weak enough, nothing can really protect you.

Second, “IP ownership trumps IP licensing.” Some of the commentators on Everyblock that I have read so far are circling around this point, but none has yet followed the logic all the way. All the debate over licensing tends to obscure the reality that final power lies in ownership, not licensing. For a surprising number of situations, licensing is little more than a red herring.

If I own the code, I can issue you a GPL, someone else a BSD, and yet another license to a third party – take a look at the Mozilla licensing scheme sometime, for an example. If I’m also responsible for updating the code, I can change the license to all of you at any time simply by issuing a new version. Sure, you can still use the old version under the old license, but if I really want to make it tough for you to keep using the old version, there are ways. Finally, as you’re seeing with Everyblock, when someone owns the code privately, there’s nothing that prevents someone else from buying the code – often by buying the firm itself – and changing the licensing terms.

I have no insight into MSNBC’s plans for Everyblock. Maybe they’ll close the code; maybe not. Maybe they’ll keep something open but close the commercial services they build on top of it – I don’t know. As your commentators have noted, no one seems to know – and that’s part of the problem with privately owned but open-licensed code. You just never know.

That’s one reason why I tend to be wary about the “commercial OSS” model, no matter what license it uses. In many commercial OSS projects that I’ve seen, even the GPL is effectively just a cover for what is to all intents and purposes a closed code-base, because the owner/vendor is the only entity on earth that has any realistic likelihood of supporting or extending or developing the code further. Ask someone in the MySQL community how protected they feel by their license – or ask the people using Zimbra how they expected to fare if Microsoft bought Yahoo. It’s not about whether the current owner is good, bad, or ugly; it’s about the fact that you can never know whether it will be the same owner tomorrow. That’s a lot of uncertainty on which to base a mission-critical technology choice.

JB: So, given the diverse range of contexts you describe, what specific strategies have you deployed to mitigate these risks?

CM: Good question – and it’s important to emphasize the word “mitigate,” because there are no guarantees and there’s no such thing as absolute effectiveness. One thing we do in our program is to use IP agreements (a contract with the owner of the code to be developed) that require any transfer of ownership to be to an entity which must also agree to the terms of our IP agreement. In a sense, we make the ownership viral, whether or not the license is viral. That’s not a perfect solution, but it appears to be working for us so far.

It also helps that we make our grants to non-profit organizations, which can’t be bought the same way you can buy a private or publicly held firm. When for-profits are involved in our grants, which sometimes happens when grantees decide to contract with for-profit developers, my program (Mellon’s Program in Research in Information Technology) has always required that the non-profit be the IP owner. We are not alone in this; for instance, when several major technology corporations—all for-profits—decided to share and protect some of their own intellectual property in an open environment, they didn’t trust it to a for-profit, but instead created the Eclipse Foundation, a non-profit that owns the Eclipse Project IP. Ditto the Mozilla Foundation.

Still, it bears repeating that mindlessly putting your IP into a non-profit doesn’t eliminate the risk, because it matters how the non-profit is structured and governed: nothing says a non-profit can’t be malfeasant, too, if in somewhat different ways.

JB: Do you think that the Knight Foundation was swindled? Did they get outfoxed by MSNBC, or do you think they are happy with this outcome?

CM: I have no knowledge about what the Knight Foundation intended – has anybody bothered to ask them? [ed note: this conversation took place before Knight made a public statement] I think it would be foolish simply to assume that the grant makers have been outfoxed by this development: it may have been exactly what they wanted, or just a risk they decided beforehand was worthwhile to run. Keep in mind, too, that MSNBC hasn’t said or done anything about closing the code so far. Even if the Knight Foundation did want perpetual openness and the strategy wasn’t perfect, there’s still a chance that they’ll get what they wanted.

All that’s really happened here is that the sense of security held by at least some members of the Everyblock community has been shaken by the purchase news. But it was always a false sense of security; at this moment, as far as I can tell, nothing objective about the openness of the project has actually changed.

JB: Do you have any closing thoughts about this deal, or what you think grantmakers and open source advocates can learn from it?

CM: If Everyblock serves to help some members of the openness community to get past their ideological blinders and recognize that IP ownership and licensing decisions are subtle challenges with relatively few simple, definitive answers, it will have done some good. After all, even the best source code is relatively ephemeral, but we can hope that such wisdom will last forever.

JB: Thanks so much for your time and wisdom. I know a lot of people who were quite surprised by this turn of events, and it feels like we all need a crash course in IP law /and/ sociology to navigate the intricacies of this political economy. Even veteran lawyers and free software evangelists are often confused by many of these complexities. I really hope that this case and your analysis will better inform future work of this type. Good luck keeping it open (and real)!

CM: Thanks very much. I hope what I had to say is useful.

Freedom of the (hyperlocal) Press?

Heh. I enjoyed a nice long weekend off, and a few of my worlds collided while I was away…

This weekend MSNBC snatched up Everyblock, the Knight Foundation-funded project, and now a bunch of people I know – from journalism, free software, law, and software development – are all talking about the ethics and implications of choosing different Free/Open Source licenses for grant-funded projects and experiments in sustainable journalism 😉

The Knight Foundation has been funding innovation in technology and journalism for a few years, and lately has been mandating open licenses for all the code and content they sponsor. Knight is not alone. Mellon, Hewlett, OSI, NSF, NIH, and other grantmakers have all begun to encourage that the IP they fund be as open as possible (to varying degrees). Seems obvious: if you want to maximize your philanthropic ROI, make sure that the future can extract the full potential of the work you fund – not be shackled, stifled, or duped by the misapplication of intellectual property.

I continue to be hopeful that pressure from funders might represent a tipping point for openness. Many organizations need bunches of carrots to overcome their knee-jerk institutional momentum to hoard – even if sharing costs them nothing (in dollars, labor, or resources, although sometimes transparency can take its toll on egos).

But is all openness created equal? No way am I going to attempt to recreate the great BSD-GPL wars in this post, but I will say that it stings every time I hear someone accuse the GPL of being viral (are vaccines viral?). I also wince every time I see a vibrant open source community make an argument against the GPL – I have seen this happen around Sakai, OpenCast, and even lately around Plone and its plugins.

[From my perspective, it’s the purportedly unencumbered communities that are really viral, as they continue to ratchet GPL communities down to lowest-common-denominator licenses by whining about how they can’t use GPL code (which they can, provided they share alike). But don’t take my word for it – ask Zed why he (A/L)GPLs.]

To me, first and foremost, the GPL signals trust. As I understand it, this legal instrument has helped enable institutions and individuals, large and small, to trust each other without fear of being stabbed in the back or taken for a sucker. In the end, the GPL is just a license, and while it has increasingly been taken more seriously, enforcement is never fun (except for lawyers, I guess).

Eben Moglen is the founder of the Software Freedom Law Center and also the author of the GPL, but the firm can’t officially shill for the GPL. They care enough about freedom to continue to help any open software communities in need, but I sometimes wonder how they manage to bite their tongues and not scream “We told you so” or “We warned you.” Some of the same communities who have scorned the GPL have had to turn to the SFLC to bail them out when they were attacked by patent sharks. Perhaps the Everyblock story will serve as a cautionary tale, and people will learn to start taking the SFLC’s legal advice seriously. I believe that history will show that it was the GPL that ultimately averted Microsoft’s monopoly – no license could have accomplished this without the boundless energy and will of the open source developers, but the GPL was the pentagram restraining a very bad actor.

But not everyone sees the world this way, and there are other valid perspectives. In conversations I have had with Jacob Kaplan-Moss (who co-founded Django, alongside Everyblock's Adrian Holovaty), Jacob voiced a strong conviction that transparency, openness, and sharing are better ways to develop software, and that those values ought/need not be legally mandated. He prefers to participate in a community where those values are understood and shared. Some might call his perspective slightly naive (while others might trace some of these attitudes to the roots of Django and the proprietary journalistic corporation that birthed it), but James Vasile makes a very similar point:

It might be disappointing that msnbc.com can close-source Everyblock, but we still have the code. If the code is valuable to the community, we can take the last published version and use it as we want. If msnbc.com trades a healthy free software project for a proprietary development cycle, we've lost nothing, and they have thrown away the most important asset they had – the community behind and around the code.

As for the future of Everyblock, I am still hopeful that rationality will prevail. Everyblock runs on an incredibly sophisticated stack of open software – python, postgres (with GIS extensions), django (or something very, very similar) – and its new owners will not get very far with this software without engaging these communities. In the 21st century, owning code is a liability, not an asset. Sure, they can try to leech and poison the well, but they will meet pretty staunch resistance – trouble hiring programmers, getting their patches accepted, maintaining and upgrading – good luck going it alone. They will end up with the IE of hyperlocal news websites.

I also don’t think it’s necessarily evil for a corporation to participate in this ecology, or for funders to seed new user interfaces or patterns, and then hand off the innovation to capital. Sustainability is really quite complicated, especially as we embark on hybrid economies. And on the open side, it can be difficult for funders to keep software honest.

If Everyblock has a real value right now, it’s in the relationships they have forged with the data providers, and the effort they put into scraping and formatting this data. What we want from them now isn’t just an open platform, it’s also open APIs, to get at the data they are collecting and harvesting. Code is only one corner of freedom’s jigsaw puzzle. Never forget about the data. And I am not really sure what Knight could have done to better protect the future openness or integrity of that data.

Shekhinah Power

Is it possible that our ancestors harnessed the power of electricity?

It’s logically possible that electric motors pre-dated steam engines, and tantalizing writings combined with circumstantial evidence suggest that the ancients understood more than static electricity and simple batteries.

This question is yet another reformulation of the regard we hold for the wisdom of the ancients, and whether their models and perspectives might offer anything meaningful to today’s scientists and philosophers. Even the alternative researchers who investigate these claims often feel the need to invoke Atlanteans, Martians, or time travelers as the deus ex machina to explain their origin.

A recent constellation of events and ideas (MiT6, Intentional Energy, Faith’s Transmission) in my life has brought me back to this question.  If the ancients had developed a theory of everything, how might they have encoded this message for transmission into the future? Would their theory of everything incorporate/integrate subjectivity and consciousness, unlike our generation’s leading contenders?

The following free association provides a glimpse at what a message like that could look like.

Religion is a process of turning your skull into a tabernacle, not of going up to Jerusalem once a year.[*]

I have been investigating spiritual mysteries for decades – I maintain my own personal X-Files, some of which are documented on this blog. In June ’95 I even traveled to West Africa following up a lead on the descendants of Joseph and the Ark of the Covenant. (I found everything I was looking for, and more, but that’s a story for another post.)

Crackpots and scholars alike have recognized the electrical potential of the Ark. Most famously, Erich von Däniken popularized this interpretation of its intrinsic physical properties in Chariots of the Gods.

The blueprints of the Ark are described in intricate detail in Exodus – gold box/wooden box/gold box. Gold is one of the best conductors on the periodic table (think stereo cables), so the Ark was an electromagnetic sandwich — conductor/insulator/conductor — the very definition of a capacitor: a physical device capable of storing electrical charge, a battery of sorts.
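For fun, here is a back-of-envelope sketch of how much charge a gold/wood/gold box of the Ark's biblical dimensions could actually hold, treating it as a crude parallel-plate capacitor. The cubit length, wood thickness, permittivity, and voltage below are all my own guesses, not anything the text specifies:

```python
# Back-of-envelope capacitance of the Ark, modeled as a parallel-plate
# capacitor: gold / wood / gold.  Every number here is an assumption --
# Exodus gives the dimensions in cubits; the wood thickness and the
# permittivity of dry acacia are guesses.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
CUBIT = 0.457             # meters (one common estimate of the cubit)

length = 2.5 * CUBIT      # Exodus 25:10 -- 2.5 x 1.5 x 1.5 cubits
width = height = 1.5 * CUBIT
# treat the full surface of the box as the plate area
area = 2 * (length * width + length * height + width * height)

eps_r = 2.0               # relative permittivity of dry wood (assumed)
gap = 0.05                # wood thickness between the gold layers, m (assumed)

C = EPS0 * eps_r * area / gap          # capacitance, farads
V = 10_000                             # a healthy static voltage (assumed)
energy = 0.5 * C * V**2                # stored energy, joules

print(f"capacitance ~ {C * 1e9:.1f} nF, energy at 10 kV ~ {energy * 1000:.0f} mJ")
```

By this estimate the Ark lands in nanofarad territory – a genuine capacitor, but one holding roughly a doorknob-shock's worth of energy, so the "battery" reading would need either exotic materials or a far better geometry than my crude model assumes.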

That the Ark might have held electrical charge is consistent with the stories told about it. People who touched it died instantly, it was carried by the Levites into battle, and its divine fire was even known to kill the wicked.

Bracketing for a moment the source of this wisdom, if the Ark was used as a battery, the next logical questions are: How was it charged? What did it power?

They Kept Going and Going

The utility of electrical power in ancient times is simple. Even untamed electricity might have been quite valuable in an era when its sparks would have been regarded as miraculous. Speculative research suggests some of the applications that electricity could have powered – lighting up the high priest’s breastplate, or perhaps even a transmitter or a manna machine.

Charging is a bit more complicated. For years my imagination has been conceptually trapped inside the holy of holies — struggling to imagine how on earth the high priest might have been able to transform spiritual energy (or information) into energy we could do work with on this planet (e.g. electrical energy). Recently I realized that the Tabernacle was actually flowing with physical energy. The priests’ entire system of sacraments and service could have been organized around collecting, transforming, storing, and harnessing electrical energy.

The priests were playing with fire, pouring gallons of blood through intricate piping, and baking bread and cooling it on strange conductive structures (see the Mishkan or watch the movie) – and I had an epiphany around the suggestive “potential” of the priests’ very strange uniforms.

The Israelite priests were actually commanded to wear uniforms which juxtaposed wool and linen — a combination of materials forbidden to the laypeople, and also quite capable of holding a static charge (especially with the help of balloons 😉 ). Additionally, they wore the fringes whose craft has been preserved by Orthodox Jews to this day. Imagine if these fringes were wrapped in wire instead of cotton — they could have stepped the voltage of the charge flowing through them up or down. It is not necessary to demonstrate that these ritual artefacts were ever made to these specifications; they testify to the fact that the Israelites had the knowledge and skill to braid electrically sound cabling.

Still Suits for Charge

Here is one possible scenario: the priest reports for duty, grounds himself on the Temple grid, works all day long generating a bit of static charge, and then deposits that charge into the power bank of the Ark of the Covenant.

What if all of the activities conducted in the Tabernacle were oriented around collecting, transforming, and storing charge?  Would this scientific/rational explanation for the miracles in the desert denigrate or diminish their significance?  Or, would this kind of explanation elevate their status, and help remind people of how miraculous the world is on a continuous basis?


I am currently in discussions with legal counsel about the possibility of patenting ideas related to and inspired by my interpretation of the biblical accounts of the tabernacle artefacts and the corresponding priestly activities. Regenerative living suits might have incredible potential to help raise energy awareness (and in turn, responsibility, and intentionality). I doubt that I would be able to recapture very much electricity from my everyday motions, but perhaps enough to play a few songs on my mp3 player or talk for a few minutes on my cell phone. I know if I were camping, this energy would be priceless.
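A quick back-of-envelope, with every figure an assumption on my part, points the same way: static sparks alone are negligible, but a modest kinetic harvester could plausibly bank a few songs' worth of power over a long walk:

```python
# How much juice could a "regenerative living suit" plausibly bank?
# All figures are round-number assumptions for the sake of scale.

static_charge = 1e-6        # coulombs -- a typical shuffle-on-carpet charge
static_voltage = 10_000     # volts
e_static = 0.5 * static_charge * static_voltage   # one discharge, joules

harvest_power = 0.1         # watts -- an optimistic wearable kinetic harvester
walk_seconds = 3600         # an hour's walk
e_harvest = harvest_power * walk_seconds          # joules banked

mp3_power = 0.1             # watts -- small mp3 player draw (assumed)
song_seconds = 180          # a three-minute song
e_song = mp3_power * song_seconds                 # one song's worth, joules

print(f"one static zap:  {e_static * 1000:.0f} mJ")
print(f"an hour's walk:  {e_harvest:.0f} J (~{e_harvest / e_song:.0f} songs)")
```

So the priests' wool-and-linen sparks would barely flicker an LED, but harvesting actual motion – which is what a regenerative suit would do – is a different story.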

If I pull this off, would the Israelites need to license my patents when they rebuild their temple?  I suppose they could always just relinquish their hold on Intellectual Property entirely… I would be happy either way 😉

I have to admit, I am amused just thinking about the testimony to invalidate my patent based on biblical prior-art. Intriguingly, this historical hypothesis is testable…

See, I have called by name Bezalel the son of Uri, the son of Hur, of the tribe of Judah; and I have filled him with the spirit of God, in wisdom, and in understanding, and in knowledge, and in all manner of workmanship, to devise skilful works, to work in gold, and in silver, and in brass, and in cutting of stones for setting, and in carving of wood, to work in all manner of workmanship. [*]

O.V. High

I have to thank my friend and colleague Clayfox for comparing (positively) the vibe at this weekend’s fabulous Open Video Conference to High School. The optimism, diversity, and composition of the crowd was really inspiring.

In some ways, this conference might as well have been called the “Independent Media” conference, but of course, if it were, the right people wouldn’t have attended. Somehow they managed to attract people involved with every layer of the stack needed to create independent media. Subcultures representing hardware, HTML5, metadata, content, law, production, funders and more were all represented.

To make independent new media, you either need to understand all of these details, or know someone who does.  I don’t think I have ever been in a room with this particular blend of expertise and interests before.

The networking was great, and my office was closely involved in making the education stuff at this conference happen (I have a great job). At the conference we announced the liberation of a great piece of software – VITAL is free! Run, VITAL, Run.

The highlight of the talks had to be Amy Goodman’s inspiring speech. I had seen her introduce Chomsky last week, and was left a little bummed out by his talk, since it was blow after blow of what’s broken in the world, with very little vision and no call to action. You don’t hear too many female preachers, but Goodman has really mastered a hypnotic cadence – speeding up to fit in a lot of ideas, but slowing down for emphasis. Her sound bites are eminently tweetable (twitter essentially replaced irc at this conference, and there was an incredibly active backchannel around the #openvideo tag/frequency/channel).

Benkler also opened with fresh material – he has clearly been thinking about journalism in the wake of this year’s collapses (and maybe even our CDPC conference?). It is amusing to think that between Benkler and Moglen (and his metaphorical corollary to Faraday’s law), it might be the sociologically-inclined lawyers who arrive at a theory of creativity (instead of the cognitive scientists). And Zittrain covered for the missing Clay Shirky, and pulled off a funny and intelligent talk.

Many other highlights which I hope to curate once the video is all posted and I have a chance to decompress. I know I should have gone to more talks that I didn’t belong at, but I kept getting pulled in to great conversations…

Kudos to the organizers for pulling off a small miracle. I’ve been to many conferences that cost hundreds of dollars to attend, and don’t even offer lunch.  They managed to pull off a beautiful space, food, and even video djs and an open bar.

I wonder to what degree freeculture’s networked proximity to techies and lawyers simplifies some of the logistical nightmares that often plague organizers. It just seems like they are able to organize with relative ease, as the communications media and social capital are intuitive and readily available. Good thing for everyone they are using their super-powers for the greater good 😉

Longer term, they were consciously trying to create something bigger than a one-time event. I was impressed at the purposeful scaffolding of the infrastructure meant to sustain this conversation now that the conference is over. Many gatherings only figure out at the event that they want to keep talking afterwards. The OVC crew did a great job of setting up and using a wiki, and some sensibly divided mailing lists, to seed a healthy after-party.

Connecting the Dots


What happens when the Swine Virus breeds with the Avian Flu?

Pigs Fly, of course.  Welcome to the end of time. I’m off to collect a few debts.

This latest data point is the most recent in a string of bizarre crimes that I have been tracking in my capacity as a double agent (in the Kierkegaardian sense).

Consider these events from last year’s news:

  1. The Aqua Teen Hunger Force Mooninite Bomb Scare in Boston
  2. The Rat poison in the Cat and Dog Food triggering an FDA recall (it only affected wet food)
  3. And, the E. coli in the spinach resulting in CNN journalists looking directly into the camera and instructing kids not to eat their green leafy vegetables.

Given everything I know about reality, there is only one man who is sinister and brilliant enough to execute this sequence of terrorist punchlines…

Good riddance to the age of Biblical Myth. Welcome to the Age of Marvel and DC.

Now, if only I could figure out which organization this intentionality emerged from.

Intentional Energy

This past weekend I took part in an exciting panel on internet labor at the Left Forum, but the highlight of the weekend was serendipitous. I attended a salon hosted by Reality Sandwich:

Electrical energy is political energy is personal energy is metaphysical energy: A discussion on technological tools and political policy for opportunities of human freedom and evolution.

While I am usually open to edgy ideas, and am quite comfortable entertaining (and sometimes visiting) alternate realities, I certainly wasn’t expecting the treat I encountered. Ryan Wartana orchestrated an amazing experience, successfully interweaving the metaphors of energy and power through the lenses of the physical, personal, political, and metaphysical.

Ryan has a PhD in chemical engineering and has been researching and working with nanotechnology and batteries for over a decade. Professionally, he is the CTO of the alternative energy startup iCel Systems and is quite committed to alternative renewable energy solutions. He was on the East Coast participating in a conference in DC on Advanced Battery Manufacturing, and swung through NYC to connect with other segments of his network.

To give you a sense of the atmosphere, Ryan spoke against the backdrop of a revolving slideshow of sacred geometry (which I have also studied), whose forms and principles have inspired many of his artistic/scientific inquiries and designs. He has worked with researchers growing self-repeating and self-replicating nanostructures, and it soon became clear how inhabiting this domain influenced his thinking. Some large problems can be effectively broken into tiny parts, but it can be difficult to imagine how to practice this w/out radically adjusting our perspective.

I left the lecture with a much clearer vision of what an intelligent energy grid, or an “internet of energy”, is all about. Basically, the current energy grid is unidirectional, and on-demand. It is a centralized distribution system, much like last century’s mass broadcast media. If we distribute a dollop of storage and intelligence to the network, many amazing possibilities emerge. The analogy with integrated circuits was quite provocative – our current grid is like a circuit board w/out any capacitors on it. iCel and companies like them are trying to become the Cisco of the Energy platform, and create integrated energy systems. So, individuals could draw power when it’s inexpensive (at night) and produce power and return it to the grid, or even to their peers – bittorrent style.
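A toy sketch of the idea – purely illustrative, with made-up prices and loads – shows how even a small battery plus a greedy charging rule flattens a household's exposure to peak pricing, which is the "capacitor on the circuit board" intuition in miniature:

```python
# Toy model of a grid node with "a dollop of storage and intelligence":
# charge the battery when power is cheap (overnight), draw it down at the
# expensive evening peak.  Prices and loads are invented for illustration.

HOURS = range(24)
# hypothetical $/kWh: cheap overnight, a peak in the evening
price = [0.08 if h < 7 or h >= 23 else (0.30 if 17 <= h < 21 else 0.15)
         for h in HOURS]
demand_kwh = [1.0] * 24            # flat 1 kWh/hour household load (assumed)

def daily_cost(battery_kwh=0.0, rate_kw=1.0):
    """Greedy arbitrage: stock up when cheap, spend stored energy when dear."""
    stored, cost = 0.0, 0.0
    for h in HOURS:
        need = demand_kwh[h]
        if price[h] <= 0.10 and stored < battery_kwh:    # cheap: buy extra
            buy = min(rate_kw, battery_kwh - stored)
            stored += buy
            cost += (need + buy) * price[h]
        elif price[h] >= 0.25 and stored > 0:            # peak: draw down
            from_battery = min(stored, need)
            stored -= from_battery
            cost += (need - from_battery) * price[h]
        else:                                            # shoulder hours
            cost += need * price[h]
    return cost

print(f"no storage:    ${daily_cost(0.0):.2f}/day")
print(f"4 kWh battery: ${daily_cost(4.0):.2f}/day")
```

The same greedy rule, run peer-to-peer, is what would let a neighbor's surplus cover your peak – the bittorrent-style trade Ryan was describing.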

The power of distributed networks to improve redundancy and resilience, and reclaim lost bandwidth and capacity is well known in information technology and network theory. Google has even been distributing their physical power storage in their servers. But the possibilities Ryan illuminated intuitively clicked for me – and I trusted his vision, even though he is in the battery business 😉

These distributed energy systems are vital, and starting to happen. I wondered about connections with the electric car venture Better Place. Their system is immensely promising, but riddled with uncertainty. Will their hardware interoperate with other power providers, or will people be locked in? Will their customers be better off relying on a centralized transportation provider, instead of remaining independent and relatively autonomous? Will there be provisions to mitigate the surveillance threats their network poses? When you mash good batteries up with Better Place (with a bit of peer-to-peer pressure), many of these problems melt away.

We also talked a lot about the importance of energy awareness, giving way to energy responsibility, leading to energy intentionality. These ideas actually had a lot to do with my presentation at the Left Forum, and are hinted at in my take on Free Energy.

The talk left me invigorated and hopeful. NYU’s ITP has had some great projects on energy awareness, and there is even a prof at Columbia who wants to rig up a dorm with energy monitoring. And some of our work at CCNMTL with the Earth Institute and the Millennium Villages might benefit from these insights and connections as well.

I attended the Reality Sandwich event hoping that a dose of creative consciousness expansion would offset the heaviness of struggle at the Left Forum. What a refreshing contrast to feeling trapped inside an inescapable system. We can imagine our way free.

Banish the word struggle from your attitude and your vocabulary. All that we do now must be done in a sacred manner and in celebration. “We are the ones we’ve been waiting for.” — Hopi Elder

Semantic Connections

It’s been almost 2 months since I participated in the intense and spectacular conference/discussion/seminar on the Changing Dynamics of Public Controversies (CDPC). Since then, numerous municipal dailies have declared bankruptcy, and the question of the future of journalism has gone mainstream – with urgency. (Four print-media-collapse stories on the front page of yesterday’s business section of the nytimes!)

Here are a few of the better analyses that have been buzzing around inside the halls of the Columbia J-School:

So, Why teach journalism if newspapers are dying? One of our Deans has a plan to revamp the curriculum and Keep J-School Relevant, but it hasn’t gained much traction yet. 🙁

And, while we’re on the topic of the Academy, are its institutions next? Maybe not, but the printed scholarly monograph is certainly on the chopping block.

I keep coming back to the generatives described in Kevin Kelly’s Better than Free (skip the giddy utopic intro):

These eight qualities require a new skill set. Success in the free-copy world is not derived from the skills of distribution since the Great Copy Machine in the Sky takes care of that. Nor are legal skills surrounding Intellectual Property and Copyright very useful anymore. Nor are the skills of hoarding and scarcity. Rather, these new eight generatives demand an understanding of how abundance breeds a sharing mindset, how generosity is a business model, how vital it has become to cultivate and nurture qualities that can’t be replicated with a click of the mouse.

Could this be the perspective needed to recalibrate the profit compass and find the Sasquatch of sustainability?

The Tweets of War

The tragedy currently unfolding in the Middle East deserves a more powerful and direct response than I am prepared to deliver. The media coverage is very difficult to sift through and judge, as the reporting has been marinated in propaganda campaigns more sophisticated than anything I have personally experienced. Many people I talk to seem to be unwittingly “on message”, faithfully echoing the sound bites they have been fed on a steady basis.

I am connected to people with very deep convictions about this issue. I know this is a divisive wedge issue, but I am not sure how many social networks contain the extremes it feels like mine does.

I have not found it productive to weigh in on the questions of morality and entitlement, but I have come across a few pieces that I think do a good job discussing the long term strategic stakes, from a more detached and rational perspective. I feel like I can more successfully engage staunch supporters of Israel by challenging the long term wisdom of these attacks, not their justification.

Proportionality And Terror

Even Israeli newspapers and human rights groups are far more nuanced, vocal and divided than the homogenized dichotomy I am subjected to in the US.

At times like these, I also return to read the wise Kabbalistic reflections of the Meru Foundation’s Stan Tenen and his series Making Peace with Geometry (and the recent How Mother Nature Keeps the Peace).

Meanwhile, this is all occurring in an environment awash in participatory media, and I am trying to track the online tactics emerging around this showdown. This is a decent run-down on the cyber-debate the gaza conflict has precipitated. However, beyond the viral video games (newsgaming as the new political cartoon? Raid Gaza!), facebook status updates (Qassam count, support gaza), interactive visual propaganda, and virtual protests (which I predicted last year), there is something different happening that is really worth noting.

Computer users are installing software on their computers to donate their computing power to attacking the opposing side’s infrastructure. Conceptually, this is a bit like donating your computer cycles to search for aliens with Seti@Home, except for destructive purposes. Technically, you are installing a trojan on your own computer, so that it can be taken over on demand to join a botnet army of other zombie computers and launch a Denial of Service attack. (And there really is no way to verify the actions or intentions of these combatants. For all we know, the Russian mafia might be working both sides of the conflict to capture credit card numbers.)

Denial of Service attacks are pretty serious. If the infrastructure you are attacking runs mission critical services, like hospitals, airports, traffic lights, or whatever, suddenly you might actually be participating directly in the destruction, not just debating about it.

It’s scary and important to recognize the dark side of collaboration – the side that leads to lynchings and mob justice.  I have to wonder whether the constant visceral immersion in this carnage has anything to do with its spillover beyond the Mediterranean – NYC police officers have even been injured in this conflict.


Update (11/28/09): I have learned that the World Flag image I used in this post was created by the world flag project “to raise awareness and funding for non-profits and individuals working in the areas of education, world health, human rights, and the environment.”  I had chosen this flag since during these internet campaigns it is common for people to declare their allegiance to one side or another with a national flag, but I was unaware there was an organized project behind this fabulous image.
