Philosophy

Now Playing: Nothing but the whole truth

I recently learned about a fascinating trend in litigation that is quietly transforming courtroom testimony, and is spreading fast and far - video depositions. I talked with a consultant who helps attorneys process video depositions. In the courtroom, attorneys are juxtaposing live testimony with segments from depositions. Video clips of witnesses reinforcing (or contradicting) themselves are far more powerful than merely reading back the transcript. The courtroom has always been about performance, but these videos have taken this to a new level, as savvy lawyers manipulate appearances and emotions. Increasingly, all depositions are being recorded, just as they are transcribed.

Apart from the ways that courtroom proceedings are being transformed, I am also intrigued by the software that is undoubtedly in development to support these operations. In addition to conventional A/V support, working effectively with hundreds of hours of video involves archiving, indexing, distributing, editing, and clipping. At about a day or two of testimony per witness, and dozens of witnesses per trial, the numbers add up pretty quickly.

As cases accumulate, and multiple associates begin working with and analyzing video, law firms will quickly recognize the desirability of networked, collaborative video annotation environments. Some large firms (and their vendors) may have already begun developing solutions. However, the consultant I spoke with was storing video locally on a laptop hard drive and tracking it with an Access database, so opportunities are knocking. Without a doubt, many of the tools that will be highlighted at the upcoming Open Video Conference (OpenCast, Kaltura, and CCNMTL's Mediathread come to mind) have overlapping features and requirements.

Once again the organizational digital divide looms, and I am deeply concerned that only the high-end corporate law firms will be able to invest in the competencies and capacities to make this work. Meanwhile, the impact law firms (along with journalists and social scientists) will be playing catch-up, handicapped by this powerful new differential. I wonder how quickly this practice will spread? Oyez, Oyez, Oyez!
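For a sense of what the core of such a collaborative annotation tool might look like, here is a hypothetical sketch of a minimal data model. Every name, field, and URL in it is invented for illustration, not drawn from any actual product; a real system would add search, access control, and media storage on top.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    start_s: float   # offset into the deposition video, in seconds
    end_s: float
    label: str       # e.g. "contradicts trial testimony on timeline"

@dataclass
class Deposition:
    witness: str
    case: str
    video_uri: str   # pointer into networked storage, not a laptop drive
    clips: list[Clip] = field(default_factory=list)

    def clip(self, start_s: float, end_s: float, label: str) -> Clip:
        """Flag a segment of the deposition for later playback."""
        c = Clip(start_s, end_s, label)
        self.clips.append(c)
        return c

# Usage: an associate flags a segment for the courtroom playlist.
depo = Deposition("J. Doe", "Case 09-1234", "https://firm.example/depo/doe.mp4")
depo.clip(3720.0, 3785.5, "reverses earlier answer about the meeting date")
```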

Parabolic Intentions

Mystical traditions depict a singularity in consciousness occurring when all of humanity is united in the same state of mind. Our choices will determine whether we arrive at this state by achieving global peace, or take a detour through another World War. In the limit, our shared reflective awareness is a possible consequence of globalization and has been linked to the promise of world peace.

Meanwhile, Princeton University's all-but-unheard-of Noosphere project has begun tracking meaningful correlations in random data that suggest an awakening of global consciousness. The project has distributed physical networked "eggs" which generate a steady stream of random numbers. Upon the occurrence of events of global significance the streams suddenly become a lot less random (actually immediately before these events, but that's another mystery; a toy illustration of the kind of statistic involved appears at the end of this post). Unprepared to even postulate a mechanism for the correlations they have established, the project minimally suggests that our collective intentions and emotions have the power to influence and affect our physical reality.

A wise mentor of mine thinks we might be able to accelerate this transformation if we all took the simple step of pausing, contemplating, and reflecting every day at noon. Similar to the Play As Being practice I sampled a while back, the personal potency of such a discipline is dramatic. Noon is a convenient time to sync up, but the coarseness that time zones introduce creates a margin of error. Imagine if large numbers of people welcomed the sun every morning - a wave of transcendence would (en)circle the globe. Some kind of psychic beacon?

The idea that our technologies mirror our realities is common, though contemplating our reflection within these mirrors is less so. Our global communications system is not only the planet's nervous system; through computation and representation, it is becoming a two-way mirror into our collective psyche. In the past I have appreciated how distributed research has given way to tools which help aggregate many snowflakes of data into a meaningful snowbank. Flickr and Delicious taught us how to conduct distributed research on photos and hyperlinks, but Twitter has helped popularize aggregation around arbitrary structured data. We are monitoring elections, and each other's sexual habits. And the data doesn't even need to be particularly well structured, as this research on the pulse of the nation's mood demonstrates.

Now that we have glimpsed our own collective moods, can we design the biofeedback loops for us to become collectively aware (in addition to self-aware)? To put this another way, could we learn to actually control the coordinated output of the Noosphere eggs, instead of merely tracking their correlations with our global state? If we could collectively broadcast one syllable into the universe, what would it be?
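As promised above, here is a minimal sketch of how "less random" might be quantified. It assumes, as the Princeton project describes, that each egg reports per-second sums of 200 random bits; the statistic itself is a generic chi-square-style measure of deviation, not necessarily the project's exact analysis.

```python
import numpy as np

def network_variance(trials: np.ndarray) -> np.ndarray:
    """Per-second deviation statistic for an (eggs x seconds) array of
    trial sums, where each trial sums 200 random bits.

    Under the null hypothesis each sum is Binomial(200, 0.5), i.e. mean
    100 and variance 50, so each z-score is ~N(0, 1) and the summed
    squares behave like a chi-square with one degree of freedom per egg.
    """
    z = (trials - 100.0) / np.sqrt(50.0)   # standardize each trial
    return (z ** 2).sum(axis=0)            # chi-square-like statistic per second

# Example: 10 eggs, one hour of per-second trials under pure chance.
rng = np.random.default_rng(0)
trials = rng.binomial(200, 0.5, size=(10, 3600))
stat = network_variance(trials)

# A sustained excursion of `stat` above its expected value (the number of
# eggs) is the kind of deviation the project reports around global events.
print(stat.mean())  # ~10 under pure chance
```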

Reconstruction time again

This week the j-school was abuzz with the conversation successfully provoked by the publication of a detailed, comprehensive report, complete with recommendations, on how to save the endangered species of professional journalists. One of the report's two primary authors is my professor Michael Schudson, a thoughtful scholar and a great teacher who is eminently approachable for advice. My friend Dr. C.W. Anderson was the research assistant on the project, and I know he worked pretty hard to make this happen, though he didn't go on a world tour with the authors. The report was solid and it managed to gain a lot of attention and stir up a bit of a ruckus. The recommendations seemed reasonable to me, though not quite as radical as I would have hoped…

I have been involved in quite a few conversations around the future of journalism this year, and while there has been a great deal of conversation around how the forms of organization around journalistic production are changing, there has been very little talk about how what's being produced is changing too. I am reminded of Bob Stein's predictions about the Future of the Book. One of his central riffs is his epiphany that the digital book is much less about ebooks and multimedia, and much more about a shift away from the book as a static, finished, complete object. He imagines a new emergent form in perpetual beta, with multiple authors, and around which revisions, annotations, and communities form. Any of his talks from the last few years probably picks up on this theme.

While many journalists are talking about producing articles using new media forms, the discussions remind me a bit of the early days of cinema, when they used to film plays. I'm imagining a shift in journalism towards interactive storytelling, cumulative aggregation, and distributed collaboration. We have begun to see hints of experiments along these lines in projects like Times Topics, Swiftapp, and Mapping Controversies, but this NPR project profiled last year in CJR really hits the mark: So Cool: How an economic weather map changed the climate. I think these strategies might easily apply to prose, not just data, interactive graphics, and maps.

Comparing journalism with education, will journalism only use new media to create the equivalent of a jazzed-up, one-way lecture? What does interactive storytelling even look like? How will we teach the next generation of journalists to create works that are designed to be picked up, re-appropriated, and re-mixed?

With these ideas in mind, I would have loved to see some recommendations in this report designed in anticipation of this future, not merely to prop up yesterday's decaying models. The patchwork of the future can best be supported by encouraging greater transparency, open licensing, and a culture of collaboration. What about attaching open licensing mandates to this foundation support? Mandating the sharing of primary sources? Teaching journalists of the future to share, and to learn from their readers? These aren't all policy recommendations, but I think they need to be thought through and woven into this conversation.

PS - While the future of journalism may be difficult to discern, the future of newspapers suddenly seems pretty clear ;-)

The Interdisciplinary Kissing Problem

Last week I participated in the architecture school's visualization seminar and was treated to a mind-blowing presentation by Tony Jebara, a Columbia computer scientist. Jebara is a young associate professor who researches machine learning, graphs, and visualizations, and is also the chief scientist at CitySense.com. His lab "develops novel algorithms that use data to model complex real-world phenomena and to make accurate predictions about them." They also work on improving the readability of massive volumes of multi-dimensional data, and are currently focusing on making sense of networks of people and places (take a wild guess who else is interested in their work). CitySense is an application that runs on mobile devices and from their location data…
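The talk didn't leave me with the specifics of their algorithms, but to give a flavor of what "networks of people and places" might mean, here is a toy sketch, entirely my own invention and not Jebara's method: build a bipartite person-place graph from hypothetical location pings, then project it onto pairs of people who frequent the same places.

```python
from collections import defaultdict
from itertools import combinations

# Invented (person, place) location pings.
pings = [("ann", "cafe"), ("bob", "cafe"), ("ann", "gym"),
         ("cal", "gym"), ("bob", "park"), ("cal", "park")]

# Group people by the places they visit.
visitors = defaultdict(set)
for person, place in pings:
    visitors[place].add(person)

# Edge weight = number of places a pair of people share.
edges = defaultdict(int)
for people in visitors.values():
    for pair in combinations(sorted(people), 2):
        edges[pair] += 1

print(dict(edges))  # {('ann', 'bob'): 1, ('ann', 'cal'): 1, ('bob', 'cal'): 1}
```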

Interview: Christopher Mackie on Knight's Hyperlocal Gambit

Last week I reflected on the Everyblock.com acquisition. Since then, Knight's journalism program director has blogged about their perspective on the sale, and some great conversations have continued. I have also had a wonderful opportunity to discuss the purchase with Christopher Mackie, a program officer at the Mellon Foundation. Chris is the Associate Program Officer in the Research in Information Technology program and is closely involved in Mellon-funded software initiatives. Here are some excerpts from our conversation:

JB: Thanks so much for taking the time to share some of your thoughts on the recent purchase of Everyblock. As you know, Everyblock is a foundation-sponsored, open-source journalism startup that was recently acquired by msnbc.com. Even though the Knight Foundation mandated that all the software they funded was released under an open (GPLv3) license, the future openness of this application is now uncertain. As an important funder of many valuable open source software projects, I am wondering if you could share your reactions to this news? How do you feel about the outcome? Did the deal take you by surprise?

CM: Hi Jonah – good to talk with you! Before we start, let me be clear about a couple of things. First, I don't speak for the Mellon Foundation on this, so all I can share are my own views. Second, I'm by no means the most knowledgeable person around when it comes to intellectual property issues. In fact, I can find several people who know more than I do without even leaving the building at Mellon. What I do have is a particular perspective on IP issues that has been developed in large part from my work with our information technology program. I hope that my perspective is useful, but I wouldn't want anyone confusing it with either an official Mellon perspective or some sort of consensus view among experts. As far as I can tell, consensus only exists among IP experts on issues that no one cares about.

That said, as I follow the conversation, what appears to be happening with Everyblock is that a number of people are seeing for the first time some issues that have been seen before in other parts of the software space. In the process of thinking through the implications of those developments, they're reinventing old arguments, most of which are insufficiently nuanced to be valid. Eventually, they'll work it out, but right now, many people are still looking for too-simplistic answers.

JB: This moment is such a great learning opportunity to teach grantmakers and journalists some really important lessons about intellectual property, and the complexities of open source software, community, and culture - is there anything specific you think we can learn from this transaction?

CM: Rather than try to parse the many issues individually, let me just suggest a couple of basic principles that I use when I'm trying to advise projects on licensing issues:

First, "the context is more important than the license." The debate over BSD/GPL tends to take place at a very abstract, ideological level. This is the wrong level: when it comes to licensing, I believe that you really need to get down and grub in the dirt. Licensing decisions are almost always made better when they're made in a carefully contextualized fashion. The single most important contextual dimension I know concerns the "organizational complexity" of the product. That's my own, made-up term to describe the need to integrate your project with other organizational systems, human and software.
Organizationally complex software requires significant adaptation or customization in most installations – which implies the need for significant vendor involvement in many installations. A good example of an organizationally complex system is something like a financial system, which tends to have to connect to all sorts of other software and to interact with all sorts of human workflows. Good examples of organizationally simple software are things like a Web browser or a word processor, which ought to work out-of-the-box without any customization or integration.

If you have an organizationally complex product, BSD licenses tend to work better than GPL. Why? BSD licenses don't scare off the vendors who have to poke around the insides of the product in order to support it, and who worry that their private IP may be compromised by accidental contact with a GPL'd product's innards. I've seen the arguments about whether this is actually a valid concern, by the way, and I'm not particularly invested in learning the right answer, if there even is one. As long as vendors believe or fear it to be true – and many do – then it might as well be true. Without vendors, it's hard for an organizationally complex project to thrive, so BSD tends to win out in those sorts of projects.

A second dimension concerns the degree of "market power" held by the users. Market power depends on the ability of users to recognize themselves as having shared interests and then to act on those shared interests. A user community that has market power can issue a credible threat to punish a misbehaving vendor; one lacking market power cannot. This often isn't a simple determination; for instance, consider Mozilla. At the core of the Mozilla community, as with most open source communities, is an intense, dedicated group that sees itself as having shared interests and clearly has the will to punish someone who attempts to misuse the Mozilla IP. But do they have the ability? After all, they're only a tiny fraction of all Mozilla users. The rest are a widely distributed, diffuse group that would never imagine themselves as having much in the way of common purpose, beyond the desire to have a free Web browser. Which constituency matters more in calculating market power? It almost certainly depends on the context.

Some people object to the phrase "market power," preferring terms like "strength of community" or "trust." I'm not too worried about what one calls it, but I will say this: once you get past the rhetoric, it mostly boils down to the community's ability to deliver a credible threat to punish a malfeasant vendor. If the user community ceases to value the project enough to want to defend it against vendor malfeasance, or ceases to be able to act together effectively to deliver that defense, then, however much they value the project individually, it is unlikely to stay open no matter the license.

There are other dimensions to think about, too; for instance, a project having multiple vendors is safer than one with only a single vendor, or none, because non-colluding vendors tend to act in ways that keep each other well-behaved. But those are the biggest two, in my experience so far. Earlier, you brought up the Sakai and OpenCast projects, both of which have been funded by us (and by other foundations, such as the William and Flora Hewlett Foundation, as well).
I believe that these two characteristics are why Sakai and OpenCast, as well as other community source projects, are able to use BSD-style licenses (they actually use the Educational Community License, or ECL, which is almost-but-not-quite the Apache license). Community source software projects produce organizationally complex products deployed by a coherent community of institutions willing and able to exercise market power if needed. For instance, the community of higher education institutions seems to have no trouble understanding their common interest in keeping Sakai's IP open, even if they're not Sakai users themselves – and as a group, they seem to have the will and ability to punish vendors that attempt to misbehave. Most vendors sell more than one product into these institutions, so they stand to lose more than they can gain from bad behavior on any single project like Sakai. The result: there is virtually no evidence of significant vendor malfeasance in any of the community source projects, despite the use of a license that in theory allows any vendor to close the code at any time. The closest you can find is the Blackboard patent dispute – which is a challenge to the ownership of the IP, not its licensing, and in which Blackboard has been careful to steer clear of any direct threat to the Sakai community. But would every vendor's good behavior continue if the community stopped caring about Sakai? I seriously doubt it.

On the other hand, if you have a product which is organizationally simple, as well as having a relatively powerless user community, then get thee to the GPL, because the temptations to steal and close the code just become too great for some vendors to resist. We've seen some examples of that recently, too. Still, don't believe that the GPL will protect you if your community cannot or will not. If the community is weak enough, nothing can really protect you.

Second, "IP ownership trumps IP licensing." Some of the commentators on Everyblock that I have read so far are circling around this point, but none has yet followed the logic all the way. All the debate over licensing tends to obscure the reality that final power lies in ownership, not licensing. For a surprising number of situations, licensing is little more than a red herring. If I own the code, I can issue you a GPL, someone else a BSD, and yet another license to a third party – take a look at the Mozilla licensing scheme sometime, for an example. If I'm also responsible for updating the code, I can change the license for all of you at any time simply by issuing a new version. Sure, you can still use the old version under the old license, but if I really want to make it tough for you to keep using the old version, there are ways. Finally, as you're seeing with Everyblock, when someone owns the code privately, there's nothing that prevents someone else from buying the code – often by buying the firm itself – and changing the licensing terms.

I have no insight into MSNBC's plans for Everyblock. Maybe they'll close the code; maybe not. Maybe they'll keep something open but close the commercial services they build on top of it – I don't know. As your commentators have noted, no one seems to know – and that's part of the problem with privately owned but open-licensed code. You just never know. That's one reason why I tend to be wary about the "commercial OSS" model, no matter what license it uses.
In many commercial OSS projects that I've seen, even the GPL is effectively just a cover for what is to all intents and purposes a closed code-base, because the owner/vendor is the only entity on earth that has any realistic likelihood of supporting or extending or developing the code further. Ask someone in the MySQL community how protected they feel by their license – or ask the people using Zimbra how they expected to fare if Microsoft bought Yahoo. It's not about whether the current owner is good, bad, or ugly; it's about the fact that you can never know whether it will be the same owner tomorrow. That's a lot of uncertainty on which to base a mission-critical technology choice.

JB: So, given the diverse range of contexts you describe, what specific strategies have you deployed to mitigate these risks?

CM: Good question – and it's important to emphasize the word "mitigate," because there are no guarantees and there's no such thing as absolute effectiveness. One thing we do in our program is to use IP agreements (a contract with the owner of the code to be developed) that require any transfer of ownership to be to an entity which must also agree to the terms of our IP agreement. In a sense, we make the ownership viral, whether or not the license is viral. That's not a perfect solution, but it appears to be working for us so far.

It also helps that we make our grants to non-profit organizations, which can't be bought the same way you can buy a private or publicly held firm. When for-profits are involved in our grants, which sometimes happens when grantees decide to contract with for-profit developers, my program (Mellon's Program in Research in Information Technology) has always required that the non-profit be the IP owner. We are not alone in this; for instance, when several major technology corporations – all for-profits – decided to share and protect some of their own intellectual property in an open environment, they didn't trust it to a for-profit, but instead created the Eclipse Foundation, a non-profit that owns the Eclipse Project IP. Ditto the Mozilla Foundation. Still, it bears repeating that just putting your IP into a non-profit mindlessly doesn't eliminate the risk, because it matters how the non-profit is structured and governed: nothing says a non-profit can't be malfeasant, too, if in somewhat different ways.

JB: Do you think that the Knight Foundation was swindled? Did they get outfoxed by msnbc.com, or do you think they are happy with this outcome?

CM: I have no knowledge about what the Knight Foundation intended – has anybody bothered to ask them? [ed. note: this conversation took place before Knight made a public statement] I think it would be foolish simply to assume that the grant makers have been outfoxed by this development: it may have been exactly what they wanted, or just a risk they decided beforehand was worthwhile to run. Keep in mind, too, that MSNBC hasn't said or done anything about closing the code so far. Even if the Knight Foundation did want perpetual openness and the strategy wasn't perfect, there's still a chance that they'll get what they wanted. All that's really happened here is that the sense of security held by at least some members of the Everyblock community has been shaken by the purchase news. But it was always a false sense of security; at this moment, as far as I can tell, nothing objective about the openness of the project has actually changed.
JB: Do you have any closing thoughts about this deal, or what you think grantmakers and open source advocates can learn from it?

CM: If Everyblock serves to help some members of the openness community get past their ideological blinders and recognize that IP ownership and licensing decisions are subtle challenges with relatively few simple, definitive answers, it will have done some good. After all, even the best source code is relatively ephemeral, but we can hope that such wisdom will last forever.

JB: Thanks so much for your time and wisdom. I know a lot of people who were quite surprised by this turn of events, and it feels like we all need a crash course in IP law /and/ sociology to navigate the intricacies of this political economy. Even veteran lawyers and free software evangelists are often confused by many of these complexities. I really hope that this case and your analysis will better inform future work of this type. Good luck keeping it open (and real)!

CM: Thanks very much. I hope what I had to say is useful.

Shekhinah Power

Is it possible that our ancestors harnessed the power of electricity? It's logically possible that electric motors pre-dated steam engines, and tantalizing writings combined with circumstantial evidence suggest that the ancients understood more than static electricity and simple batteries. This question is yet another reformulation of the regard we hold for the wisdom of the ancients, and whether their models and perspectives might offer anything meaningful to today's scientists and philosophers. Even the alternative researchers who investigate these claims often feel the need to invoke Atlanteans, Martians, or time travelers as the deus ex machina to explain their origin.

A recent constellation of events and ideas in my life (MiT6, Intentional Energy, Faith's Transmission) has brought me back to this question. If the ancients had developed a theory of everything, how might they have encoded this message for transmission into the future? Would their theory of everything incorporate subjectivity and consciousness, unlike our generation's leading contenders? The following free association provides a glimpse of what a message like that could look like.

The Remover of Obstacles

On last weekend's visit to the Shivananda ashram I chanted away life's worries while imagining an elephant effortlessly clearing obstacles from its path. Om gam ganapataye namaha! [*]

The elephants returned this weekend on my visit to Boston. I spent a wonderful afternoon biking around the city, inhaling the streets, waterways, and parks and internalizing its expanse. I visited the ICA, a great new museum designed by the same crew that just finished New York's new High Line park. The main attraction at the ICA was the Shepard Fairey exhibit, but I was much more drawn to "Acting Out: Social Experiments in Video"

Faith's Transmission

Well, it's been two months since I participated in MIT's Media in Transition conference (MiT6), but the event is still vividly fresh in my mind. The conference was really amazing. It attracted a diverse mix of theorists and practitioners, academics and professionals, and folks from many walks of life. This conference I tried to go to talks where I "didn't belong" - hoping to learn from disciplines I don't regularly encounter. It was a great strategy; I often gravitate towards talks on subjects I know something about, wanting to hear the presenter's take, but venturing beyond my usual horizons was much more fun.

Aram Sinnreich and I presented a paper on Strategic Agency in an Age of Limitless Information (abstract, slides), and I am really happy with how things turned out. Hopefully, we'll work on polishing this paper up to submit to a journal soon, though I don't really know where we should submit yet.

The videos for the main plenary events are now up and I am looking forward to clipping the little hand grenades I remember throwing during Q&A. This panel on Archives and History (my question starts @ 1:35:15) wasn't the only conversation about archiving, but it was fairly representative of the perspectives. It's too bad MIT World does not provide a mechanism to address a point in time in their videos (like our recently liberated VITAL tool allows), so you'll have to advance the playhead manually to hear me out (a sketch of how such time addressing could work appears at the end of this post). It's basically a riff on - Why archive? - the beauty of the sand mandala, and the effort required to actually delete something….

The conversations were very similar to some that we had back in May '07 at the Open Content conference, but now I think I can finally articulate what's been bugging me about these conversations. With the help of Ben and John Durham Peters (we shared a bus ride to/from the conference), I realized that archiving can be thought of as a transmission, for anyone, into the future. I also realized that ordinarily, when we look to the past, we use history to help us understand ourselves better. The presumption that future generations will actually care about us for our own sake strikes me as narcissistic (narcissism and new media has surfaced on this blog before). I imagine they will want to use the messages that we send them to help themselves understand themselves better. So, to archive purposefully, the question becomes - how can we best help the future?

To the archivists who claim we don't have any idea what questions the future will be asking, so we had better save it all - I think I know what the future will be trying to understand about us. They will likely be trying to figure out what on earth was distracting us while we let the planet die! We were busy devoting our resources to saving every last copy of American Idol and Big Brother while Gaia screamed in agony for help.

So, how can we increase the signal-to-noise ratio of the messages we send into the future? Without somehow reducing the message to the critically problematic golden record on the Voyager spacecraft, or its successors? I guess the Long Now Foundation is thinking along these lines, and I have always envied David Vakoch's job title (Director of Interstellar Message Composition)… The conference helped me realize that Vakoch and the Long Now have a really similar task - but I don't know how many archivists conceive of their task as Intergenerational Message Composition. Perhaps we need to spend even more time curating? Indicating in our archives why we think they were worth saving? And what's the most important message we can send into the future?

Not that it matters much longer, as I really do believe we are embarking on The End of Forgetting (see our conference paper for more details). Shifting frames for a moment, what if the ancients had a really important message to send us? Their theory of everything, or the equivalent of E=mc^2. How would they have attempted to transmit it? When I discussed these ideas w/ my friend Rasmus he recommended I start up a consulting firm specializing in Future Relations. ;-)
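Returning to the time-addressing gripe above: here is a minimal sketch of what linking straight to 1:35:15 could look like, assuming a player that honors a convention like the W3C Media Fragments `#t=` suffix (which MIT World's player did not). The URL is hypothetical.

```python
def timecode_to_seconds(tc: str) -> int:
    """Convert an H:MM:SS timecode like '1:35:15' into seconds."""
    seconds = 0
    for part in tc.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

# Hypothetical video URL with a media-fragment time offset appended.
base = "http://example.edu/videos/archives-and-history.mp4"
print(f"{base}#t={timecode_to_seconds('1:35:15')}")
# -> http://example.edu/videos/archives-and-history.mp4#t=5715
```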

Intentional Energy

This past weekend I took part in an exciting panel on internet labor at the Left Forum, but the highlight of the weekend was serendipitous. I attended a salon hosted by Reality Sandwich:

Electrical energy is political energy is personal energy is metaphysical energy: A discussion on technological tools and political policy for opportunities of human freedom and evolution.

While I am usually open to edgy ideas, and am quite comfortable entertaining (and sometimes visiting) alternate realities, I certainly wasn't expecting the treat I encountered. Ryan Wartana orchestrated an amazing experience, successfully interweaving the metaphors of energy and power through the lenses of the physical, personal, political, and metaphysical. Ryan has a PhD in chemical engineering and has been researching and working with nanotechnology and batteries for over a decade. Professionally, he is the CTO of the alternative energy startup iCel Systems and is quite committed to renewable energy solutions. He was on the East Coast participating in a conference in DC on Advanced Battery Manufacturing, and swung through NYC to connect with other segments of his network.

To give you a sense of the atmosphere, Ryan spoke against the backdrop of a revolving slideshow of sacred geometry (which I have studied also), whose forms and principles have inspired many of his artistic/scientific inquiries and designs. He has worked with researchers growing self-repeating and self-replicating nanostructures, and it soon became clear how inhabiting this domain influenced his thinking. Some large problems can be effectively broken into tiny parts, but it can be difficult to imagine how to practice this w/out radically adjusting our perspective.

I left the lecture with a much clearer vision of what an intelligent energy grid, or an "internet of energy," is all about. Basically, the current energy grid is unidirectional and on-demand. It is a centralized distribution system, much like last century's mass broadcast media. If we distribute a dollop of storage and intelligence across the network, many amazing possibilities emerge. The analogy with integrated circuits was quite provocative - our current grid is like a circuit board w/out any capacitors on it. iCel and companies like it are trying to become the Cisco of the energy platform, and create integrated energy systems. So individuals could draw power when it's inexpensive (at night), and produce power and return it to the grid, or even to their peers - BitTorrent style (the toy sketch below illustrates the storage half of this idea). The power of distributed networks to improve redundancy and resilience, and to reclaim lost bandwidth and capacity, is well known in information technology and network theory. Google has even been distributing physical power storage in their servers. But the possibilities Ryan illuminated intuitively clicked for me - and I trusted his vision, even though he is in the battery business ;-)

These distributed energy systems are vital, and starting to happen. I wondered about connections with the electric car venture Better Place. Their system is immensely promising, but riddled with uncertainty. Will their hardware interoperate with other power providers, or will people be locked in? Will their customers be better off relying on a centralized transportation provider, instead of remaining independent and relatively autonomous? Will there be provisions to mitigate the surveillance threats their network poses? When you mash good batteries up with Better Place (with a bit of peer-to-peer pressure), many of these problems melt away.

We also talked a lot about the importance of energy awareness giving way to energy responsibility, leading to energy intentionality. These ideas actually had a lot to do with my presentation at the Left Forum, and are hinted at in my take on Free Energy. The talk left me invigorated and hopeful.
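To make the capacitor analogy concrete, here is the toy sketch promised above. It is my own invention, not iCel's design: each node tops up its battery while grid prices are cheap and serves its own demand from storage at peak, shifting load off the expensive hours. All the numbers are made up.

```python
from dataclasses import dataclass

@dataclass
class Node:
    capacity: float       # kWh of local storage
    charge: float = 0.0   # current state of charge (kWh)

    def step(self, price: float, demand: float, threshold: float) -> float:
        """One hour of the toy policy: returns net kWh drawn from the grid."""
        if price <= threshold:
            # Off-peak: meet demand from the grid and top up the battery.
            draw = demand + (self.capacity - self.charge)
            self.charge = self.capacity
            return draw
        # Peak: meet demand from storage first, then fall back to the grid.
        from_battery = min(self.charge, demand)
        self.charge -= from_battery
        return demand - from_battery

# One invented day: cheap nights, mid-priced mornings/evenings, a pricey peak.
prices = [0.10] * 8 + [0.25] * 6 + [0.40] * 6 + [0.25] * 4   # $/kWh per hour
node = Node(capacity=10.0)

peak_load = 0.0
for price in prices:
    draw = node.step(price, demand=1.0, threshold=0.30)
    if price >= 0.40:
        peak_load += draw

print(peak_load)  # 0.0 - the battery absorbed the entire peak
```

The same node could just as easily push its surplus to a neighbor instead of back to the utility, which is where the BitTorrent comparison starts to bite.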
NYU's ITP has had some great projects on energy awareness, and there is even a prof at Columbia who wants to rig up a dorm with energy monitoring. And some of our work at CCNMTL with the Earth Institute and the Millennium Villages might benefit from these insights and connections as well.

I attended the Reality Sandwich event hoping that a dose of creative consciousness expansion would offset the heaviness of struggle at the Left Forum. What a refreshing contrast to feeling trapped inside an inescapable system. We can imagine our way free.

Disorganized thinking

As I've claimed previously, Big Pharma's crimes and cover-ups will soon make Big Tobacco's scandals look like jaywalking. AstraZeneca's Seroquel trial began last week, and the industry's criminal antics surrounding anti-psychotics are coming into sharper focus. Documents introduced as evidence are confirming that, like Eli Lilly with Zyprexa(Kills), AstraZeneca knowingly downplayed the fatal side effects of their toxic pills. They covered up the fact that Seroquel causes diabetes and massive weight gain, and have been gaming the drug approval process to expand the diagnostic reach of their drugs.

In a move which hits new lows, even for Pharma, documents introduced into evidence reveal sex scandals and conflicts of interest in the approval of Seroquel for treating depression, the burying of unfavorable studies, and deeper insight into the pathological cognitive dissonance underlying Pharma's logic. Get 'em while they're hot! 43_Exhibit 15.pdf