Meet the Robots

Over Memorial Day weekend I attended Fleet Week and made a few new friends. They happen to be robots, of the autonomous flying variety.

These little gadgets come in a wide range of sizes, from the Wasp, not much bigger than two hands, all the way up to the Predator, which is now armed with Hellfire missiles.

For the time being, these robots are unarmed, but they are all equipped with surveillance cameras. This explosion in optical feeds helps explain the urgency behind programs like Carnegie Mellon’s Informedia project (Is Anyone Watching Grandma?).

These craft already realize Ender’s Game scenarios, with high school dropouts controlling live ammunition in the Iraqi theater of combat from the safety of a bunker in New Mexico.

But even without carrying missiles themselves, these robots have become part of the weapons system. A soldier explained to me how the targeting systems for the large guns on the decks of US ships are now wired to the data feeds coming from the remote drones. With the click of a light pen, whatever the plane sees can be targeted by the ship’s guns, damage assessed and trajectories corrected.

Killer robots are a topic I have been thinking about for a while, but it was truly amazing to see these devices in person. In many respects this hardware is identical to the remote-control airplanes of the ’50s. The only major advancement is the software controlling them.

Here is the model that Bush is planning on deploying to patrol the Mexican border. How long before local law enforcement gets a few of these to play with? How many do they need before they start assigning them to track individual suspects?

Turtle Totems

Seymour Papert, the inventor of Logo, spoke at Teachers College on Monday, April 10th. I was lucky enough to hear him talk at a standing-room-only event. My former employer, Idit Caperton, studied with Papert, and MaMaMedia incorporated many of the principles he advocated.

His ideas, once stated, are remarkably simple and obvious – usually a mark of the good ones. He thinks we are teaching mathematics ass-backwards, and that we ought to introduce it the way it came about in the history of humanity – engineering first. This approach creates and fosters the demand for mathematics. Pyramids, navigation, and astronomy all drove the development of mathematics – and robotics and programming can provoke and instigate the need for mathematical abstraction in education. Sounds about right.

Interestingly, his experiments have led to anecdotal accounts of a reversal of the gender discrepancy in science and math. He claims that with an engineering-first approach, girls quickly excel beyond the boys, venturing past speed and destruction to the mastery of a much wider variety of skills with the systems.

He also demonstrated, in 10 minutes flat, how Logo can be used to teach 2nd graders the notion of a mathematical theorem (in creating any closed shape, the turtle rotates through a full 360 degrees – repeat N [fd 10 rt 360/N]) as well as how to introduce calculus (through the idea of the limit). He made the point that once a second grader is arguing – “that’s not a circle, it’s lots and lots of short lines” – you have already won…
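Out of curiosity, here is the same demonstration transposed into Python’s standard turtle module (a direct descendant of Logo’s turtle). A minimal sketch, with the function name and step size my own choices:

    import turtle

    def closed_shape(n, step=10):
        # n turns of 360/n degrees each: a full 360 in total, whatever
        # the value of n. That invariant is the theorem in action.
        for _ in range(n):
            turtle.forward(step)
            turtle.right(360 / n)

    closed_shape(4)    # a square
    closed_shape(36)   # “lots and lots of short lines”, already circle-like
    turtle.done()

Raising n makes the polygon indistinguishable from a circle, which is exactly the doorway to the idea of a limit.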

If Logo has a failing, it’s that it does not provide the necessary scaffolding for teachers other than Papert to teach effectively with it. I had been exposed to Logo in the past, but never really understood its appeal until Seymour started turtling.

Interestingly, Logo is far from irrelevant. Mark Shuttleworth’s ClassroomCoders curriculum imagines a Logo -> Squeak -> Python pipeline for educating the programmers of the future…

Seymour is also heavily involved in the $100 laptop project, which many consider to be one of the most important educational initiatives currently underway.

soft metamedia?

On April 7th I heard Lev Manovich talk at Pratt. I am a big fan of Manovich’s written work, and The Language of New Media was instrumental in my analysis of tagging.

Friday night Manovich showed us ideas in progress, and bravely admitted that they were not completely formed. He talked about describing the development of media in evolutionary terms: the next logical progression after getting all our media digitized (i.e., simulating physical processes within the digital environment) is the breeding and hybridization of those media. He claims that some of what we are now seeing in ‘moving graphics’ or ‘design cinema’ is actually a new form of media, distinct from what came before it, and he is interested in identifying the trunks and branches of this media evolution.

Plaid Itsu was a film he used as an example of a completely new form. Whereas multimedia was the assembly of multiple forms of media adjacent to each other, metamedia is the combination of these forms into a new unified whole. He pointed out the live-action photography, combined with traditional design aesthetics, combined with graphics, and so on. Not sure I bought it, but it was an interesting assertion.

The best question from the audience alluded to a longstanding disconnect between media and communication theorists. Manovich looks exclusively at the end product of the media being created, without examining the cultural and social conditions that led to its creation. There may be mileage in this rarefied approach, since some patterns are discernible, but it does seem to lack the depth to explain the creative dynamics and underlying motivations.

After the talk, I began to relate his line of reasoning to Arthur Young’s theory of process:

The Theory of Evolutionary Process as a Unifying Paradigm
Theory of Process Poster (too bad this isn’t really visible online)

which I first encountered through the work of the Meru Foundation:
letter matrix

It seems to me that the evolutionary forces Manovich is documenting conform to the trans-disciplinary evolutionary process that Young articulated. For what it’s worth, the hybridization of media that Manovich claims we failed to predict was foretold in this book on the MIT Media Lab, published in 1988.

Another New Kind of Science?

Last weekend’s Cultural Studies conference reminded me of a vicious cycle that many humanities-oriented researchers are being subjected to. Disciplines such as educational research, ethnography, anthropology, cultural studies, and sociology have effectively been colonized by the methodology of the social sciences, and they are being forced to play a numbers game they may not be suited for.

Many projects striving for credibility are subjected to the tyranny of statistics – forced to transform their qualitative information (interviews, transcripts, first-person accounts) into quantitative information through the process of coding. This reduction forces the data into buckets and creates a significant degree of signal loss, all in the name of a few percentages and pie charts.

Perhaps we have lost sight of the motivation for this reduction – the substantiation of a recognizable, narrative account of a phenomenon, supporting an argument. Arguably, the purpose of the number crunching is to provide supporting evidence for a demonstrable narrative. Modern visualization techniques may be able to provide that evidence without all the hassle.

True, this is not the only reason qualitative data is transformed into quantitative data, but advanced visualization techniques may provide a hybrid form that is more palatable to many of the researchers active in this area while remaining a credible methodology. It seems as if many people are being forced into coding and quantification when they aren’t thrilled to be doing so. The signal loss that coding introduces, all in the name of measuring, might be unnecessary if people considered data visualization tools that comprehensibly present the data in all of its richness and complexity, as opposed to boiling it down to chi-squared confidence levels. (And does this false precision actually make any difference? Does a result of 0.44 vs. 0.53 tell a significantly different story?)

In a thought-provoking post on the future of science, Kelly enumerates many of the ways new computing paradigms and interactive forms of communication might transform science. The device I am proposing here might lead to some of the outcomes Kelly proposes.

For a better idea of the kinds of visualization tools I am imagining, consider some of the visualization work on large email corpora coming out of the MIT Media Lab, or the history flow tool for analyzing wiki collaborations. Even the humble tag cloud could be adapted for these purposes, as the power of words and visualizing the state of the union demonstrate.
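To make that concrete, here is a minimal sketch of the data behind such a tag cloud. It counts content words across raw transcripts and scales the counts into display sizes; the stopword list, names, and scaling are arbitrary choices of mine, not any particular tool’s method:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to",
                 "in", "is", "it", "that", "i", "was", "but"}

    def tag_weights(transcripts, top=25):
        # Gather content words from every transcript, untouched by coding.
        words = []
        for text in transcripts:
            words += [w for w in re.findall(r"[a-z']+", text.lower())
                      if w not in STOPWORDS]
        counts = Counter(words).most_common(top)
        biggest = counts[0][1]
        # Scale each count to a 1-5 "font size" for a crude cloud.
        return [(word, 1 + round(4 * n / biggest)) for word, n in counts]

    interviews = ["I felt the class moved too fast for the material...",
                  "The class felt welcoming, but the pace was very fast..."]
    for word, size in tag_weights(interviews):
        print(word, "*" * size)

The point is that every word of the source survives into the picture; nothing is collapsed into a bucket or a percentage.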

Crucially, tools analogous to Plone’s haystack Product (built on top of the free libots auto-classification/summarization library) might help do for social science research what automated sequencing techniques have done for biology (when I was a kid, gene sequences needed to be painstakingly discovered “manually”).
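I can’t speak to libots’ internals, but the basic trick behind many auto-summarizers is easy to sketch: score each sentence by the average frequency of its words across the whole text, then keep the top few in their original order. A toy version, with all names and choices mine:

    import re
    from collections import Counter

    def summarize(text, n_sentences=2):
        # Split into sentences and count word frequencies over the full text.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))

        def score(sentence):
            words = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[w] for w in words) / (len(words) or 1)

        # Keep the highest-scoring sentences, presented in document order.
        top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
        return " ".join(s for s in sentences if s in top)

    print(summarize("Coding reduces interviews to buckets. Buckets lose signal. "
                    "Visualization keeps the signal. Signal matters."))

Real systems layer linguistics and training data on top of this, but even a crude version hints at how piles of transcripts could be triaged at sequencing-machine speed.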

The law firms that need to process thousands of documents in discovery and the commercial vendors developing the next generation of email clients are already hip to this problem – when will the sciences catch up?

For any of this to happen the current academic structure needs to be challenged. The power of journals is already under attack, but professors who already have tenure can take the lead here and pave the road for their students to follow.

“Michael, are you sure you want to do that?”

Pull over, KITT – you’ve just been lapped.

On Monday, November 14th, I attended a presentation by Sebastian Thrun, an AI researcher at Stanford University whose team recently won the DARPA Grand Challenge.

The idea behind the Grand Challenge is to accomplish something that seems impossible, along the lines of crossing the Atlantic or the X-Prize. DARPA had previously funded cars that drive themselves, but after numerous failures decided to turn the task into a contest and see how far teams would get in a competitive setting. Last year none of the entrants managed to finish the course, but this year five finished, four within the allotted time.

The difference between last year and this year was primarily improvements in software, not hardware. In fact, once the software has been developed, outfitting a car with the necessary equipment to drive itself (the perceptual apparatus of laser, radar, and video guidance; the GPS; the inertial motion systems; the general-purpose computing servers; and the fly-by-wire control systems) is, by Sebastian’s estimate, the cheap part. The robots are already here (some of them killer)!

Wikibases and the Collaboration Index

On October 27th I attended a University Seminar presented by Mark Phillipson. The seminar was lively and well attended, and Mark managed to connect the culture of wikis with their open source roots.

Sometime soon I plan on elaborating on ways in which software, as a form of creative expression, inevitably expresses the values of the creators in the form of features. But right now I want to focus on the taxonomy of educational wiki implementations that Mark has identified since he began working with them.

Here is how Mark divides up the space of educational wikis:

  • Repository/reference – e.g., Wikipedia
    • A website whose primary function is to create a repository of knowledge on a particular topic.
  • Gateway – e.g., SaratogaCensus
    • A website whose primary function is to collect, assemble, and present references to external sources.
  • Simulation/role playing – e.g., Holocaust Wiki
    • A “choose-your-own-adventure” style simulation/game environment.
  • ‘Illuminated’/mark-up – e.g., The Romantic Audience Project
    • An environment that provides tools for detailed exegesis of primary sources, where students are instructed to leave the source material unchanged and to add detailed commentary on supplemental subpages.

I think this taxonomy is accurate, but doesn’t completely capture one of the most interesting educational implications of wikis – the process of creating them.

In particular, I can think of a number of variations on the repository/reference wiki where the final products might all look similar, but where the “collaboration index” might differ substantially (for more on the popularity of the repository/reference, see “Database as a Symbolic Form,” Manovich, 2001).

Wikis are a very flexible tool, whose usage can vary from a personal publishing tool, to a simple content management system, to a collaborative authoring environment. Additionally, while wiki software doesn’t usually enforce a strict workflow, policy can be stipulated and adhered to by convention (as in Mark’s class, where the original poems were meant to be left intact).

Consider a few different applications of reference wikis in the classroom:

  • One-way Publishing
    • A simple means for instructors to publish and organize information for their class.
    • Examples include:
      • Instructional handbooks, assignment “databases”, completed examples
  • Collaborative Mini-sites and/or subsections
    • Exercises where individuals or groups work on subsections of a wiki which are combined and referenced within a single larger site
    • Examples include:
      • Students dividing large assignments amongst themselves, each sharing their own results with the group.
      • A site like the social justice wiki where groups of 3-4 students each worked on a reference element of the site.
  • Collaborative Websites
    • Sections of the site where everyone in the community is supposed to be contributing content
    • Examples include:
      • Common Resources, Glossary of Terms, and the larger information architecture and organization of the entire site.
  • Portals and Meta-tasks
    • Also, consider that due to their flexibility, many wikis end up being repurposed beyond their original conception and begin to serve as portals, where many meta-issues and conversations can take place beyond the assembly of the content itself. Some of these tasks include mundane administrative work, like students forming groups, coordinating assignments, taking minutes, and scheduling time.

While the end results of many of these collaborations might all look similar to each other, perhaps the differences in the process by which the content is developed are crucial in capturing part of what is happening with wikis in the classroom.
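As a thought experiment, that difference could even be quantified from the wiki’s revision history. Here is a hypothetical sketch of a “collaboration index”; the metric and every name in it are my invention, not an established measure. It scores each page by how evenly its revisions are spread across authors, from 0 (one-way publishing) to 1 (fully shared authorship), and averages over the site:

    from collections import Counter
    from math import log2

    def collaboration_index(page_revisions):
        # page_revisions maps page title -> list of revision authors.
        scores = []
        for authors in page_revisions.values():
            counts = Counter(authors)
            if len(counts) < 2:
                scores.append(0.0)  # a single author: no collaboration
                continue
            total = sum(counts.values())
            entropy = -sum((c / total) * log2(c / total)
                           for c in counts.values())
            scores.append(entropy / log2(len(counts)))  # normalize to [0, 1]
        return sum(scores) / len(scores)

    wiki = {"Glossary": ["ana", "ben", "ana", "carla"],
            "Syllabus": ["prof", "prof"]}
    print(collaboration_index(wiki))  # mixed site scores midway

Two wikis with identical final pages could score very differently on a measure like this, which is exactly the part of the story the taxonomy above leaves out.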

This analysis probably also has implications for archiving and for the reuse of a wiki environment in a classroom over time. If the act of creating the wiki is central to what the students are supposed to learn from the exercise, should they start with a fresh wiki every semester? How is the experience different when they are contributing to an existing system (or even have access to prior versions of the project)?

For more on this, see Mark’s comments on CCNMTL’s musings blog.

Fraternal Nearness

In his post Social agency and the intersection of communities and networks, Ulises Mejias expounds on the differences between communities and networks, and relates these concepts to the possibility of ontological nearness. The placement of communities within this continuum can be understood more clearly through the immediacy, intensity, and intimacy of the interactions.

This conceptual apparatus helps me begin to explain a phenomenon I have been thinking about for a while now. Part of the question can be thought of as: What motivates the open source developer? Why would someone who works full time, often writing code professionally, choose to volunteer their nights and weekends to the continued production of more code?

I think this question is an important one for the educational community, since if we could identify this source of motivation, we might be able to “bottle it” and recreate it within the classroom.

My experiences with the Plone community have given me some insight into this question, and I think that the phenomenon of open source projects would benefit from an analysis using the ideas proposed in Mejias’ draft.

While many people imagine that open source communities are purely virtual (the non-possibility of a virtual community notwithstanding), it is important to recognize the ways in which these networks of individual developers become communities. Open source projects typically use a variety of social software tools to communicate – email and mailing lists, websites, forums, discussion boards, blogs, and IRC, to name a few. They also often hold face-to-face conferences, and some projects even regularly arrange sprints (also).

Anecdotally, I have found it fascinating to observe a progression in intimacy, to the point where some people’s day jobs are just what they do between conferences and sprints. It is no secret that sprints and conferences help make these communities function, cementing interactions begun over mailing lists and IRC.

But an interesting comparison I would like to propose, which I think can also be described along the dimensions proposed by Schutz, is the similarity between an open source community and a college fraternity.

[Disclaimer: I was never in a college fraternity, so this analysis is partially speculative]

Fraternities (and, I suppose, the professional guilds and unions to which they might be related) are an example of an extended network/community that is disappearing from modern urban reality. Some people find these kinds of connections in religious congregations, but otherwise many of us have lost the extended networks of people we know, though not intimately or closely.

Like fraternities, open source projects typically have a steep gender imbalance; members often go by aliases or nicknames and develop internal languages, acronyms, and lore. The “project” or “organization” becomes an independent object of importance that members become loyal to and devote their time and resources to supporting.

Eric Raymond has written a bit on the motivations and structure of the hacker community. I have also heard alternate accounts of developer motivation, beyond status and recognition, that have to do with escape from “reality” and immersion in an environment the developer completely controls. There are many potent sociological, ethnographic, and anthropological research questions this touches on, many under active research (e.g., Effective work practices for Free and Open Source Software development, or Wikipedia’s research pages).

In summary, I think that Mejias’ framework is very useful, but it would benefit greatly from more examples that exercise the ideas. Perhaps we can work these categories into our ssa wiki.

Is anyone watching grandma?

On Friday I had a chance to meet with a group of Artificial Intelligence researchers at Carnegie Mellon University. They demonstrated a working technology, Informedia, which I would have guessed was at least 3-5 years off.

What was most incredible about this demonstration was the vivid view it offered of the trenches in which the information war is being waged. Like any power, technology can bend towards good or evil, and as this post points out, Social Software can be understood as the purposeful use of technology for the public good.

The surveillance possibilities that machine-based processing of video and film affords are mind-boggling and horrifying (for more on this angle, see my bioport papers or the Permanent Records presentation). At the same time, the kinds of research and machine-based assistance it enables, and even the ways this technology would change journalism, could all be harnessed for the public good.

Are transparency, openness, and free culture our best bet for steering and harnessing these powers productively?
