Encouraging the Panopticon

April 3rd, 2003 1:29 AM

Anil Dash has done a wonderful job at solidifying the traditional science fiction idea of a data recorder enabling a permanent record of your life. In this case, he wants an iPod able to record his daily life while allowing specific conversations to be associated with people in his address book. He calls this his personal panopticon:

It seems that we’ve already accepted a future where we’re all celebrities. If we’re going to accept the negative implications of that reality, then we’d better get working on creating some positive implications to go along with it. The personal panopticon is one of the positives.

My favorite depiction of this sort of thing is Kim Stanley Robinson’s AI. And Anil is right: the technology is here. But the really amazing things, in my opinion, are going to come when we get better at annotating these recordings. Associating time frames in the audio stream with a person or place is a start, but I’m even more excited about what happens when the annotation starts to happen in more depth and in real time, as the MIT wearable folks have been doing for a number of years.
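The basic annotation idea is simple enough to sketch: tag spans of the day’s audio stream with the people and places they involve, then look them up later by person. This is a minimal illustration only; the class and field names are all invented, not any real iPod or address-book API.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A tagged span of the day's recording (offsets in seconds)."""
    start: float
    end: float
    people: set = field(default_factory=set)   # address-book names
    places: set = field(default_factory=set)

class DayLog:
    """Hypothetical log of one day's annotated recording."""
    def __init__(self):
        self.annotations = []

    def annotate(self, start, end, people=(), places=()):
        self.annotations.append(
            Annotation(start, end, set(people), set(places)))

    def involving(self, person):
        """Return every annotated span that includes this person."""
        return [a for a in self.annotations if person in a.people]

log = DayLog()
log.annotate(3600, 4500, people={"Anil"}, places={"cafe"})
log.annotate(9000, 9300, people={"Anil", "Thad"})
print([(a.start, a.end) for a in log.involving("Anil")])
# → [(3600, 4500), (9000, 9300)]
```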

Specifically, some of the anecdotes that Steve Mann gave in his OSCON talk several years ago were very interesting. Anil talks about granting access to his panopticon data to the people who were involved in a specific conversation or event. The MIT folks, by doing more than just recording audio, have gotten to the point of sharing their entire panopticon databases with each other. And since, for them, this is an interactive real-time process, their panopticon can alert them to database entries that are related to ongoing experiences.

So, I guess what I’m trying to get at is that this sort of thing gets even more interesting when you add real-time interaction and annotation to the recorder. Give me an iPod that can record my day, yes. But also give me a Twiddler (or some such input device, possibly Bluetooth-enabled) to interface with it, so that I can annotate my day with information for later retrieval and correlation, and so that my iPod can let me know when it has relevant sound bites (or text clippings) that pertain to my current situation.
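The “let me know when it has relevant clips” part could be as crude as tag overlap: describe the current situation with a few tags and surface any stored clip sharing at least one of them. A hedged sketch, with entirely made-up clip data:

```python
# Hypothetical store of previously annotated clips.
clips = [
    {"id": "clip-1", "tags": {"anil", "panopticon", "ipod"}},
    {"id": "clip-2", "tags": {"oscon", "wearables"}},
    {"id": "clip-3", "tags": {"ipod", "bluetooth"}},
]

def relevant(current_tags, min_overlap=1):
    """Clips sharing at least min_overlap tags with the situation."""
    hits = []
    for clip in clips:
        overlap = clip["tags"] & set(current_tags)
        if len(overlap) >= min_overlap:
            hits.append((clip["id"], sorted(overlap)))
    return hits

print(relevant({"ipod", "wearables"}))
# → [('clip-1', ['ipod']), ('clip-2', ['wearables']), ('clip-3', ['ipod'])]
```

A real version would presumably rank by overlap size and recency rather than alert on every single shared tag, but even this naive matching conveys the real-time correlation idea.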

Update: It occurred to me that the talk at OSCON might have been given by Thad Starner and not Steve Mann, but I can’t seem to find any references. The Refereed Papers from 1999 (the year the talk was given, I believe) don’t mention it (they don’t even have a table of contents), and Google is silent on the issue with simple queries.