It is possible to loathe the desktop computer without being a Luddite. Just ask Neil Gershenfeld, co-director of the Things That Think consortium at the MIT Media Lab. His new book, "When Things Start to Think," envisions an idyllic world in which the computer has been whisked off the desktop. Instead, he believes, it will be integrated into every other object around us -- allowing us to forget about it completely.
As Gershenfeld writes: "In successful mature technologies, it's not possible to isolate the form and the function. The logical design and the mechanical design of a pen or a piano bind their mechanism with their user interface so closely that it's possible to use them without thinking of them as technology, or even thinking of them at all."
To this end, "When Things Start to Think" delves into the feasibility of MIT Media Lab projects like digital ink, wearable computers and computing shoes, personal fabricators, sensory furniture -- even computer processors in your wallpaper -- as part of a future in which everything "thinks."
Gershenfeld recently spoke with Salon about why this world could be a better place, and why these "thinking" computers will bear little resemblance to previous generations' vision of artificial intelligence.
"When Things Start to Think" is hardly the first book to emerge from MIT that forecasts the digital future. How is yours different?
The book represents a frustration with the visible dialogue over digital things, which to me seems to be simply missing the point. For example, there was a debate between Nicholas Negroponte and Sven Birkerts [author of "The Gutenberg Elegies"] at the time Nicholas wrote "Being Digital." Nicholas was saying digital things are good and digital books are good; and Sven was saying, no, you're implicitly illiterate -- beautiful printing is wonderful, and you can't match the joy of holding a real book.
They're both missing the really interesting point, which is that a book is technology -- in its day, it was the highest of technology. When somebody says they'd rather read a book than look at a computer, they don't understand that they're not being anti-technical. They are talking about technology, but what they're really doing is comparing specs. And the specs of the book are generally better than the specs of the laptop.
So I turned it around and asked: can new technology work as well as the old technology in the book? That is a very hard problem, but it's an increasingly tractable one. We can really start to ask that new technology work as well as what it presumes to replace.
Similarly, there's been so much coverage of the Web and the Internet. But in so many ways I see that as a small detail. It touches the small subset of human experience that you spend sitting alone in front of a computer screen, dragging a mouse around and clicking. It makes me sad that people who used to talk to each other now sit alone, browsing the Web. I'm much more interested in bringing the Net up to where people are, and letting things talk to other things so that people don't have to.
Some people argue that desktop computers and the Web are actually making our lives less fulfilled. In your book, though, you make the argument that by embedding computers more and more into things, our lives will become better. Is this essentially an interface issue?
I really do believe that we are in an awkward evolutionary phase in the development of information technology, where it's just good enough to be seriously intrusive, but not good enough to be really helpful. The path ahead is to bring so much of it so close to us that it disappears.
One example is that we're developing consumer-grade nuclear magnetic resonance sensing -- which means, for example, that your refrigerator could tell when your milk is turning bad. You could embed a computer in your shoe, and your refrigerator could tell your shoes that you're running out of milk. I don't want to know that when I'm leaving late for work in the morning; but when I'm walking home and I have time, and a store broadcasts the information that it has milk, that's a good time for the shoe to say, "By the way, you need milk and they have it there."
If you look at that little example, I got just the right piece of information at just the right time. It's almost like a revolution in artificial intelligence, but it doesn't require a breakthrough insight into consciousness. It just requires giving a computer access to the world and letting the simple pieces work together.
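To make the example concrete, here is a minimal sketch of that milk scenario as plain message passing between simple devices. The device names, message format, and the shoe's "context" check are illustrative assumptions, not an actual Media Lab protocol -- the point is only that nothing here is smart on its own; simple pieces exchange simple messages and the right information surfaces at the right time.

```python
# A minimal sketch of the milk scenario as plain message passing. The device
# names, message format and "context" check are illustrative assumptions,
# not an actual Media Lab protocol.

from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    topic: str       # e.g. "need:milk" from the fridge, "have:milk" from a store
    payload: str = ""

@dataclass
class ShoeAgent:
    """Holds reminders and only surfaces them when the context is right."""
    pending_needs: set = field(default_factory=set)
    context: str = "leaving_for_work"   # later switches to "walking_home"

    def receive(self, msg: Message) -> None:
        kind, _, item = msg.topic.partition(":")
        if kind == "need":
            self.pending_needs.add(item)                 # remember, say nothing yet
        elif kind == "have" and self.context == "walking_home":
            if item in self.pending_needs:               # right item, right time
                print(f"By the way, you need {item} and {msg.sender} has it.")
                self.pending_needs.discard(item)

shoe = ShoeAgent()
# Morning: the refrigerator notices the milk is low and tells the shoe.
shoe.receive(Message(sender="refrigerator", topic="need:milk"))
# Walking home with time to spare, a store broadcasts what it stocks.
shoe.context = "walking_home"
shoe.receive(Message(sender="the corner store", topic="have:milk"))
# -> "By the way, you need milk and the corner store has it."
```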
It's the bits meeting the atoms: you can't separate which is the hardware and which is the software, which is the analog and which is the digital. They really coexist. That's what I find most interesting about interfaces -- the world becomes so well instrumented that it can itself be the physical interface. It sounds scary to say I want to put a computer in your shoe and in your table, but what you get in return is capabilities like that.
If our world is totally embedded with computers that track our every move, what happens to information privacy?
There's this paranoid notion of eavesdropping, that the very walls in the room could be aware of what you're doing. But it's very important to understand that those are social, not technological, questions. Right now you really are digitally unprotected -- people have access to all the information that comes and goes, and you have no means to manage it. But there are very good cryptographic protocols for controlling your information.
I see privacy as becoming the social trade-off. For example, if I'm wearing a computer in my shoe and walk into a store with my identity turned "on," someone could come breezing up and say, "Welcome, Mr. Gershenfeld, we know you need a shirt -- it's over here, here's your size, and we'll give you a discount because you've given us such good demographic information and you spend a lot of money." And I'll go home and get junk mail. Or I can leave my identity "off" and use a zero-knowledge cryptographic protocol that lets me buy the shirt without the store learning a single piece of information about me. But it costs me more and I get worse service.
It's not up to me as a technologist to decide what that trade-off should be. But, crucially, it's up to me to give people a knob to control that, a means to control the information that comes and goes. And that is certainly missing right now.
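To illustrate what such a knob might look like, here is a toy sketch in which the shopper, not the store, chooses the disclosure level. The store behavior, prices, and profile fields are invented for illustration, and no real zero-knowledge cryptography is performed; it only shows the trade-off Gershenfeld describes between disclosure and service.

```python
# A toy illustration of the privacy "knob." The store, prices and profile
# fields are invented, and no real zero-knowledge cryptography is performed;
# the point is only that the shopper, not the store, sets the disclosure level.

from dataclasses import dataclass

@dataclass
class Shopper:
    name: str
    shirt_size: str
    identity_on: bool    # the knob: share who I am, or reveal nothing

def buy_shirt(shopper: Shopper, base_price: float) -> float:
    if shopper.identity_on:
        # The store learns who you are: personal service and a discount,
        # but your demographic data (and the junk mail) go with it.
        print(f"Welcome, Mr. {shopper.name} -- size {shopper.shirt_size} is over here.")
        return round(base_price * 0.85, 2)
    # Anonymous purchase: the store learns nothing, the service is generic,
    # and you pay full price.
    print("Shirt purchased anonymously.")
    return base_price

print(buy_shirt(Shopper("Gershenfeld", "M", identity_on=True), 40.0))   # 34.0
print(buy_shirt(Shopper("Gershenfeld", "M", identity_on=False), 40.0))  # 40.0
```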
In your book you write about the lab's projects that "the true sign of success would be that we cease to find the Turing Test interesting, and that [is] already happening." How has the Things That Think group evolved away from traditional visions of artificial intelligence?
I'm interested in human consciousness, and in ways machines can help us -- not in whether computers can be conscious. Artificial intelligence started with this notion of understanding consciousness. But Turing's seminal paper on machine intelligence -- the paper that first posed the idea of AI -- ends with a little postscript that everyone forgot, which says, in effect, "Now, all of this may not work. Maybe the thing to do is just give computers the best sensory experience they can have, let them experience the world, and maybe then that will make them useful." What I'm interested in is not the question of whether computers can be conscious, but how we can give them the resources to solve the problems we have.
AI came from the same era as artificial flavoring and artificial coloring and the notion that formula is better than breast milk. We've since learned, in all those other areas, just how much more interesting nature is than the attempt to replace it with the artificial. You can almost view my book as organic, anti-artificial intelligence.
The next evolutionary step, as you mention in your book, is this idea of mind-machine interfaces and embedding technology into humans themselves -- creating the real Donna Haraway-style cyborg. Do you think the notion of a world divided into the organic and the technological -- the notion of a "pure body" -- is an obsolete concept?
Putting stuff in people is increasingly possible -- helping people see who can't otherwise, for example. But I don't want to think about doing it for a very long time, even if technically we could. I don't want to have to take a nap if my brain computer crashes. I don't trust the technology we put outside people enough to put it inside people. We need to get better first.
The question about the pure body dates to a window in technological scaling that we're now leaving. In two decades, Moore's law is done, period -- we're down to one-atom transistors. That's over. As you start to approach those limits, you have to give up the notion that there's the physical world here and digital computers over there. In fact, the nature around us is an amazing computer -- we're surrounded by computation. It doesn't easily fit into a narrow, Turing-machine definition of computation, but there's all kinds of interesting information manipulation in our environment.
In the last year we showed the world's first working quantum computer -- it was just a tube of chloroform, and the evolution of the molecule did the computation. So when you talk about "the pure body" as if it were not a computer, you have to understand that your body is a wonderfully rich, interesting information processor. Looking back at the era of artificial intelligence, it was amazing hubris and prejudice to think otherwise -- to think that computation happens in this other place.
Your lab's best-known project is Wearable Computing, which lets people wear portable computers that use their eyeglasses as displays. In your book you complain that technology "forces an unacceptable decision" between using a computer and interacting with the world around you -- and you envision wearable computers breaking down this division and enhancing social interaction. But isn't there a risk that, by surrounding ourselves with a cocoon of computers, we'll end up paying more attention to them than to the people around us?
It's surprising how quickly people learn to use new technological functions. At one of our meetings we had badges with sensors and displays; you'd go over and dip the badges in buckets that had provocative statements (like, if you went to a desert island, what book would you bring?), and then, when you walked up to somebody, the badges would measure your intellectual proximity. A big green bar meant you agreed on everything; red meant you couldn't disagree more -- here was someone you knew would be interesting to talk to. Within a few minutes people had changed the handshaking gesture, exposing their chests as if to say, "I'd like to share my information with you and learn something about you." People adapted very quickly.
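As a rough sketch of how such badges might compute "intellectual proximity," consider the comparison below: each badge remembers the statements its wearer agreed with, and two badges score their overlap on a handshake. The scoring rule and the sample answers are assumptions for illustration, not the Media Lab's actual algorithm.

```python
# A rough sketch of how the meeting badges might score "intellectual proximity":
# each badge remembers the statements its wearer agreed with (by dipping it in
# the buckets), and two badges compare their sets on a handshake. The scoring
# rule here is an assumption, not the Media Lab's actual algorithm.

def proximity(agreed_a: set, agreed_b: set) -> float:
    """Fraction of dipped statements the two wearers share (0.0 to 1.0)."""
    if not agreed_a and not agreed_b:
        return 1.0
    return len(agreed_a & agreed_b) / len(agreed_a | agreed_b)

def display(score: float) -> str:
    # Big green bar: you agree on everything. Red: plenty to argue about.
    return "green" if score > 0.5 else "red"

alice = {"desert island book: Moby-Dick", "e-mail is ruining conversation"}
bob = {"desert island book: Moby-Dick", "e-mail is a great invention"}
print(display(proximity(alice, bob)))   # "red" -- someone worth talking to
```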
Right now the wearable technology is visually intrusive. When I was teaching students in the wearable computing project, it was at first disconcerting to see these cyborgs sitting in the classroom. But because they make me put my lecture notes on the Net, they can see the notes floating next to me and annotate them while looking at me -- which actually means I get more of their attention than from people who are just looking down at their books. The technology they're using annotates me with relevant context, so they can pay attention to me rather than splitting their attention.
One of the most striking implications of wearables is that your notion of neighborhood, of community, doesn't have to be physical. You can be interacting with people at a distance at the same time that you are interacting with people nearby. One of the things the people who work in wearables have found very quickly is that you need to provide feedback to your local environment about your remote environment -- helping other people perceive what you perceive.
It's one thing to have rather privileged MIT Media Lab students using wearable computers, or many of the other things you describe in your book, and it's another to have "normal people" using them. Considering the disparity between the high costs of what you're developing and the access and financial means of the rest of the world, won't many of these "things that think" just be creating a greater divide between technology haves and have-nots?
It's important to understand that there are two barriers to access -- one is economic, and one is support. Not only do you have to get the machine, but you have to know how to use it.
The economic barrier is one place where there are great prospects for hope, because even the corporate sponsors we work with need computers for pennies, not dollars. They need to get the computing to where the business is, and to do that it really has to cost pennies. We're starting to study how to print computers with commodity printing processes, so that a computer costs on the order of a sheet of ink-jet paper. These aren't complete working computers yet, just pieces of one -- but you can really start to think of computers as costing pennies; you could drop them out of an airplane because they're that cheap.
But that's only halfway there, because the computer has to be usable. Right now, Windows requires a tremendous amount of support infrastructure. The technology has to work no matter what.
We're doing a project right now with José María Figueres, the former president of Costa Rica, on jump-starting the technical infrastructure in Central America. We're taking shipping containers and outfitting them with electronic infrastructure, so you can drop them into a remote village and get satellite access, local RF access for a community, local computing and telemedicine. A lot of that is off-the-shelf technology, but the interesting research piece is that things like telemedicine need to work no matter what. You need to make stuff that is adaptive and indestructible, that costs nothing and fixes itself, so that it doesn't presume complex support technicians to make it work. That's as hard as making electronic ink. You've got to bring together both the economic sensibility and the intellectual sensibility before you can start to think about breaking down the barriers.
Right now, I'd venture to say that when most people -- even technologically advanced people -- look at some of the things that come out of the Media Lab, they seem more gimmicky than anything else. For example, you describe a digital cello [http://brainop.media.mit.edu/Archive/Hyperinstruments/hypercello.html] that you built for Yo-Yo Ma. Why would a cellist want that when a Stradivarius already exists?
To ask that question shows an interesting historical bias. Up until recently, a Stradivarius wildly exceeded the performance of a digital computer. We did that project for a couple of reasons. Yo-Yo Ma wanted to be able to do more -- control more sounds in more ways, enlarge the sonic space, be able to shape phrases. He would never ask your question, because he understands that the Strad is technology, but that it has serious limits he wants to overcome -- as long as we can give him something better than what he has now.
Letting Yo-Yo do more is a good answer, but it's only one. The next is that being in a room with a Strad is an amazing experience -- it's such a powerful instrument. Increasingly, we can turn the Strad into a software application, giving everyone on the planet access to it. But that's still not the real reason to do it. The real reason, in the end, is the social division: you can either spend a lifetime learning how to play the cello suites, or you can just listen to them. The object we made spans the continuum from a CD player to a Strad and beyond. You can listen to the cello suites, control the phrasing, conduct it, control the tempo, take over more and more degrees of freedom. We open up that continuum and lower the threshold for people to engage creatively.
In the same sense, the shoe computer seems like a joke, like something out of Maxwell Smart. But go through the specs. A laptop: I've got to carry it, plus a power supply, cables and adapters; I've got to sit down and open it up. The shoe: it's always with me, I don't have to look for it, I don't have to think about power supplies because it can be powered by the friction of walking, and I don't have to plug it into things -- I simply become the interface. All those specs make much more sense.
To ask if this stuff is a gimmick is really a question about the history of technology. In the name of technology people have done horrible stuff -- clunky wearable computers and clunky musical instruments that are just worse than what they replace. It's only interesting to do the things I'm describing if they really are better than what they presume to replace, and add new capabilities. But doing that isn't just a little bit of electronics hacking; it really taxes the resources of MIT. It's a very fundamental question.
When you look at something like the computer shoe, though, it requires that the computer be in the shoe, that the shoe communicate with lots of objects around you, that other people be wearing the same shoes, and that the shoes be affordable. How does all of that actually come about?
If I talk with presumptuous certainty about this future, it's because everything I talk about in the book works in the MIT Media Lab. These are things that really are in product development pipelines.
It's not going to get rolled out like the Euro. Initially the stuff is going to come out in a couple of places. One place is where there's a desperate reason to solve a problem -- sensory car seats that help save babies' lives [a product the MIT Media Lab recently developed to protect infants from airbags], or wearable computers that let mechanics read airplane repair manuals without going back to the library. Those are the niche markets where you don't ask that question; it just has to happen.
The second vector will be a new generation that's unwilling to live without these capabilities. The third vector is enabling the capabilities themselves -- making them possible. What's going to play out over the next decade is a gradual coming together, an interesting confluence of those three things.
We're made out of atoms, and we will be for the foreseeable future. What's interesting is how the digital world relates to the physical one we live in. The story for the next 10 years will be about breaking down the barrier between the bits and atoms. Eventually, it's going to become a very familiar notion, even though right now it seems elusive and quirky.