Are we headed for mass destruction? Are technological innovations laying the foundation for machines to take over the world? These are the questions plaguing Sun Microsystems co-founder and chief scientist Bill Joy.
Joy, who was the principal designer of the Berkeley version of Unix and one of the lead developers of Java, last made a big splash in the press in 1998, when he was all pumped up about Sun's new networking technology Jini. Having spent most of his life working on ways to make computer networks work better, he was preaching the gospel of a technology that Sun said would enable truly user-friendly, plug 'n' play computing.
But these days Joy is engaged in an altogether different kind of crusade: He wants us to slow down the technological race, and rein in technologies -- specifically nanotechnology, genetic engineering and robotics -- that he believes could ultimately endanger human life as we know it. Joy spelled out in a recent issue of Wired magazine his fear that modern technologists may unthinkingly be heading down the same path as the physicists who built the atomic bomb.
At a recent Stanford University forum entitled "Will Spiritual Robots Replace Humanity by 2100?" he squared off against optimists, including computer scientist Hans Moravec, who argued that the future's technological organisms -- even if they wipe out humanity -- represent a higher form of evolution that we should be proud to usher in. In the midst of such claims, Joy's call for limiting or regulating research in nanotech, genetic engineering and robotics played well to the crowd of about 1,000 who fought for space in the auditorium, but his fellow scientists remained unconvinced. That's not stopping Joy, however, who is determined to get people to listen and take action.
Celera Genomics announced last week that it has finished sequencing the human genome, moving the debate about genetic engineering one step further from abstraction toward reality. Given your views on the potential dangers of genetics, nanotechnology and robotics, how do you think this new discovery and others should be controlled?
We have to do something. Because otherwise -- within 20 or 30 years, probably sooner -- if we give everybody personal computers that are a million times as powerful, they'll have the ability to manufacture whatever they can design. We'll have an untenable situation, because crazy people can design new diseases and build them on their own computers. Somebody has to take responsibility for that and figure out a way to not let that happen.
I mean, the people doing genetic engineering are doing really good stuff. It's just that I don't believe they've dealt with the confluence of their field and Moore's Law. If the tools they had were just like the ones they always had, it wouldn't be a problem. But look at Celera, finishing the sequence. How long was it supposed to take? I haven't been keeping track, but they got it done ahead of schedule. And yet, I still don't see that we have adequate mechanisms or the psychological desire to put any restrictions in place.
Then how do you remedy that situation, with genetics and the other technologies? How do you set up a system that keeps companies like Celera in check?
I'm not worried about the commercial companies. Some people worry that companies could do something bad. But I just don't want to get to the point where crazy people can do it. In the nuclear age we had two organizations with unlimited power, the Soviet government and the U.S. government. Others have nuclear weapons, but not at a level that could destroy civilizations.
We may have many companies that have the technology in their labs to do very disastrous things. That's probably difficult to manage, but it's not inconceivable. But every person in the world? That's inconceivable. We can't ethically go there. That's not acceptable. We can't let ourselves get to that situation. There's this great quote from Woody Allen: "More than any other time in history, mankind faces a crossroads: One path leads to despair and hopelessness, and the other to total extinction. Let us pray we have the wisdom to choose correctly."
So in some sense people despair of having to go down a path where they would limit anything, and they feel hopeless to change it -- that's one of Woody's two paths. On the other hand, going out and giving everybody all the information, then letting any crazy person destroy things, is completely morally unacceptable. But we have a choice, and this is a real choice. We don't face these very often.
Should we put Microsoft ... I mean, Microsoft? They're a bunch of people that don't have much personal ethics, and they break the law all the time, so we kind of futz around. We do this to them or we do that -- but it's not like society's going to crumble. Certainly if we continually fail to enforce the laws of reasonable conduct, our code of ethics in business will decline to the long-term detriment of our society, but the whole thing isn't going to collapse because we failed to enforce the clear law in this case.
But something like this seems much harder. We're talking about real consequences here. We're not talking winners and losers. In my mind, everyone's a loser on this path.
So then, who controls the technology, and saves us from ourselves? An international body? Scientists?
Well, the problem is that there are at least seven ways in which this technology differs from the nuclear technology of the 20th century, all of which make the problem of regulation harder.
What are those differences?
The first thing is that everything's happening so fast. Moore's Law is happening so fast. We think things aren't possible, but in fact they're already going to happen, for sure. The second is this amplification, where self-replication lets an individual do something that has a wide impact. For example, [in February's denial-of-service attacks] one person probably brought down the Web; that's an example of replicated code. Then there's the irreversibility of the actions. Even if you create a nuclear winter [and] blow up the cities, it's not the end [of all life on Earth]. But if you release something deadly into the environment, you probably can't call it back. And they are commercial technologies, not military, so they're really loose in the commercial sector, and they have enormous value. These things are going to create millions of dollars in wealth. It's no surprise that the NASDAQ is going crazy. The technologies are very democratic: If it's information, all you need to design it is a PC -- maybe one that's 10,000 or 100,000 times faster than the ones we have today. But it's not like having a plutonium or uranium refining facility -- that's not something everyone's going to have. The information revolution fundamentally democratizes access to the tools.
The other thing is that these things really blur the lines between machines and life. We understand biology and we understand machines, but these things that these technologies are making are different. Say in nanotechnology, it's machines that reproduce. They're living. And if we mess with genetic engineering, we have to understand [cells] at a mechanistic level. And the last thing is, these things are so powerful ... we really can't foresee what the outcome will be. We can know where there's a situation of great danger, but to predict anything with specificity is extremely difficult because there's so much progress being made in parallel on so many fronts. It's like Dolly -- that was a complete surprise. The motto of the age has to be, expect to be surprised; forget fiction, read the newspaper. So each of these seven things makes the problem more difficult.
Couldn't companies simply limit access to dangerous products, thus keeping them from the "crazy people"?
Companies clearly have to behave, but I don't think companies can prevent individuals from getting the technology. An example of something we might have is some kind of secret patent. We may decide that it's good to study some virus, in the hopes of coming up with a cure for something. But maybe we don't want everyone to know everything about it. And yet we want the company to have intellectual property rights. So we need some kind of patent that isn't published.
I'm not suggesting that's the solution. We'd have to think that through. That could actually make the situation worse. I'm not sure how you'd have the patents be largely secret and still enforce them. It's scary because it sounds like some of these trials where you don't know what you're accused of; it sounds like Kafka.
But see, the problem we have here is that if we're trying to protect against something that's so extreme, we can't rule things out. If we want to have all this incredibly powerful technology, we may have to give up some other things. Fundamentally, individuals can't be free to use anything that's that powerful. Therefore, something has to be restricted: their actions, the information itself, the tools to do it.
Haven't you spent most of your life pushing technology to its boundaries? Given your new apocalyptic views of nanotechnology, robotics and genetics, how has your own work been affected? Has your relationship to technology changed?
I've become more troubled about the progress in these fields because while there are tens of thousands of good things we can do -- enormously wonderful wealth creation, longer lives, all these things -- I don't think we've dealt with the downsides, and we aren't on a course to deal with the risks. So that's obviously a depressing thing to believe. It's kind of strange to write a piece and have your fondest wish be that someone would stand up and tell you that you're totally wrong. It's not a fun message to be delivering.
Did anyone come forward and allay your fears?
No. In terms of the analysis of the problem, there isn't much dispute. I think some people say we can't do anything, let's just go faster -- the fatalist approach. Other people say it's all Darwinism, whatever happens happens. That's really the same argument. The other thing people say is that we have to seek truth at all costs -- that's the Classical thing. But then, if you start looking at the ethical consequences, and try to say what you can do if you believe that something should be done, it isn't easy. Because it involves people. Management would be easy if it didn't involve people, right?
Then, do you still feel justified in your own work -- on networks and Jini, which could be considered an aid to the super intelligent machines that you fear?
I'm actually not working on Jini or the network aspect of it right now. I'm trying to make machines more reliable. I'd like to have systems that work, that are administered in a way that when you touch them, they don't break. Every time I touch anything I break it, because change has a higher probability of causing an error.
I don't think [my work on reliability] contributes to the problem because the problem I'm concerned about is offensive uses of these things, and that doesn't require reliability. It requires overwhelming force. Defense requires more care. For example, a burglar can go down the street and look for the house where the burglar alarm is broken; I have to have one that works if I expect my house to be safe. It's the defense that needs the reliability.
So improvement in software reliability improves our ability to deal with these situations to the extent that we can abate them with technology. But in the end, I don't think there are technical fixes to a lot of these problems anyway. We're going to need to have some ethical and political and other fixes.
I'm interested in the kinds of systems that provide services to people who can't tolerate failure -- like air traffic control, hospitals and public safety. We need more and more systems that work safely and reliably in those environments, especially as medicine gets more computerized.
I don't see that it's contributing to this other problem. If I felt it was in any substantive way, then I'd really have a problem. But the rate of progress of the speed of personal computers is largely out of my hands. I sent copies of the article to Andy Grove and Gordon Moore [of Moore's law], but that's all I can do.
Wired News ran an article saying Silicon Valley didn't pay much attention to your warnings -- but personally, how would you gauge the reaction? Did anyone from Sun respond?
I sent copies to all the vice presidents and directors and other people who have equivalent titles at Sun -- that's a few hundred people -- and I would say I got 25 responses, all of which were supportive. I suppose you could say that maybe some people didn't like it and didn't say anything because they were afraid. That's not really the culture, though; people tend to question everything. I have one old friend who works at Intel who sent me mail saying he thought it was all way, way farther out. Then I sent him back a couple of references trying to move him gently to where I think the truth is. Now I just got mail from him a half-hour ago saying his daughter was thinking about going to school in nanotechnology, and now he's talking to her about it.
What about Hans Moravec's argument that we, as humans, are just parts of this evolutionary whole and that the advent of techno-humans represents a higher form of evolution -- one that we should be honored to bring about, even if they kill us off?
Nah. There are two things going on there. One: Science has always been fatalistic, saying we'll just pursue truth and whatever happens happens. Moravec is espousing, really, the Darwinian world. But humans haven't been subject to Darwinism for a long time. In fact, you can make a strong argument that evolution stopped when we came along, because evolution is no longer genetic, it's cultural. So he's arguing for bringing back something that has not been the driving force -- having it emerge as a side effect of capitalism, created without anybody asking for it -- and then just sort of doing surgery on ourselves. It might happen because we're not being careful, but it's an awfully crazy thing.
Aristotle said, "We're clearly rational beings," and we clearly are. We didn't set up our present system without thinking. The Constitution and the whole economic system we have were designed. So I'm hopeful, if only because I believe we can choose what's in our interest independent of whether it maximizes someone's notion of profit or destiny. We can choose to have a social safety net, even though business would be more efficient if it didn't have to pay those costs and people simply starved in the streets. We can decide that's not acceptable, and we have in most Western societies. That's not a totally Darwinian choice. The totally Darwinian choice would just be to let it go. But we've decided that's not the kind of society we want to live in.
Do you think we'll actually reach the point where these technologies are considered dangerous enough to be regulated? Will Congress ever address the issue?
I think we have to be realistic that it takes a long time ... We're a very long way from any prospect of any action. And we would have to deal with the scientific community's enormous desire for lack of interference, and businesses' enormous desire for a lack of interference, and governments' desire to not do anything. Everybody's pretty happy with the status quo at this point. It's going to take some real leadership, and it's going to take time to develop.
Let's just hope no one has the breakthrough to wild self-replication. Luck may be an important aspect here. We might get lucky. Let's hope we do.