Looking for a good domestic robot? According to www.ns-5.com, the world's first fully automated domestic assistant is about to go on sale. The Nestor Class 5 robot is six feet tall, looks vaguely human, and can do all sorts of housework, from washing-up to managing your finances. There's just one catch: the website promoting this amazing gadget is a tease, a clever piece of advertising from 20th Century Fox to promote its movie, "I, Robot," which is released in the UK next month.
"I, Robot" is a sci-fi action thriller starring Will Smith, although the real star is the beautifully rendered NS-5 robot. Smith plays a detective investigating the murder of a famous scientist working for the fictional US Robotics company. Despite the fail-safe mechanism built into the robots, which prevents them from harming humans, the detective suspects that the scientist was killed by an NS-5. His investigation leads him to discover an even more serious threat to the human race.
Isaac Asimov wrote more than 500 novels and short stories, and invented the term "robotics." His grasp of science fact -- he gained a PhD in chemistry -- lent rigour to his science fiction. "I, Robot" is loosely based on a collection of Asimov's earliest stories, most of which revolve around the famous "three laws of robotics" that Asimov first proposed in 1940. In those days, barely two decades after the word "robot" had been coined by the Czech playwright Karel Capek, other writers were still slavishly reworking Capek's narrative about nasty robots taking over the world. But Asimov was already asking what practical steps humanity might take to avoid that fate. The solution he came up with was to program all robots to obey the following three laws:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
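Stated like that, the laws sound almost mechanical, the sort of thing you could type straight into a robot's control software. A deliberately naive sketch in Python shows what such a direct encoding might look like, and where the difficulty hides; every name and field in it is a hypothetical simplification invented for illustration, not anything from Asimov's stories or the film.

```python
# A deliberately naive sketch of Asimov's three laws as a strict priority
# ordering over candidate actions. Every field of Action is a hypothetical
# simplification that hides the genuinely hard problems: recognising humans,
# understanding orders, and predicting the consequences of actions.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool        # performing it would injure a human
    allows_human_harm: bool  # choosing it would let a human come to harm
    obeys_order: bool        # consistent with the orders the robot was given
    preserves_self: bool     # keeps the robot intact


def permitted(actions):
    """Return the actions a naive three-laws robot would allow itself to take."""
    # First Law: discard anything that injures a human or, through inaction,
    # allows a human to come to harm.
    survivors = [a for a in actions
                 if not (a.harms_human or a.allows_human_harm)]
    # Second Law: among what is left, prefer actions that obey human orders.
    if any(a.obeys_order for a in survivors):
        survivors = [a for a in survivors if a.obeys_order]
    # Third Law: among what is left, prefer actions that preserve the robot.
    if any(a.preserves_self for a in survivors):
        survivors = [a for a in survivors if a.preserves_self]
    return survivors
```

The strict ordering of the three filters mirrors the hierarchy Asimov built into the laws; everything else, from spotting a human to foreseeing harm, is hand-waved into the comments.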
These three laws might seem like a good way to keep robots in their place. But to a roboticist (another of Asimov's neologisms) they pose more problems than they solve. Asimov was well aware of this, and many of his short stories revolve around the contradictions and dilemmas implicit in the three laws. The sobering conclusion that emerges from these stories is that preventing intelligent robots from harming humans will require some very sophisticated artificial intelligence.
For a start, the robot would need to be able to recognise humans and not confuse them with chimpanzees, statues and humanoid robots. This may be easy for us humans, but it poses considerable difficulty for robots, as anyone working in machine vision will tell you. To follow rule two, the robot would have to be capable of recognising an order and distinguishing it from a casual request, something well beyond the capability of contemporary artificial intelligence, as those working in the field of natural language processing would attest. To follow any of the three laws, the robot would have to determine whether and to what extent any of them applied to the current situation, which would involve complex reasoning about the future consequences of its own actions and of the actions of other robots, humans and other animals in its vicinity. But why stop at its own immediate vicinity?
The first law, as stated above, includes no clause restricting its scope to the immediate surroundings of the robot. A robot standing in the Arctic might reason that it could take food to Africa and thereby save a child from starvation. If it remains in the Arctic, the robot would, through inaction, allow a human to come to harm, thus contravening the first law. Even if artificial intelligence advanced far enough to allow the three laws to be implemented in a real robot, the problems would be far from over, because the laws provide plenty of scope for dilemmas and conflicting orders. Conflict between one law and another is ruled out by the fact that the three laws are arranged in a hierarchy. But what about conflict between multiple applications of the same law?
For example, what if a robot were guarding a terrorist who had planted a timebomb? If the robot tortured the terrorist in an attempt to find out where the bomb had been planted, it would break the first law; but if the robot didn't torture the terrorist, it would also break the first law by allowing other humans to come to harm.
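Fed into the naive sketch above, reusing its hypothetical Action class and permitted() filter, the scenario looks like this; the labels are illustrative assumptions, not judgments a real robot could currently compute.

```python
# The timebomb dilemma, in terms of the earlier sketch: both available actions
# violate the First Law, so the naive filter leaves the robot with nothing it
# is permitted to do.
torture = Action("torture the terrorist", harms_human=True,
                 allows_human_harm=False, obeys_order=False, preserves_self=True)
do_nothing = Action("do nothing", harms_human=False,
                    allows_human_harm=True, obeys_order=False, preserves_self=True)

print(permitted([torture, do_nothing]))  # prints []: no permissible action
```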
Such dilemmas are referred to as "choice of evil" problems by moral philosophers, and even they find them difficult to deal with, so it would be unrealistic to expect robots to find them any easier. To enable robots to avoid getting caught on the horns of such dilemmas, they would need some capacity for moral reasoning -- an "ethics module", perhaps. That would be hideously complex compared to Asimov's three laws.
If these speculations seem far-fetched, the day when they become pressing issues may be closer than you suspect. Computer scientist Bill Joy is not the only expert who has urged the general public to start thinking about the dangers posed by the rapidly advancing science of robotics. Greenpeace issued a special report last year calling for this matter to be debated as vigorously as the issues raised by genetic engineering.
We should not be too alarmist, however. While the field of robotics is progressing rapidly, there is still some way to go before robots become as intelligent as the NS-5. As Chris Melhuish, a leading British roboticist, admits: "The biggest threat our robots currently pose to humans is that you can trip over them."