Space Ramblings

Isaac Asimov Never Said that Robots Cannot Harm People

This particular bit of idiocy keeps cropping up, including from people who should know better but for some bloody reason don’t. It comes to mind now because both SomethingAwful, in their 22 Most Awful Things About Science Fiction, and Mike Resnick, in his Baen’s column, trot it out.

The sad truth about the Three Laws of Robotics is that they are entirely removed from reality. Even in Asimov’s heyday the military was developing and deploying computer-controlled machines designed to kill.

Is how SA puts it. SA at least has an excuse, because it’s staffed by morons who pay 10 bucks just for the privilege of posting on their forum. Mike Resnick has much less of an excuse.

And let’s start with one that even non-science-fiction people like to quote: Isaac Asimov’s First Law of Robotics, which states that a robot cannot harm a human being, or through inaction allow harm to come to a human being.

Sounds sensible. Of course we’ll build that into every robot we ever make. Everyone knows that.

Uh…well, maybe not quite everyone. Seems to me that in 1991, the entire world saw a smart bomb, which is nothing but a robot in other-than-humanoid form, find its way down an Iraqi chimney. In 2003, we saw the Navy fire a smart bomb into the air while at sea, and the bomb, using its (non-positronic) brain, found its target 450 miles away. So much for First Law.

So much for Mike Resnick. Isaac Asimov did not postulate that every robot we build will have the Three Laws built into it. Asimov developed the concept of a fictional universe where intelligent robots are used in the home and for other consumer purposes, and where, to reassure people, those robots are built around the Three Laws. This is, of course, a fictional scenario. Asimov never believed that we’d never build robots to kill people. What he postulated, reasonably enough, is that if we’re going to use robots in the home and sell them to consumers, they need safeguards, and the Three Laws make a certain amount of sense in that regard. Asimov was a pacifist, and at no point in his novels set on Earth is there a war in progress.

Finally, talking today about robots in the Three Laws sense is irrelevant. We have nothing that remotely compares to the kind of intelligent robots Asimov was writing about. “Computer-controlled” is correct: we have computer-controlled bombs, but those computers are in turn controlled by men. Our computers are programmed systems designed for specific tasks. They are not artificial intelligences, and it is stupid to claim that the Three Laws are irrelevant because we’re building remote-guided weapons to kill people. It’s as silly as claiming that the trigger on a gun disproves the Three Laws.

If we ever do create robots with minds complex enough to exercise real choice, then we may well need something like the Three Laws in place. Ultimately, though, the Three Laws are a logical dilemma of the sort Asimov loved to unravel. They are not a work of ironclad futurism, and while SA may not know better, Baen’s Universe should.
