By Chet Yarbrough
Written by: Isaac Asimov
Narrated by: Scott Brick
Isaac Asimov stretched the imagination of science with a prescient understanding of robotics in his book I, Robot. Published in 1950, I, Robot offers a vision of the future that both attracts and repels its reader. Attraction comes from a vision of human liberation and possible salvation. Repulsion comes from a vision of robotic dependence and loss of human volition.
Asimov creates a character named Dr. Susan Calvin, a consulting psychologist for robot manufacturers and users. She is the go-to person whenever robots seem to malfunction, and the perfect choice for the job because she understands how a programmed robot “thinks”. Her role is to analyze aberrant computer function and correct unacceptable behavior. In Asimov’s story, unacceptable robot behavior inevitably originates with human error. In more modern terms, malfunction stems from either garbage-in/garbage-out programming or a misunderstanding of how a Central Processing Unit (CPU) works.
Three fundamental laws of robot programming are required in Asimov’s view of the future. The laws are hierarchical, with the first being most important: robots must be programmed to do no harm to humans, to obey human orders, and to protect their own existence. Asimov creates a series of stories showing how human misunderstanding of any one of these three laws causes robot malfunction.
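The hierarchy Asimov describes can be pictured as a strict priority ordering among conflicting actions. The sketch below is my own illustration, not anything from the book; the action names and the tie-breaking rule are hypothetical, chosen only to show how a higher-priority law overrides a lower one.

```python
# Illustrative sketch: Asimov's Three Laws as a strict priority ordering.
# Lower index = higher priority.
FIRST, SECOND, THIRD = 0, 1, 2  # do no harm / obey orders / self-preserve

def choose(actions):
    """Pick the action whose most serious violation is least severe.

    Each action is a (name, violated_laws) pair, where violated_laws is a
    set of law indices. An action that violates only the Third Law beats
    one that violates the Second, and so on up the hierarchy.
    """
    def worst_violation(action):
        _name, violated = action
        # min() over the indices finds the highest-priority law violated;
        # 3 means the action violates nothing at all.
        return min(violated, default=3)
    return max(actions, key=worst_violation)[0]

# Obeying an order at risk to itself (Third Law violation) outranks
# refusing the order (Second Law violation).
print(choose([("obey and risk self", {THIRD}),
              ("refuse order", {SECOND})]))
```

A real resolver would be far subtler, but the point stands: in Asimov's scheme a robot never trades a higher law for a lower one.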
One story shows how robots can harm humans when given imprecise or non-literal commands: a robot fails to prevent human harm after following an off-hand order to “get lost”. A second story explains how a command to retrieve ore on a volcanically active planet fails because it conflicts with the third law, which says a robot is to protect its own existence. The problem is that the ore deposit sits amid volcanic activity whose radiation destroys a robot’s CPU. The robot retreats as it approaches the ore because of the radiation, then returns once the radiation is out of range. The ore is never delivered, and the robot never comes back; it circles endlessly, trying to satisfy both the second and third laws of robot programming at once.
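The circling robot is a feedback equilibrium: the pull of the order (second law) balances the push of self-preservation (third law) at a fixed distance from the hazard. A toy model, entirely my own and not from the book, with made-up numbers:

```python
# Toy model of the second/third-law deadlock. All values are hypothetical,
# chosen only to show why a casually given order stalls the robot.

def law2_drive(order_strength):
    """Pull toward the ore: obey the human order (Second Law)."""
    return order_strength  # constant; a casual order is a weak pull

def law3_drive(distance_to_hazard, danger_scale=10.0):
    """Push away from the ore: self-preservation (Third Law).

    The push grows as the robot nears the radiating hazard.
    """
    return danger_scale / max(distance_to_hazard, 0.1)

def equilibrium_distance(order_strength, danger_scale=10.0):
    """Distance where the two drives cancel, leaving the robot circling."""
    return danger_scale / order_strength

# A casual order (strength 2) stalls the robot 5 units from the ore;
# an explicit, urgent order (strength 20) lets it close to 0.5 units.
print(equilibrium_distance(2.0))
print(equilibrium_distance(20.0))
```

The fix in the story is the same as in the model: strengthen the second-law drive (give a direct, emphatic order) and the equilibrium point moves onto the ore.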
Asimov’s heroine, Dr. Susan Calvin, is convinced robots are better than humans, with the inference that reliance on robots will ensure the future of humankind. Calvin’s steadfast support of robots is challenged by Asimov’s cautionary tales of robot mismanagement by humans. Those stories of mismanagement allude to human dependence on robots and a concomitant loss of human choice. Human morality and instinctive choice are replaced by a robot’s programmed set of ones and zeros.
With the advent of Artificial Intelligence, some of Asimov’s vision of the future is blurred. If A.I. can be improved to the level of human brain function, morality becomes part of a C.P.U.’s decision-making process. Of course, that leads to a more complicated set of problems. Ray Kurzweil takes the possibility a giant step further by suggesting A.I. become part of the human genetic code, melding human minds with C.P.U. capabilities. That science fiction novel might be titled “I, Robot 2.0”. A new set of problems would be revealed in stories of enhanced human ability to lie, cheat, and steal in the pursuit of money, power, and prestige.