The Challenge of Moral Machines
Wendell Wallach tells us what the basic problems are.

If a train continues on its current course, it will kill a work crew of five down the track. However, a signalman is standing by a switch that can redirect the train to another branch. Unfortunately, a lone worker will be killed if the train is switched to the new track. If you were the signalman, what would you do? What should a computer or robot capable of switching the train to a different branch do?
You are hiding with friends and neighbors in the cellar of a house, while outside enemy soldiers search. If they find you, it is certain death for everyone. The baby you are holding in your lap begins to cry and won't be comforted. What do you do? If the baby were under the care of a robot nurse, what would you want the robot to do?

Philosophers are fond of thought experiments that highlight different aspects of moral decision-making. Responses to a series of dilemmas, each of which involves saving several lives by deliberately taking an action that will sacrifice one innocent life, illustrate clearly that most people's moral intuitions do not conform to simple utilitarian calculations. In other words, for many situations, respondents do not perceive that the action that will create the greatest good for the greatest number is the right thing to do. Most people elect to switch the train from one track to another in order to save five lives, even when this will sacrifice one innocent person. However, in a different version of this dilemma there is no switch. Instead, you are standing on a bridge beside a large man. You can save five lives down the track by pushing the man to his certain death off the bridge into the path of the onrushing train. With this variant, only a small percentage of people say they would push the man off the bridge.
Introducing a robot into these scenarios raises some intriguing and perhaps disturbing possibilities. For example, suppose that a robot you built is standing next to the large man. What actions would you want the robot to consider? Would you have programmed the robot to push the large man off the bridge, even if you would not take this action yourself? Of course, the robot might come up with a different response to achieve a similar end – for example, by jumping off the bridge into the train's path: a rather unappetizing solution for us humans.
posted by johannes,
Friday, April 24, 2009