Most philosophers are familiar with the Trolley Problem. It's a theoretical scenario that tests our moral intuitions: Can we provide a moral reason to distinguish between pulling a lever to divert a threat and actively putting a person in harm's way to prevent a worse outcome? Is there a moral difference between intentions and consequences in these situations? Does it matter? And how do we make sense of the conflicting moral intuitions and values that the average person feels when evaluating these cases?
There are a few conditions to keep in mind when evaluating the Trolley Problem:
1. The question to ask is, “What is the best or most moral thing to do in this situation?”
2. This is a thought experiment, and the choice is forced; you can’t escape the dilemma by inventing a third option.
Although the Trolley Problem was conceived as a thought experiment, it has found increasingly practical application in psychology and computer technology, among other fields. Below are some links that explore practical reasons for examining the morality of our intuitions in these "lesser of two evils" situations.
Here's a video on the traditional problem:
Virtual reality and neuroimaging are helping us discover what goes through our heads when we make these decisions.
B. The Problem of Self-Driving Cars

Driverless cars will (hopefully) be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?
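The conflict can be made concrete with a toy sketch. Everything here is hypothetical: the function name, the harm scores, and the weighting scheme are illustrative assumptions, not how any real vehicle is programmed. The point is that a program must resolve the trade-off somehow, and whoever sets the weights is, in effect, answering the Trolley Problem in advance.

```python
# A toy sketch (not a real autonomous-driving API) of how two safety
# objectives can conflict. All names and numbers here are hypothetical.

def choose_action(options):
    """Pick the option with the lowest total expected harm.

    Each option maps an action name to estimated harm scores
    (0 = no harm, 1 = certain serious harm) for pedestrians
    and for the car's own passengers.
    """
    # A single weighted sum forces the programmer to decide, ahead of
    # time, how passenger harm trades off against pedestrian harm.
    PEDESTRIAN_WEIGHT = 1.0
    PASSENGER_WEIGHT = 1.0  # equal weighting: one possible ethical stance

    def total_harm(scores):
        return (PEDESTRIAN_WEIGHT * scores["pedestrian_harm"]
                + PASSENGER_WEIGHT * scores["passenger_harm"])

    return min(options, key=lambda name: total_harm(options[name]))


# An emergency where the two aims pull apart: braking in-lane risks
# the pedestrian; swerving into a barrier risks the passengers.
emergency = {
    "brake_in_lane": {"pedestrian_harm": 0.8, "passenger_harm": 0.1},
    "swerve_into_barrier": {"pedestrian_harm": 0.0, "passenger_harm": 0.6},
}

print(choose_action(emergency))  # with equal weights: swerve_into_barrier
```

Change `PASSENGER_WEIGHT` to, say, 3.0 and the same function chooses to brake in-lane instead, which is exactly why "just program the car to be safe" conceals a substantive moral decision.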