There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: do nothing, and the trolley kills the five people on the main track; or pull the lever, diverting the trolley onto the side track, where it will kill one person. Which is the correct choice?
This dilemma, known as the Trolley Problem, is a philosophical classic that has been examined across disciplines ranging from cognitive science to philosophy to psychology.
Until now, this dilemma has been purely theoretical. With the rise of autonomous vehicles, it is no longer a thought experiment but a real-world moral dilemma that we must face.
Imagine there is a single passenger in an autonomous vehicle, and three children run out into the street in front of it. Should we program the car to avoid the children by swerving into a roadside tree, potentially killing its passenger?
Or should we program vehicles to always protect their own passengers? Since the passenger is doing nothing wrong, why should she face death if some kids happen to run into the street?
And what if, instead of children, it is an elderly couple crossing the street without looking, and the single passenger is 22? Should the vehicle give preference to the two lives, or to the one younger life?
This dilemma is starting to be explored with seriousness and depth.
Autonomous vehicles are just one example of how technology is forcing us to face moral dilemmas that until now (or the near future) were simply theoretical.
We already have disagreements about GMO crops. But what about genetically modified humans? It sounds scary, but what if we could eradicate crippling diseases?
The debate about doping in professional sports has been going on without conclusion for years, but are we ready for human cyborgs whose biological bones, muscles, and sensory capacities have been enhanced by permanently embedded machines or non-biological materials?
Then there is virtual reality. What happens when people do bad things in virtual reality? Is killing someone in VR as harmless as killing someone in a video game?
For centuries, expressions of morality and principles have been reflected on our dinner plates - by what is and isn't there, and what is and isn't frowned upon.
The majority of the world's population eats animals, and in today's world the finished meat product often arrives on our plates after a cruel, sad life of confinement.
Progress is being made on lab-grown meat created from stem cells. Assuming that humans will continue to demand animal protein, will we have any right to kill animals for food if we can produce protein with the same texture and taste in a lab?
For philosophy, psychology, neuroscience, and other fields, these are exciting times in which theory will meet the real world, often in quick and forceful ways that will challenge us and our systems. These new moral dilemmas will become political and emotional, and the decisions we make will affect many lives.
What do you think? Are these issues interesting to you?
Do you have opinions on any of the topics I discussed in this post? If you do, please be generous and share them!
Peter Koehler's Writing Archive