Moral dilemmas are common interview questions for PPEists.
They test whether a candidate can think not only analytically but also reflectively and philosophically.
A strong PPEist is able to explore the hinterland around truth and absolutes.
In 1942, Isaac Asimov provided one prevailing take on robot morality with the Three Laws of Robotics, first set out in his short story "Runaround" and later collected in the famous book I, Robot. His outline was simple: a robot may not injure a human being or, through inaction, allow a human being to come to harm. But, as the characters discover, sometimes harm is simply unavoidable. What if the question mutates instead to which outcome is preferable: letting the young or the old live, or sacrificing one to save many?
Here we discuss the famous philosophical dilemma known as the Trolley Problem:
Whilst this is a long-standing philosophical problem, AI programmers now face it in a 21st-century form: driverless cars.
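To see why this is a programming problem and not just a thought experiment, here is a toy sketch (all names hypothetical, not drawn from any real autonomous-vehicle system) of how a crude utilitarian rule might be encoded: choose the manoeuvre that harms the fewest people.

```python
# Toy illustration of a utilitarian decision rule for the trolley set-up.
# All names are hypothetical; real systems are vastly more complex.

def choose_manoeuvre(options):
    """options: dict mapping a manoeuvre name to the number of people harmed.
    Returns the manoeuvre with the lowest harm count."""
    return min(options, key=options.get)

# The classic trolley scenario: stay on course (five harmed) or divert (one harmed).
print(choose_manoeuvre({"stay_on_track": 5, "divert": 1}))  # prints "divert"
```

The interesting philosophical point is everything this sketch leaves out: it treats all lives as interchangeable, which is precisely the assumption the dilemmas below ask you to question.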
Here is a neat MIT interactive game, the Moral Machine, which asks you to choose between different crash scenarios and then summarises the social preferences your choices reveal on gender, age, and occupation.
Read more on the issue here, and don't forget to read the comments at the bottom of the article to get a deeper sense of the debate.