Robot Morality

The intersection of technology and morality has long been a staple of science fiction: think of the Three Laws of Robotics, Daleks, the Borg, Cylons, Transcendence and The Machine, to name just a few. But recently, a number of artificial intelligence (AI) thresholds have been crossed: Watson's Jeopardy! win, the chatbot Eugene Goostman arguably passing the Turing test, and now the University of Alberta's Computer Poker Research Group solving heads-up limit Texas hold 'em poker.

Increasingly, people are coming to see AI as a potentially serious concern for our species. Stephen Hawking's and Elon Musk's dire warnings about the dangers of superintelligence have made headlines in recent months. Their claim is simple: if the power of AI exceeds the power of human intelligence before we understand the consequences, humanity may be doomed. Nick Bostrom, Director of Oxford's Future of Humanity Institute, provides an extended analysis supporting these concerns in his book Superintelligence: Paths, Dangers, Strategies. Bostrom sees morality ("value acquisition") as an essential aspect of any controllable superintelligence.

On January 15, 2015, the Future of Life Institute published an open letter (signed by Hawking, Musk, Bostrom, and thousands of others) proposing that research priorities be carefully structured to reduce AI's long-term dangers and increase its benefits to mankind. Among the proposed priorities are recommendations for ethics research, including investigations of "machine ethics" and the ethics of "lethal autonomous weapons."

One particular flavour of technology that demands such consideration is robotics. A recent article in the New York Times touches on how people are currently framing the problem of robot morality, and how some are attempting to deal with it. Self-driving cars, for example, will occasionally face life-and-death choices (e.g. hit a pedestrian, hit another car, or drive into the ditch). I think the article exposes two important points that show just how little we understand about the intersection of technology and morality:

1. The description of robot morality in the article tends toward the utilitarian: do a cost/benefit calculation, then act (a toy sketch of what such a calculus might look like follows point 2 below). This works well in some cases, but can have a sociopathic quality in others. Consider the Trolley Problem: in its classic form, you can divert a runaway trolley away from a track with five people on it to one with a single person on it; in the "footbridge" variant, the only way to save the five is to push a large stranger off a bridge into the trolley's path. In the footbridge case, the utilitarian decision to sacrifice the stranger tends to be favoured by sociopaths, but not by most human beings. The case of the exploding Ford Pinto is a real, and notorious, example of the cold application of a utilitarian calculus. There is more to morality than utilitarian calculation, and that "more" is still very poorly understood. So sprinkling a little morality code on a robot will be considerably harder than the article implies. But you've got to start somewhere, which is why…

2. …the examples of moral dilemmas faced by self-driving cars are instructive: whether or not technological artifacts are explicitly endowed with a moral calculus, they exist in a morally charged world, and many of their decisions have an unavoidable moral dimension. The rapidly increasing sophistication, reach and impact of technology means this isn't a problem we can afford to solve in an ad hoc manner. It is both urgent and important.
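
To make the worry in point 1 concrete, here is a deliberately naive sketch, in Python, of what a purely utilitarian "morality module" might look like. Everything in it (the Action class, the harm estimates, the injury_weight parameter) is invented for illustration, not drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action and a robot's estimates of its consequences."""
    name: str
    expected_deaths: float    # estimated fatalities if this action is taken
    expected_injuries: float  # estimated non-fatal injuries

def utilitarian_choice(actions, injury_weight=0.1):
    """Pick the action that minimizes a single weighted 'harm' score.

    This is the cost/benefit calculus in its crudest form: every moral
    consideration must first be flattened into one number. The weight
    (here, one injury 'costs' a tenth of a death) is pure invention.
    """
    def harm(action):
        return action.expected_deaths + injury_weight * action.expected_injuries
    return min(actions, key=harm)

# The footbridge trolley case, naively encoded. The calculus cheerfully
# recommends pushing the stranger, the choice most people reject.
options = [
    Action("do nothing", expected_deaths=5, expected_injuries=0),
    Action("push the stranger", expected_deaths=1, expected_injuries=0),
]
print(utilitarian_choice(options).name)  # -> push the stranger
```

Even this toy exposes the problem: someone has to decide that an injury is "worth" a tenth of a death, the consequence estimates are themselves guesses, and the calculus is structurally incapable of expressing the widely shared intuition that pushing the stranger is different in kind from failing to save the five.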

Gaining a deeper understanding of the intertwined nature of human beings, technology and morality is no longer just the domain of science fiction, nor is it merely the subject of abstract philosophical speculation. It is very concrete, and the future of humanity will depend on it.