How Enhanced AI Could Be Achieved Through Crowdsourcing Morality

A car cruises around the corner on a misty country road. At the same time a pregnant woman is taking a walk along the road after a difficult week with morning sickness. A man who has had too much to drink is there as well; he is drifting onto the road, unaware of the woman who, having absent-mindedly started reading a text message on her phone, also wanders onto the road. By the time the car rounds the corner it is too late to stop – at least one of the pedestrians is going to be hit. What should the car do?

This would be a dilemma for any driver. Abstract ideas about whether it is worse to take the lives of two people (mother and unborn child) rather than one would probably factor in. Perhaps the fact that the man is drunk is a mitigating factor. But morally speaking, it would be hard to argue that one option is inherently superior to the other. The matter becomes even pricklier when it comes to driverless cars: programmers (and then their AI) have to decide what the car should do in that kind of scenario before it happens. This of course raises the massive question: where do driverless cars get their morality?

Ethical crossroads: How can AI make moral decisions?
Photo by Oliver Roos on Unsplash

Legal framework for AI

The German government recently took some of the guesswork out of the decision by publishing the world’s first ethical guidelines for how driverless cars should handle these kinds of scenarios. The report contains 15 rules for designing driverless car systems that prioritize “safety, human dignity, personal freedom of choice and data autonomy.” The rules state that the car must put the prevention of human harm above damage to animals or property – that is, given the choice, the car should protect humans first – and it is prohibited from making decisions about human safety based on “age, gender, physical or mental constitution.”

The report does well to delineate the priority line between animal and human but stays decidedly agnostic about the larger moral dilemmas that would involve two or more humans.
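As an illustration only, here is a minimal sketch (in Python, with hypothetical types and a simplified two-option scenario – not drawn from the report itself) of what such a priority rule might look like once encoded: humans outrank animals and property, and attributes such as age or gender are deliberately left out of the model.

```python
from dataclasses import dataclass

# Hypothetical encoding of the report's priority rule: harming humans is
# weighted above harming animals or property, and personal attributes
# (age, gender, constitution) are deliberately not modelled at all.
@dataclass
class Obstacle:
    label: str
    is_human: bool
    is_animal: bool = False

def choose_trajectory(option_a, option_b):
    """Return the trajectory that harms fewer humans; break ties on animals."""
    def harm(option):
        humans = sum(o.is_human for o in option)
        animals = sum(o.is_animal for o in option)
        return (humans, animals)  # lexicographic: humans count first
    return "A" if harm(option_a) <= harm(option_b) else "B"

# Example: swerving into a fence (A) beats hitting a pedestrian (B).
print(choose_trajectory([Obstacle("fence", is_human=False)],
                        [Obstacle("pedestrian", is_human=True)]))  # -> "A"
```

Note that a rule like this cannot break a tie between two humans – which is exactly the kind of dilemma the report declines to standardize, as the quote below makes clear.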

“Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating ‘unpredictable’ behaviour by parties affected,” the report reads. “They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable.”

Or can’t they?

Science to the rescue

A recent study conducted at the Massachusetts Institute of Technology (MIT) put this matter to the test. Scientists asked people to vote between pairs of alternatives in dilemmas – with factors such as gender, age, health and species taken into account. The pairwise comparisons were used to learn a model of each voter’s preferences, and those individual models were then aggregated into a single collective model. As a result, the alternative chosen in each scenario comes close to the outcome that “society”, represented by the voters, views as the least terrible among all the distasteful options. So, in effect, the programmers are passing the buck. They’re crowdsourcing the decision, which opens up its own set of questions.
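For intuition, here is a minimal sketch of that general idea (in Python, with hypothetical feature names and toy data – not the study’s actual algorithm or dataset): fit a simple linear utility model per voter from that voter’s pairwise choices using a Bradley–Terry-style logistic model, average the voter models into one collective model, and let the collective model pick between two alternatives.

```python
import numpy as np

# Hypothetical feature encoding of an alternative (who would be spared).
# Features, all assumed for illustration: [lives_saved, children_saved, is_human]
def features(alt):
    return np.array([alt["lives"], alt["children"], alt["human"]], dtype=float)

def fit_voter_model(comparisons, lr=0.1, epochs=200):
    """Fit one voter's linear utility weights from pairwise choices.

    comparisons: list of (winner, loser) alternatives, meaning the voter
    preferred sparing `winner` over `loser`. Uses a Bradley-Terry-style
    logistic model: P(winner > loser) = sigmoid(w . (x_w - x_l)).
    """
    w = np.zeros(3)
    for _ in range(epochs):
        for winner, loser in comparisons:
            diff = features(winner) - features(loser)
            p = 1.0 / (1.0 + np.exp(-w @ diff))
            w += lr * (1.0 - p) * diff  # gradient ascent on the log-likelihood
    return w

def aggregate(voter_weights):
    """Aggregate the individual models into one collective model (simple mean)."""
    return np.mean(voter_weights, axis=0)

def decide(collective_w, alt_a, alt_b):
    """Pick the alternative the collective model prefers to spare."""
    return alt_a if collective_w @ features(alt_a) >= collective_w @ features(alt_b) else alt_b

# Toy data for illustration only: two voters, each answered a few dilemmas.
woman = {"name": "pregnant woman", "lives": 2, "children": 1, "human": 1}
man   = {"name": "drunk man",      "lives": 1, "children": 0, "human": 1}
dog   = {"name": "dog",            "lives": 1, "children": 0, "human": 0}

weights = [fit_voter_model(v) for v in ([(woman, man), (man, dog)],
                                        [(woman, dog), (woman, man)])]
collective = aggregate(weights)
print("Collective choice:", decide(collective, woman, man)["name"])
```

The averaging step is only one of many possible aggregation rules; which rule to use is itself a value judgment, which is part of what makes crowdsourced morality contentious.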

The morality of the crowd

Other researchers from MIT extended the experiment by launching the Moral Machine website – a space where people answer questions about the difficult choices that self-driving cars might have to make on the road. The results were in turn used in a paper co-authored by Carnegie Mellon University’s Ariel Procaccia and one of the Moral Machine researchers. Procaccia concedes that the morality that filters through the crowd is not perfect, but argues that it is still a good start: “Democracy has its flaws, but I am a big believer in it. Even though people can make decisions we don’t agree with, overall democracy works.”

Procaccia was responding to general doubts about the validity of shifting the moral burden onto a group of people. After all, just because the majority of a population views something as right does not make it so. History is littered with examples that prove the point.

Then there is the concern about programming. That is to say, there is potential for bias among the people who turn raw crowdsourced data into decision-making algorithms, with the possibility of different analysts arriving at different conclusions from similar data. In this regard Procaccia hedged the argument in an interview with The Outline magazine: “We are not saying that the system is ready for deployment. But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”

Conclusion

There are definitely still some big question marks about how to make AI truly ethical. There will always be decisions that driverless cars make in dilemma situations that some people disagree with. But on the other hand there will always be disagreement about what people choose in any dilemma – the very fact of the dilemma implies that both options are unpleasant. At the very least the MIT study indicates that the crowd can get us to a set of ethical decisions that most of us can live with. Is it perfect? No, but nothing in life (including AI programming) can be.
