Machines and the Morality Code

Like many of humankind’s technological ventures, the race toward autonomous machines, especially cars, is outpacing our ability to contextualize our sense of morality and, more importantly, to code it into a technological object.

And before you brush this off as a minor issue, it is worth noting that we apply a moral overlay to a great many of the actions in our daily routines.

Take self-driving cars as an extreme example (extremes are often useful for teasing out an issue without having to bring people up to speed on the finer discussion points). Is it proper for a self-driving car to move over into the passing (left) lane and speed? Seems simple enough. There’s a law, and you can code to its parameters very easily: there is a minimum and a maximum speed, and the car should operate only within those limits.
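As a purely illustrative sketch (the function name and limit values below are hypothetical, not anyone’s production code), that rule really is just a one-line clamp:

    # Hypothetical sketch: clamp the requested speed to the posted legal limits.
    def legal_speed(requested_mph: float, min_limit_mph: float = 45.0,
                    max_limit_mph: float = 70.0) -> float:
        """Return a speed that never falls outside the posted minimum and maximum."""
        return max(min_limit_mph, min(requested_mph, max_limit_mph))

    print(legal_speed(80.0))  # -> 70.0: the car simply refuses to speed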

While that is an ideal programming use case, the moral case around it is different. People speed. The passing lane exists so you can go faster than the car you are overtaking. Ergo, if that car is at the speed limit or even slightly below it and you want to pass, you need to speed. Besides that, almost no one I know would purchase a car that capped its top speed; I think we all secretly believe we’re going to have that James Bond moment and need the extra speed to save the day!

This is one of those cases where the law is the law…BUT.

Ethicists would probably say there’s a fairly easy workaround to that problem: let the human set the upper speed (which is in fact how all cars with any type of speed control work today), even if the car knows what the speed limits are. Then there is no moral decision the car must make; you’ve essentially moved the cheese to the human operator and bypassed the problem.
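In sketch form (again with hypothetical names), the car simply defers to the driver’s ceiling rather than the posted limit, which is exactly where the responsibility moves:

    # Hypothetical sketch: the driver-set ceiling, not the posted limit, caps the speed.
    def cruise_speed(requested_mph: float, driver_cap_mph: float,
                     posted_limit_mph: float) -> float:
        """Honor the driver's chosen cap; the posted limit is informational only."""
        if driver_cap_mph > posted_limit_mph:
            print(f"Note: your cap of {driver_cap_mph} mph exceeds the posted {posted_limit_mph} mph")
        return min(requested_mph, driver_cap_mph)

    print(cruise_speed(80.0, driver_cap_mph=75.0, posted_limit_mph=70.0))  # -> 75.0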

But here’s an example where we cannot transfer the moral decision-making: a computer-driven car is coming down a street. It knows where the street is and where the sidewalk is. The car in front of it stops, the car calculates that there is not enough time to bring the vehicle to a halt, and there is oncoming traffic in the opposing lane. Does the car go onto the sidewalk or smash into the back of the car in front of it? Which rule should it break?
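One common way to frame this (a hypothetical sketch, not how any real autonomy stack is built) is to assign each rule a penalty, score every available maneuver by the rules it would break, and pick the least-bad option. The catch is that someone has to choose those penalty numbers in advance, and that choice is the moral decision.

    # Hypothetical sketch: score each maneuver by the penalties of the rules it breaks,
    # then pick the cheapest one. The penalty numbers are the moral judgment in disguise.
    RULE_PENALTIES = {
        "leave_roadway": 100.0,        # driving onto the sidewalk
        "rear_end_collision": 80.0,
        "cross_center_line": 120.0,    # swerving into oncoming traffic
    }

    MANEUVERS = {
        "swerve_to_sidewalk": ["leave_roadway"],
        "brake_and_hit_car_ahead": ["rear_end_collision"],
        "swerve_into_oncoming_lane": ["cross_center_line"],
    }

    def least_bad_maneuver(maneuvers: dict[str, list[str]]) -> str:
        """Return the maneuver whose broken rules carry the lowest total penalty."""
        return min(maneuvers, key=lambda m: sum(RULE_PENALTIES[r] for r in maneuvers[m]))

    print(least_bad_maneuver(MANEUVERS))  # -> "brake_and_hit_car_ahead" with these weights

Change the weights and the “right” answer changes with them.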

Now how about a really hard one (a similar scenario was the opening premise of I, Robot): in an unavoidable accident, the car charged with safeguarding you must choose between hitting another car, hitting a light post, or hitting an adult pedestrian. Hitting the adult may mean the least risk to you, but is that the decision you want the car to be programmed to make?

Does that have your wheels turning a little more?  

Is it the car’s decision to sacrifice you to safeguard the pedestrian when speeds are such that your survivability is in doubt? Would it make a difference if the pedestrian were an old man? Or a woman with a baby stroller?

These are moral dilemmas that we probably don’t have quick answers to (well, except maybe the baby-stroller one). But they illustrate the difficulty of programming an autonomous machine to operate in our unpredictable society, where rules might be rules, but common sense is what rules the day.
