Why self driving cars must be programmed to kill

I found this article discussing ethical dilemmas for autonomous vehicles interesting. I apologize if it has been posted already.

http://www.technologyreview.com/view/542626/why-self-driving-cars-must-b...

bobrobert | October 31, 2015

Another example of the problem of moral absolutes – whether an algorithm can be written to respond mechanically and be right (or best) in every situation. The article is a variation on a classic philosophical exercise, which the emergence of self-driving cars takes out of the classroom – and poses the additional question of whether one can abdicate moral responsibility to a computer.

Life rarely poses such extreme tests, but moral absolutes are often presented in less mortal ways; for example, whether one should always tell the truth, even when a white lie will spare needless hurt.

The answer is personal and spiritual – a vague way of saying that absolutes are inhuman, to keep hands on the wheel, instincts sharp, and intuition open. A clearer answer too quickly becomes absolute, and your religion may vary.

AmpedRealtor | October 31, 2015

Actually the car will use facial recognition to run background checks on those it is considering hitting in the split second before an actual impact, and will run over those with the lowest credit scores.

EVino | October 31, 2015

Thank you br

georgehawley.fl.us | October 31, 2015

Invalid proposition. Death is not a certain outcome in any case posed.

SbMD | October 31, 2015

Such a doomsday decision tree is too complex and controversial for humans, so composing a valid algorithm to make that sort of determination is an impossibility.

In other terms: an article written with the wrong focus. It would be more interesting to ask what to do to prevent the loss of lives in the first place.

jlocke | October 31, 2015

What I find interesting is that the whole point of a self-driving car is to be undistracted, always alert, and ready to respond. How would such a car put itself in a position where a group of people suddenly appears in its path and it cannot stop in time? Coming around a corner doesn't explain it, as most cars aren't going so fast that slamming on the brakes won't stop them quite quickly. If road conditions are slippery, the car would adjust for that to ensure it doesn't get put in this situation.

Humans put ourselves in situations where we need to make a moral choice in a split second; a computer shouldn't be driving so poorly that it is forced into that kind of situation.
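
As a rough sanity check on the braking claim, here is a back-of-the-envelope stopping-distance calculation using the idealized formula d = v^2 / (2·µ·g). The friction coefficient and speeds are illustrative assumptions, not figures from the article:

```python
# Idealized braking distance: d = v^2 / (2 * mu * g).
# mu ~ 0.7 for dry asphalt is an assumption; a real stop also includes
# perception/reaction time, which a computer largely eliminates.

G = 9.81        # gravitational acceleration, m/s^2
MU_DRY = 0.7    # assumed tire-road friction coefficient

def braking_distance_m(speed_kmh, mu=MU_DRY):
    v = speed_kmh / 3.6             # convert km/h to m/s
    return v * v / (2 * mu * G)

for kmh in (30, 50, 100):
    print(f"{kmh} km/h -> ~{braking_distance_m(kmh):.0f} m to stop")
# 30 km/h -> ~5 m, 50 km/h -> ~14 m, 100 km/h -> ~56 m
```

At city speeds the car really does stop within a few car lengths, which supports the point that the scenario requires the car to have already made an earlier mistake.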

Pungoteague_Dave | October 31, 2015

A smaller moral dilemma exists when we are presented with the choice of striking an animal or trying to avoid it. In motorcycling we are taught to always aim for the soft center, and not to do the natural thing, which is to turn away. I have hit two deer while riding; the last one, in 2011, resulted in a nice helicopter ride and a small brain bleed. Bastard forest rats all deserve to die.

dborn @nsw.au | October 31, 2015

Deer make excellent biltong. If you don't know what that is, it is the South African version of beef jerky, but vastly better. All ex-South African carnivores like myself crave the stuff even 40 years after leaving the country!!

tesladude | October 31, 2015

@jlocke, you are being idealistic. It doesn't matter how alert you are or how ready to respond the computer is - you don't control what others do. Situations like the ones described in the article WILL arise, and the software needs to be programmed to deal with them. That said, the chances of an extreme situation like this are pretty low, and we don't encounter them that often in everyday driving.

It does, however, get even more complicated than that. If you survey humans about these situations and add other factors (e.g. the people crossing the street are pregnant women vs. armed criminals trying to stop and take your car), the answers will vary wildly, while the software will not even be able to make such a distinction.

In the end, no matter what decision the software makes, if loss of life occurs as a result, the investigation will have to consider what other decision(s) could have been made and what outcomes those decisions could have led to. As @georgehawley points out, certainty of death cannot be proven in a case that didn't happen, so it will be difficult (if not impossible) to argue that the decision made was the right or the wrong one.

Aside from the moral choices the software needs to make, there are other significant challenges that must be overcome before fully autonomous driving becomes a reality:
1) Practicality in urban environments with dense pedestrian population. Pedestrians crossing in front of a stopped car may ignore traffic signals and continue crossing indefinitely if they know the car won't move as long as someone is in front of it.
2) Criminals can take over your car, since they can easily predict that it will stop if someone simply stands in the middle of the road.
3) Terrorists can use autonomous cars as a weapon delivery mechanism with high precision and automatic deployment upon arrival...

I could continue, as this list is long, and these challenges are far more complicated than the challenge of the actual driving. As much as I would love to see fully autonomous cars in my lifetime, I don't think I will. I do think we can get pretty close, though.

bobrobert | October 31, 2015

Yeah, PD, those forest rats all carry bubonic-plague-infested fleas – no wonder you had a brain bleed.

After such tragedies as described in the article, no matter how rare, we'll have knee-jerk regulations, because the answer to something bad happening is always to pass new laws (leaving the next generation of barristers to mitigate the consequences). Future furry critters as well as pedestrians will be required to carry cellphones, or at least SIM cards, on their persons, turned on so that Google can track them, else they forfeit all rights & claims, and may be liable.

Eventually the case of the State vs. Google will determine whether the algorithms are properly validated and implemented. The rules will surely be set by insurance actuaries, perhaps secretly weighted by AmpedRealtor's consideration of credit score.

evsisson | October 31, 2015

The car will eventually learn to sound the alarm and yield control to the driver, and if it's smart, it will enter that in its log.

EVino | October 31, 2015

Easy. Just follow Asimov's Three Laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
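
For what it's worth, the Three Laws amount to a strict priority ordering, which is easy to sketch. Here is a toy illustration; all names and numbers are made up, not anyone's actual AV logic:

```python
# Toy sketch: Asimov's Laws as a strict priority ordering over candidate
# maneuvers. All fields and values are hypothetical.

def score(maneuver):
    """Rank a candidate maneuver by the Three Laws, in order.

    Lower tuples sort first, so we prefer: fewest humans harmed,
    then obedience to the human's order, then least damage to the robot.
    """
    return (
        maneuver["expected_human_harm"],      # First Law dominates everything
        0 if maneuver["obeys_order"] else 1,  # Second Law breaks First-Law ties
        maneuver["expected_self_damage"],     # Third Law breaks remaining ties
    )

candidates = [
    {"name": "brake_in_lane", "expected_human_harm": 0.2,
     "obeys_order": True,  "expected_self_damage": 0.9},
    {"name": "swerve_left",   "expected_human_harm": 0.1,
     "obeys_order": False, "expected_self_damage": 0.4},
]

best = min(candidates, key=score)
print(best["name"])  # swerve_left: the First Law outranks the driver's order
```

The hard part, of course, is producing that expected_human_harm estimate in the first place, which is exactly what this thread is arguing about.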

PBEndo | October 31, 2015

@george
Replacing the wall with a high cliff ledge could make death a certainty.

@SbMD
It may be impossible for humans to develop perfect algorithms, but rules will still need to be developed, since these events will occur.

@tesladude
Your first two examples are resolved by allowing the occupant to initiate motion or take control.

Grinnin'.VA | October 31, 2015

@ SbMD | October 31, 2015

>> Such a doomsday decision tree is too complex and controversial for humans, so composing a valid algorithm to make that sort of determination is an impossibility. <<

^^ As I see it, current self-driving car R&D takes a very different perspective. These efforts are focused on making systems that reduce fatalities by roughly a factor of 100 compared to cars driven by humans, and they deal with the details of building sensors and control logic toward that goal.

For at least the next decade, I doubt that the scenarios described in that article will make their way into the logic of autonomous driving systems. These systems try to "see" the driving environment and react to changes in it to make driving safe enough. It's not credible that a self-driving car would "think" that all is well one instant, only to discover with certainty on its next sensor/control cycle that it faces the hypothetical scenario posed. When a self-driving car suddenly "sees" changes that look dangerous, on the next sensor/control cycle it will simply attempt to adjust its controls to make a horrible crash less likely. It will not know that one control option will certainly kill the driver; such understanding would only emerge over a series of control cycles. The control logic will never confront the dilemma as posed.
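
To make that cycle-by-cycle picture concrete, here is a minimal sketch of the kind of reactive loop I mean. The sensor, risk model, and maneuver set are invented stand-ins, not any vendor's actual control logic:

```python
# A self-contained toy version of the per-cycle reactive loop described
# above. All values and names are hypothetical.

import random

def sense():
    """Stub sensor read: returns an abstract world state."""
    return {"obstacle_distance_m": random.uniform(5.0, 100.0)}

def estimated_risk(world, maneuver):
    """Stub risk model: closer obstacles favor stronger mitigation."""
    d = world["obstacle_distance_m"]
    mitigation = {"coast": 1.0, "brake": 0.3, "steer_left": 0.5}
    maneuver_cost = {"coast": 0.00, "brake": 0.01, "steer_left": 0.02}
    return mitigation[maneuver] / d + maneuver_cost[maneuver]

def control_cycle(world):
    # No "whose life" branch exists: each cycle just picks whichever
    # control input currently looks least risky; a dangerous scene
    # reveals itself over a series of cycles, not as a one-shot dilemma.
    maneuvers = ["coast", "brake", "steer_left"]
    return min(maneuvers, key=lambda m: estimated_risk(world, m))

for _ in range(3):   # a few illustrative sensor/control cycles
    print(control_cycle(sense()))
```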

What do you think about this?

tesladude | October 31, 2015

@PBEndo, that's what I mean when I say that we will get pretty close. A fully autonomous car will not be required to have human controls (such as pedals and a steering wheel) at all, and will not require anyone with a driver's license to occupy the driver's seat. It won't even have a driver's seat. As long as there is a requirement for a licensed driver to be in the driver's seat of every car on the road, we are not in the world of fully autonomous driving.

Haggy | October 31, 2015

>> In motorcycling we are taught to always aim for the soft center, and not to do the natural thing, which is to turn away. I have hit two deer while riding; the last one, in 2011, resulted in a nice helicopter ride and a small brain bleed. Bastard forest rats all deserve to die. <<

That's why the deer should always cross where there's a deer crossing sign. Otherwise, their insurance will go up.

Eish | October 31, 2015

I do kind of feel like the software should give the user the choice. Have the car decide by default to go for the greater good: hit the wall instead of the one person (the crumple zone should protect the driver), hit the one instead of the group, and hit the person instead of going over the cliff. As for the group vs. the cliff, the greater good would say go over the cliff, though maybe hit the group at a slower speed instead.

Drivers could choose to always save themselves, as a driver can currently do - you don't have to swerve if someone is on the road (swerving is instinct, not obligation), but you would have to pay the piper and take responsibility for your actions; the only difference is that you can choose your actions in advance. If you follow the car's decision, the manufacturer settles any claims for damages; if you change it to suit your own preferences, you or your insurance would have to pay for the damages.

But if one maker said their cars were programmed for the greater good, and another said theirs would always protect the driver regardless, which car do you think people would buy?
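
Something like this could be a one-time setting. A hypothetical sketch of what I mean; the policy names and the liability rule are my invention, not a real product option:

```python
# Toy sketch: a default "greater good" policy the owner may override,
# with liability following the choice. Entirely hypothetical.

from dataclasses import dataclass

@dataclass
class EthicsSetting:
    policy: str = "greater_good"   # manufacturer default
    user_overridden: bool = False

    def choose_protect_self(self):
        # Owner opts out of the default; per the proposal, liability for
        # resulting damages shifts from the manufacturer to the owner/insurer.
        self.policy = "protect_occupant"
        self.user_overridden = True

    def liable_party(self):
        return "owner/insurer" if self.user_overridden else "manufacturer"

setting = EthicsSetting()
print(setting.policy, "->", setting.liable_party())  # greater_good -> manufacturer
setting.choose_protect_self()
print(setting.policy, "->", setting.liable_party())  # protect_occupant -> owner/insurer
```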

SbMD | October 31, 2015

@PBEndo - I agree that rules/algorithms should be developed, but to avoid accidents and handle driving correctly, not to choose which casualties are acceptable.

I have yet to see a scenario presented where such a "kill/suicide" algorithm would need to be implemented. For every hypothetical situation where it is a "kill the one to save the many", one should devise ways to circumvent such a scenario in the first place.

Also, no one would have confidence in such systems if there was a "suicide switch" that would kill off the driver to save others, or a "homicide switch" to kill off another to save the driver.

Go one step further: if you teach a machine to choose whether someone lives or dies, then you better be really sure that you gave that machine morals and ethics algorithms... that might be a bit complicated, to boot!

@Grinnin' - I think what you propose is a reasonable way of breaking down those processes.

KL | October 31, 2015

This is a false choice. A human driver in this scenario kills at least some people. An autonomous car would have seen this situation ahead of time, because it wasn't texting after a couple of drinks, and would have stopped well in advance.

- K

PBEndo | November 1, 2015

An autonomous car cannot automagically see and avoid every possible collision in advance. There is no doubt that they will be better than humans, but unless you separate all cars and pedestrians by much greater distances than we currently allow and reduce speeds to unacceptably low rates, there will be times when a collision will be unavoidable. The rules that are programmed into the system could determine the outcome. Even if the algorithm does not specifically count the possible lives lost, it must still decide when/if it is acceptable to deviate from the traffic lane to avoid a collision.
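
To be clear, that narrower decision doesn't have to count lives at all. Here is a hedged sketch of just the "may I leave my lane?" question; the thresholds and helper values are invented for illustration:

```python
# Toy sketch: deviating from the lane is allowed only when braking alone
# cannot avoid impact AND there is somewhere demonstrably safe to go.
# All numbers are illustrative assumptions.

def may_leave_lane(adjacent_lane_clear, shoulder_clear,
                   stop_distance_m, obstacle_distance_m):
    braking_suffices = stop_distance_m < obstacle_distance_m
    if braking_suffices:
        return False                 # stay in lane and brake
    return adjacent_lane_clear or shoulder_clear

# My close call: the black car cuts in, and the left lane happens to be empty.
print(may_leave_lane(adjacent_lane_clear=True, shoulder_clear=False,
                     stop_distance_m=40, obstacle_distance_m=15))  # True: swerve left
```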

Check out this video. When I had this close call, I remember wondering what a self-driving car would have done, especially if the circumstances had been worse than this.
https://www.youtube.com/watch?v=j-FEKShWJGk&feature=youtu.be

If I did not have an empty lane to my left, how would I have avoided this accident? Would an autonomous car have been able to react so fast that it could have stopped while staying in its lane? Would that have resulted in a rear-end collision from the following car? What would it do if the black car had turned more abruptly, or even turned perpendicular to the direction of traffic, thereby blocking the entire lane?

SbMD | November 1, 2015

From the poorly written article comes a much more interesting discussion... many thanks, @PBEndo, for the thread!

No doubt, as many possible (and seemingly impossible) scenarios as can be anticipated should be accounted for, using algorithms that strive to preserve life and property, in that order of importance.

Mixing autonomous and human drivers in the same setting, which is getting closer but is still some distance off, is the holy grail of self-driving cars. If all vehicles were self-driving and had proximity-based car-to-car communication, safety algorithms would be much easier to build and could actively avoid potentially dangerous traffic scenarios (e.g. boxing in cars without adequate maneuvering room).
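
As a hypothetical illustration of that car-to-car idea (the message fields and the spacing rule are assumptions on my part, not a real V2V protocol):

```python
# Toy sketch: each car broadcasts position/intent so neighbors can keep
# maneuvering room. Fields, threshold, and rule are all invented.

from dataclasses import dataclass

@dataclass
class V2VMessage:
    car_id: str
    lane: int
    speed_mps: float
    gap_ahead_m: float      # self-reported free space in front

MIN_ESCAPE_GAP_M = 30.0     # invented threshold: room a neighbor should keep

def boxed_in(me, neighbors):
    """True if the cars in adjacent lanes leave no escape gap for 'me'."""
    adjacent = [n for n in neighbors if abs(n.lane - me.lane) == 1]
    return (me.gap_ahead_m < MIN_ESCAPE_GAP_M and
            all(n.gap_ahead_m < MIN_ESCAPE_GAP_M for n in adjacent))

me = V2VMessage("A", lane=2, speed_mps=30, gap_ahead_m=12)
nbrs = [V2VMessage("B", 1, 30, 10), V2VMessage("C", 3, 30, 8)]
print(boxed_in(me, nbrs))   # True -> a coordinated slowdown could open a gap
```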

Until autonomous cars hit critical adoption and regulation, humans will need to shepherd their cars to some degree, taking responsibility for the more involved driving decisions.

Ross1 | November 1, 2015

A pity that there is a similar thread started under the GENERAL forum.

maxr | November 2, 2015

For me, the car should be programmed to stay on the road. Always. If it hits something or someone, it will be at low speed. But programming is all about rules. The car should follow the road rules, and people should not be walking in the middle of the road.

Autonomous software will improve, but so will humans. We'll learn to avoid situations where we know and can predict that cars WILL follow the rules. Otherwise, any person could intentionally decide to kill a driver just by standing in the middle of a road near a cliff, knowing the computer will spare him.

Autonomous cars should demonstrate predictability, and we humans should avoid putting ourselves in their way. We do that already (nobody crosses a highway on foot), and we should continue to ensure our own self-protection (autonomous cars or not).

ChrisH314 | November 2, 2015

Asimov discussed this problem in great detail; it becomes an insoluble problem. Consider: the Tesla is being driven by a world-renowned 30-year-old cardiac surgeon and is headed toward an accident with a group of ten 90-year-old convicted escaped unemployed alcoholic paedophiles. Ideally, a moral decision should be made that even though ten lives could be traded for one, the one is more useful to society.

Ross1 | November 2, 2015

And from the link I posted, if it is programmed to avoid school buses, what if the bus is empty and the AV is full?

garygid | November 2, 2015

It would appear that human lifeforms are overly plentiful on this planet as it is, and roughly 80,000,000 die every year. If autonomous vehicles save a few thousand, very little changes.

How much do parents get paid when their son is killed in a military "training accident", or in a war of some sort?

Perhaps that tells us something about the value of human life.

However, how valuable is the last viable group of any species?

It would seem that some/many humans care very little.

Son of a Gunn | November 2, 2015

Silly headline.

Grinnin'.VA | November 2, 2015

@ Son of a Gunn | November 2, 2015

[[ Silly headline. ]]

^^ YES!

AmpedRealtor | November 2, 2015

I still like my credit score idea.

Ross1 | November 2, 2015

@ garygid:

"However, how valuable is the last viable group of any species?"

St Elon, whom everyone loves, puts such a high value on it that he is serious about putting it on Mars.

And saving this planet from being asphyxiated.