
Autonomous Car Ethics: If a Crash Is Unavoidable, What Does It Hit?

Soulskill posted about 4 months ago | from the i'm-sorry-dave,-that-volvo-looked-at-me-funny dept.


An anonymous reader writes "Patrick Lin of California Polytechnic State University explores one of the ethical problems autonomous car developers are going to have to solve: crash prioritization. He posits this scenario: suppose an autonomous car determines a crash is unavoidable, but has the option of swerving right into a small car with few safety features or swerving left into a heavier car that's more structurally sound. Do the people programming the car have it intentionally crash into the vehicle less likely to crumple? It might make more sense, and lead to fewer fatalities — but it sure wouldn't feel that way to the people in the car that got hit. He says, '[W]hile human drivers may be forgiven for making a poor split-second reaction – for instance, crashing into a Pinto that's prone to explode, instead of a more stable object – robot cars won't enjoy that freedom. Programmers have all the time in the world to get it right. It's the difference between premeditated murder and involuntary manslaughter.' We could somewhat randomize outcomes, but that would generate just as much trouble. Lin adds, 'The larger challenge, though, isn't thinking through ethical dilemmas. It's also about setting accurate expectations with users and the general public who might find themselves surprised in bad ways by autonomous cars. Whatever answer to an ethical dilemma the car industry might lean towards will not be satisfying to everyone.'"


Pinto? (0)

Anonymous Coward | about 4 months ago | (#46937435)

I'd hit it!

Undefined (1)

Anonymous Coward | about 4 months ago | (#46937437)

Just leave that kind of behavior undefined. That kind of rapid crash prioritization is likely to be too complicated to implement anyway. Focus on good accident avoidance in general.

Re:Undefined (5, Insightful)

Wootery (1087023) | about 4 months ago | (#46937621)

Congratulations, you've given me a great go-to example of a non-answer.

Just leave that kind of behavior undefined.

Programs are generally deterministic beasts, by nature. What are you trying to say?

Re: Undefined (2)

carlos92 (682924) | about 4 months ago | (#46937699)

I think he means: implement the simplest thing, let the outcome be determined by physics, and invest the effort into better crash avoidance, friendlier UIs, or more detailed logging, so that this question can be answered with data from real crashes.

Re:Undefined (2, Funny)

Anonymous Coward | about 4 months ago | (#46937777)

if (aboutToCrash) {
    log.error("Oh shit!");
}

Re:Undefined (2)

michelcolman (1208008) | about 4 months ago | (#46937623)

I think he's talking about technology 50 years from now. An autonomous Google Prius is unlikely to make that kind of decision any time soon.

Re:Undefined (1)

Anonymous Coward | about 4 months ago | (#46937717)

I think he's talking about technology 50 years from now. An autonomous Google Prius is unlikely to make that kind of decision any time soon.

Are you saying that the Google car doesn't have an algorithm for avoiding an obstacle? It surely has algorithms to slow down or brake if something is in its path.
If it tries to avoid an obstacle rather than braking, it has to pass on either the right or left side of it. The case we are talking about is when the car has determined that it doesn't have time to brake, opts to swerve, and then finds that neither side is better.
Considering the article two days ago claimed that they were already tracking objects around the vehicle, it is pretty safe to assume that those algorithms are already in place. What they probably don't do is rank how bad a collision with each of those objects would be, since that situation should be avoided anyway. (When you get into a situation where you don't have time to brake and have to choose what to crash into, you have already failed; autonomous cars shouldn't take risks like that.)
The simple solution would be to say that if avoidance is impossible, it is better to crash into the object directly in front, since that is more predictable for surrounding human drivers.
A less simple solution would be to collide with the object whose direction of travel is most similar to yours, to cause as little damage as possible.

Re:Undefined (1)

michelcolman (1208008) | about 4 months ago | (#46937761)

The Google car can certainly avoid obstacles, I was just saying it's unlikely to make sophisticated decisions about which "obstacle" (including humans) to prefer.

Spock got it right... (1)

Anonymous Coward | about 4 months ago | (#46937441)

"The needs of the many, outweight the needs of the few". Computer algorithms should try to minimize casualties.

Re:Spock got it right... (4, Interesting)

michelcolman (1208008) | about 4 months ago | (#46937615)

What if one car has two guys with multiple convictions for armed robbery and the other has a working dad with a family and three kids at home? OK, the algorithm would have to be pretty sophisticated to determine that, but who knows...

Or something slightly more realistic: a car with a couple of 80-year-olds versus a 25-year-old mom of three? Should the car kill the mom rather than the couple that will be dead in less than 10 years? One death is worse than two, no matter what?

Or yet another one: what if two people cross the street without looking, and the car swerves off the road to avoid them, instead killing one person who was walking on the pavement, not doing anything wrong? One casualty is better than two, right?

Those are just questions, mind you. They only show how "minimize casualties" is not always so clear cut.

Re:Spock got it right... (0)

Anonymous Coward | about 4 months ago | (#46937785)

"Minimize casualties in the statistical sense", then. Next.

Re:Spock got it right... (2, Insightful)

Wootery (1087023) | about 4 months ago | (#46937643)

So if me and a few of my friends jump out in front of your car, the car should do everything in its power to avoid hitting us, right? Including driving off a cliff-face?

A car which can be persuaded to deliberately kill its passengers... that might be a problem.

A bunch of nuns? (4, Interesting)

Bongo (13261) | about 4 months ago | (#46937445)

I'm reminded of Michael Sandel's televised series on ethics.

If you could stop a runaway train from going over a ravine by pulling a lever, thus saving 300 people, but the lever sent the train down a different track on which 3 children were playing, what do you do?

Somehow, involving innocents seems to change the ethical choices. You're no longer just saving the most lives, but actively choosing to kill innocent bystanders.

Re:A bunch of nuns? (5, Insightful)

Anonymous Coward | about 4 months ago | (#46937495)

The kids are playing on a fucking railtrack, for fuck's sake. If they can't get out of the way in time, then they deserve what they get.

Bad example (3, Insightful)

Anonymous Coward | about 4 months ago | (#46937501)

Why do people always give such easy examples when asking this question?

Of course you save the 300 people! There are probably a lot more than 3 innocent people in that group of 300... You'd have to be very stupid to save 3 over 300, or too lazy to think about it and just make a random decision.

The question should be more like this:
On one track there are 10 escaped criminals, and on the other is your wife with your son and another child on the way.

That's a decision you might have to think about, but most people would easily save their own wife.
In my opinion this shows most people are not ethical at all. So when someone asks you this question, they pose it extremely in favor of sacrificing the innocent, to make certain people will make the 'ethical' decision.

Re:Bad example (0)

Anonymous Coward | about 4 months ago | (#46937537)

Or just 10 random strangers, to increase the ethical ambiguity.
If you want to make the decision extra traumatic, the proportion of adults and children in both groups should be similar.

Re:Bad example (0)

Anonymous Coward | about 4 months ago | (#46937597)

It is not a bad example. The dilemma isn't about how to save the most innocent people, or the most people at all. It is about deliberately doing something morally wrong (killing some people, whoever they are) "for a good cause" versus letting something bad happen through inaction. Your proposed formulation would be about harming someone you know versus harming anonymous individuals. While still interesting, it isn't making the same point.

Making the "obvious" choice of saving the 300, as you say, means you become a killer. And you killed innocent people, not monstruous nazi degenerates or whatever. And you killed them deliberately. On paper, the choice may be easy (those are just numbers, and math tell us that 300 > 3). But in real life, with real human lives at stakes? I do hope most people would at least hesitate before killing others, even for a good cause.

Re:Bad example (4, Insightful)

Ost99 (101831) | about 4 months ago | (#46937689)

Killing someone by inaction is also murder.
The question then becomes, kill 3 or kill 300.

Re:Bad example (1)

Anonymous Coward | about 4 months ago | (#46937743)

Then, according to that line of reasoning, being in position to be able to choose is murder.

Re:Bad example (0)

Anonymous Coward | about 4 months ago | (#46937827)

Then, according to that line of reasoning, being in position to be able to choose is murder.

Welcome to the real world, where decisions can have life-or-death consequences.

Re:Bad example (0)

Anonymous Coward | about 4 months ago | (#46937609)

Yeah right, why would it be more ethical to save the 10 criminals? Just because there are more of them? The next step, of course, is that those 10 criminals you saved could easily go on to kill more people, or worse, while on the run. But oh, that should not be considered, because they are somehow equal. Bullshit. They have proven incapable of handling life without hurting others, so they are not equal in this case. I'd save the family even if it wasn't my own.

Re:Bad example (1)

michelcolman (1208008) | about 4 months ago | (#46937661)

Why does that show most people are not ethical? It rather depends on what kind of ethics you choose; ethics is not an exact science and has evolved quite a bit over time, back and forth. I find it perfectly ethical to kill the escaped criminals rather than the mother with child and foetus.

Now if it was someone else's wife with son and child, exactly the same but just not your wife, it would start to become a bit more difficult.

Depends how much you hate your wife, I suppose...

Re:Bad example (2)

umghhh (965931) | about 4 months ago | (#46937765)

Yes, my thoughts exactly. The kiddies complicate the picture; if there were no kiddies there, the decision would possibly be easier than my (almost ex) wife would like it to be.

Re:A bunch of nuns? (1)

Anonymous Coward | about 4 months ago | (#46937527)

Clearly we should do this the way we do everything else in this country. Run a credit check on the owners of the two potential victims and avoid the one with the highest net worth... unless he's a minority.

Re:A bunch of nuns? (5, Interesting)

something_wicked_thi (918168) | about 4 months ago | (#46937543)

Actually, this raises a more interesting question (at least to me) which your little thought experiment approaches. What if my autonomous car decides that the action to take that is likely to cause the least harm is to kill the driver? For example, what if the car has the opportunity to swerve off the side of a mountain road and drop you 1000 feet onto some rocks to avoid a crash that would have killed far more people than simply you? Is my autonomous car required to act in my own best interest, or should it act in the best interests of everyone on the road?

Re:A bunch of nuns? (3, Insightful)

Anonymous Coward | about 4 months ago | (#46937591)

I'd never have a car that did that. Me and mine are number one priority. All other priorities are secondary.

Re:A bunch of nuns? (5, Insightful)

bickerdyke (670000) | about 4 months ago | (#46937599)

But what if the driver of the other car, who will survive because your car steered over the cliff, would become the father of the next Hitler?

A car will never have enough data to make a "right" decision in such a situation. Even the example from the intro is an invalid one, as for a morally sound decision you'd need to know how many passengers (and perhaps even WHICH passengers) are in those cars. Family of 5? Single guy with cancer anyway? And such an algorithm would mean assigning a value (monetary or any dimensionless number, no difference) to an individual human life. At that point you left the field of ethical behaviour quite a while ago.

Live with imperfect decisions, as you will never be able to make the perfect one. So just stick to the usual heuristics: if you can't avoid both obstacles, avoid the one that's closer. Even if you hit the other one, you'll have a split second longer to brake. THAT might make the difference between life and death.

Re:A bunch of nuns? (0)

Anonymous Coward | about 4 months ago | (#46937665)

Also, what if the decimal point is in the wrong place? It throws you off the cliff because it believes there's a 90% chance of killing the others in a crash, when actually it's just 9%.

Re:A bunch of nuns? (5, Funny)

perryizgr8 (1370173) | about 4 months ago | (#46937749)

post a bug report.

Re:A bunch of nuns? (0)

Anonymous Coward | about 4 months ago | (#46937633)

... Now I'm going to have to root my car so it doesn't toss me off of a cliff? I think I will opt to just continue to drive myself. Maybe I will be like one of those crazy NHL players who didn't wear helmets well into the 90's; Riding my 1975 motorcycle in the year 2055 surrounded by automated traffic.

Re:A bunch of nuns? (5, Interesting)

N1AK (864906) | about 4 months ago | (#46937637)

Now this question I like; it's far more nuanced than the original one. I know I would buy a car with a bias towards keeping me alive (not at any cost), and that bias would likely get even stronger if I had family members in the car! But how plausibly can a car judge whether keeping me and my 2-year-old alive is more or less important than the unknown occupants of another car?

Now a really difficult situation would be: what should the computer do if another car is going to crash, but your car could minimise loss of life by doing something that would harm or kill you? In this situation your car isn't the cause of the accident, nor perhaps would it even be involved. Should your car intervene, potentially killing you, for the good of society as a whole?

Re:A bunch of nuns? (2)

craznar (710808) | about 4 months ago | (#46937705)

I think self-preservation has got us a long way as a species; it is also the best mechanism currently available for keeping the roads as safe as they are. I don't see any reason to change this just because the 'gizmo' that does it changes.

Once you start having 20 cars each trying to work out what combination of movements results in the least casualties - they will all probably just stop and turn their engines off.

Re:A bunch of nuns? (2, Insightful)

CastrTroy (595695) | about 4 months ago | (#46937815)

This makes a lot of sense. If we wanted to maximize safety, we wouldn't all be driving around in vehicles that weigh a couple thousand pounds. That's a lot of energy to get rid of in a short time in the event of an accident. Cars make sense for long trips or when you have a lot of stuff to carry, but going back and forth to work could be done in much smaller and lighter vehicles. You could easily build an enclosed recumbent bike with a small engine that would get both amazing gas mileage and be safe if all the other vehicles on the road were similarly sized.

Re:A bunch of nuns? (1)

N1AK (864906) | about 4 months ago | (#46937613)

It makes them more nuanced but ultimately the majority of the change is irrational. In this situation you are present and faced with a choice, one choice kills 300 people and one kills 3 people. Some people see 'not doing anything' as not choosing, or somehow being different, but there's no reason for this to be true.

Clearly, in the situation posited, the best decision for a computer is to minimise harm. Most medical spending decisions are now made on the basis of the number of years of life saved, and I think that's an acceptable method here. My expectation is that clear-cut cases like this will be incredibly rare. The car likely doesn't know about the occupants of other vehicles, which likely has more effect on the situation than make and model. Additionally, a crash into one car could easily spiral, especially if that car crashes into traffic going in a different direction; the car is unlikely to know enough to reliably predict the danger beyond the initial impact and an extremely short window after that.

Require that all cars meet reasonably rigorous safety standards (both when being hit and when hitting other cars), and it becomes a largely theoretical problem.
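A minimal sketch of the years-of-life-saved scoring mentioned above; estimatedYearsRemaining is an assumed input that a real car would almost certainly not have:

class LifeYearsCost {
    // Expected life-years lost by one person in a given outcome:
    // probability of a fatality times a (hypothetical) estimate of years remaining.
    static double expectedLifeYearsLost(double fatalityProbability,
                                        double estimatedYearsRemaining) {
        return fatalityProbability * estimatedYearsRemaining;
    }

    public static void main(String[] args) {
        // Illustrative numbers only: an 80-year-old (~8 years remaining) at 50% risk
        // vs a 25-year-old (~55 years remaining) at 10% risk: 4.0 vs 5.5 years lost.
        System.out.println(expectedLifeYearsLost(0.5, 8));
        System.out.println(expectedLifeYearsLost(0.1, 55));
    }
}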

The 3rd option? (1)

AC-x (735297) | about 4 months ago | (#46937655)

In all these runaway-train, changing-the-points questions, my answer would always be to switch the points while the train is going over them, or leave the points half set, to derail the train and so have a good chance of saving everyone.

Re:A bunch of nuns? (0)

Anonymous Coward | about 4 months ago | (#46937683)

People can pretend they reacted by reflex, although in some cases choosing where to crash is not just an instinctive reaction but a deliberate choice to kill one person to save another.
Programs' decisions can be inspected after the fact. It's also a potential lawsuit minefield...

It's simple (4, Funny)

Required Snark (1702878) | about 4 months ago | (#46937451)

Just run the car into the nearest programmer.

Ask a Beautiful Mind (2)

ei4anb (625481) | about 4 months ago | (#46937453)

The decision should be based on the common good, and that is not always the worst outcome for the occupants. Remember that the CPUs in the other cars will also be evaluating the best strategy to take. http://en.wikipedia.org/wiki/N... [wikipedia.org]

Re:Ask a Beautiful Mind (1)

jrumney (197329) | about 4 months ago | (#46937505)

In that case, the answer is to trick the smaller car into swerving into the heavier car, or vice-versa, leaving space for your car to squeeze past.

Re:Ask a Beautiful Mind (0)

Anonymous Coward | about 4 months ago | (#46937581)

There's an interesting idea - if you networked each of the cars and they shared a common utility function (i.e. the thing that determines how "good" or "bad" each possible result was) they could reach a common consensus on what the "globally best" course of action was.

Or maybe they'd just deadlock.

Of course that all assumes the cars could control the outcome with 100% accuracy. Once you allow for imperfect control and imperfect information (just how slippery is that ice patch in front of me, and what's that going to do to my stopping distance?) it's a whole extra level of complexity.

Heck - once you allow imperfect information, how could you prove a collision was unavoidable in the first place?
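A minimal sketch of the shared-utility idea above, assuming perfect information and a deterministic tie-break (plan order), which is what would keep identical computations from deadlocking; everything here is invented for illustration:

import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleFunction;

class SharedUtilityConsensus {
    // Every networked car runs this same deterministic computation on the same
    // list of candidate joint plans; if inputs and the utility function agree,
    // the outputs agree too, with no further negotiation needed.
    static <P> P agree(List<P> jointPlans, ToDoubleFunction<P> sharedUtility) {
        return jointPlans.stream()
                .max(Comparator.comparingDouble(sharedUtility))
                .orElseThrow();
    }
}

With imperfect information the premise breaks down: cars with different sensor readings would score the same plan differently and the "consensus" silently diverges, which is the comment's point.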

Re:Ask a Beautiful Mind (1)

Anonymous Coward | about 4 months ago | (#46937713)

So once you add all this complexity, the dilemma is solved. Computers will crash just as real humans do :-) They will just manage to swear at GHz clock speeds and can replay their whole lives much faster before they die.

Probabilities, Summation (1)

gjh (231652) | about 4 months ago | (#46937457)

Options would have to be costed, and many things would feed into that. The problem, of course, is that for all of those costings, probability multiplied by survivability does not produce a linear outcome of quality-of-life value; you could assign a value of harm to each individual present, but you could not get a meaningful figure by summation.

Re:Probabilities, Summation (4, Interesting)

Zocalo (252965) | about 4 months ago | (#46937557)

This notion kind of cropped up in last weekend's episode of "Continuum", where a next of kin was informed of a crash by an actuary in terms of write-downs, compensation, loss adjustments and so on. Given the way insurers tend to operate, and how in bed they are with the legal profession, I can see that's exactly how this would go in the long run: an evaluation designed to produce the lowest price tag for those that ultimately get to pay the financial/legal bill. Looking at the problem another way, that means the structural integrity of the two cars in the example is probably moot; if the more structurally sound car is an expensive vehicle with a lone occupant holding a huge life insurance policy, and the other is a decrepit bus full of uninsured kids, then it's probably not a good day to be one of the kids... or the driver of the car that crashes into them.

Simple answer (5, Insightful)

Anonymous Coward | about 4 months ago | (#46937461)

Slam the brakes on and don't swerve either way. It's by no means optimal, but as far as lawsuits are concerned, it's much easier to defend "the car simply tried to stop as soon as possible" than "the car chose to hit you because it didn't want to hit someone else".

Re:Simple answer (0)

Anonymous Coward | about 4 months ago | (#46937477)

This is the correct answer.

Re: Simple answer (2)

savuporo (658486) | about 4 months ago | (#46937507)

Actually, I'm pretty sure the correct answer will be calculated and given to programmers by insurance companies.
They have a very well defined and characterized value of human life, at different stages of life, too. And for situations like these, the formulas will drive the decision. Hitting a Mercedes with a real estate agent in it will likely be costlier than bumping a Yaris off the road.

Re: Simple answer (1)

sonamchauhan (587356) | about 4 months ago | (#46937601)

Correct. Hippocrates said it best: "first, do no harm..."

Re: Simple answer (3, Informative)

Wootery (1087023) | about 4 months ago | (#46937657)

But this is a hopelessly inadequate theory of morality.

Inaction might be worse than action, even if action causes the death of someone who would not otherwise have died. See: the Trolley Problem [wikipedia.org].

Re:Simple answer (4, Funny)

JosKarith (757063) | about 4 months ago | (#46937531)

Ping both cars and head for the one with most insurance...

Re:Simple answer (4, Insightful)

Wootery (1087023) | about 4 months ago | (#46937663)

You joke, but, like the "hit the best-protected car" policy, it would serve to punish the most safety-conscious, whilst still making some sense on short-term utilitarian grounds.

Re:Simple answer (1)

gmhowell (26755) | about 4 months ago | (#46937533)

In addition, more traction will go towards braking, possibly lessening the force of impact.

Re:Simple answer (1)

UncleWilly (1128141) | about 4 months ago | (#46937767)

Another vote for this being the correct answer.

No need.. (0)

Anonymous Coward | about 4 months ago | (#46937465)

It's not like humans are an endangered species; you could argue that there are too many of us, and a step towards solving that would be to program these cars to randomly hit someone once every few weeks or so. This action can then be skipped if a naturally occurring accident happened during that interval. So there is no need to avoid accidents, just a way to meet the baseline fatality criteria.

Sue, sue, sue (1)

qbast (1265706) | about 4 months ago | (#46937483)

It really does not matter. The car manufacturer will get sued anyway by the family of whoever got hit.

Re:Sue, sue, sue (1)

IWannaBeAnAC (653701) | about 4 months ago | (#46937535)

So the solution that ends up getting implemented is that the autonomous car will take whatever course of action minimizes costs, which will typically be to crash into whichever car contains the occupants with the least total wealth.

Easy (-1)

Anonymous Coward | about 4 months ago | (#46937485)

It should poop out the asshole

Screw other people (5, Insightful)

Cyfun (667564) | about 4 months ago | (#46937487)

Let's be honest. The job of YOUR car is to keep YOU safe, so the smaller car is probably the better bet as it will have less inertia and cause you less harm. Sure, the most important law of robotics is to protect human life... but if it's going to prioritize, it should probably start with its owner.

Re:Screw other people (0)

Anonymous Coward | about 4 months ago | (#46937619)

My car could do a better job of keeping ME safe if it could soften up obstacles with rocket grenades before impact.

You'll still have problems getting it approved for road use.

Re:Screw other people (3, Insightful)

PhilHibbs (4537) | about 4 months ago | (#46937635)

Cars have to be designed with the interests of the road-using population in mind. If you want your car to disregard everyone else's interests in favour of your own, then you should not be allowed to use public roads as you are a dangerous sociopath.

Re:Screw other people (2)

lannocc (568669) | about 4 months ago | (#46937679)

Let's be honest. The job of YOUR car is to keep YOU safe...

And I foresee much competition on this level and a premium cost for the vehicle most likely to save its owner in a multi-party accident scenario.

Re:Screw other people (-1)

Anonymous Coward | about 4 months ago | (#46937701)

You must be a Republican. That is horrible. Saying you want cars to run down children to protect the rich white owners of such expensive devices is exactly what you people are all about. Your kind is disgusting.

Re:Screw other people (2)

umghhh (965931) | about 4 months ago | (#46937807)

Maybe they are, but it gets them further than you. It seems to me we have finally reached the situation where the rich not only have more power but can use this power in any way imaginable. In the long run this should lead to the elimination of the poor parts of the population, and the problem is solved. Oh wait.....

Re:Screw other people (5, Insightful)

HyperQuantum (1032422) | about 4 months ago | (#46937753)

Screw other people

And this is what is wrong with the world.

Let's turn the situation around: suppose you and your children are walking on the street. Would you still prefer the autonomous car to protect its single driver at all costs and kill you and your children instead? And then imagine how many autonomous cars will be on the road in the future, all with that same logic built in...

That's easy. Let the market decide! (1)

Emmi59 (971727) | about 4 months ago | (#46937491)

Every owner of a car should have the possibility to pay for "crash avoidance insurance". That is, the more he pays, the higher the chance that an autonomous car will avoid crashing into him. The program then only has to compare 2 numbers (the amount of money each car owner spent), et voila: hit that poor driver's cheap Suzuki and avoid the rich man's costly BMW...

Car driver ethics: What do I hit? (2)

gnasher719 (869701) | about 4 months ago | (#46937511)

The whole assumption that we should be discussing this for autonomous cars is a bit bizarre. There are millions and millions of cars driven by people, so we should discuss it for them first.

And the article is a bit stupid because it forgets a few things: One, a crash with a bigger car is worse _for me_. Second, it's unlikely that two other drivers made mistakes simultaneously, so it would make a lot more sense to crash into the car whose driver caused the problem.

Re:Car driver ethics: What do I hit? (1)

TheRaven64 (641858) | about 4 months ago | (#46937551)

The whole assumption that we should be discussing this for autonomous cars is a bit bizarre. There are millions and millions of cars driven by people, so we should discuss it for them first.

As the summary says, you typically don't have time as a human to make a conscious rational decision about what to hit in a collision. In contrast, an autonomous car can do a lot of processing in a tenth of a second.

And the article is a bit stupid because it forgets a few things: One, a crash with a bigger car is worse _for me_

Not necessarily. A larger car can have bigger crumple zones. If its crumple zones are twice the size of the small car, then the acceleration that you'll experience in the collision is a lot less and so there's a greater chance everyone will survive (assuming that the relative impact speeds will be the same).

Second, it's unlikely that two other drivers made mistakes simultaneously, so it would make a lot more sense to crash into the car whose driver caused the problem

That contradicts your first point. Are you using your car as a weapon to punish the guilty driver, or are you using it as a means of ensuring your survival? It's quite likely that it would be better to swerve into a car travelling in the same direction as you that hasn't made any errors to avoid hitting an oncoming vehicle that is doing something stupid (like being on the wrong side of the road). The relative velocity of the impact will be considerably less.
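A back-of-the-envelope version of the crumple-zone point above, assuming constant deceleration over a crush distance d from impact speed v (numbers invented for illustration):

    a_{\mathrm{avg}} = \frac{v^2}{2d}

So doubling the crush distance halves the average deceleration: at v = 15 m/s, d = 0.5 m gives about 225 m/s^2 (roughly 23g), while d = 1.0 m gives about 11g.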

Re:Car driver ethics: What do I hit? (2)

scdeimos (632778) | about 4 months ago | (#46937695)

Not necessarily. A larger car can have bigger crumple zones. If its crumple zones are twice the size of the small car, then the acceleration that you'll experience in the collision is a lot less and so there's a greater chance everyone will survive (assuming that the relative impact speeds will be the same).

Don't let facts get in the way of a good story. :) While survivability is about equal for SUV vs SUV and car vs car impacts, studies have shown that in SUV vs car impacts the passengers of the car are 7.6 times more likely to die.

Armed with this information, an autonomous vehicle trying to protect everybody should: (a) choose the impact with the least inertia for all concerned (i.e. go for the car travelling in almost the same direction as the autonomous vehicle, as opposed to a car travelling in the opposite direction) and (b) for a choice of head-on impacts, prioritize impacting the car with a mass closest to its own. An autonomous vehicle biased towards protecting its own driver should target the smaller vehicle... but this will inevitably lead to "I've got the biggest autonomous vehicle" wars, with people trying to protect themselves from other vehicles, as we've seen happen with SUVs.

REF: http://www.consumerreports.org... [consumerreports.org]
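A minimal sketch of heuristic (b) above, picking the head-on option whose mass is closest to our own; the candidate masses are an assumed input (e.g. from hypothetical car-to-car messages):

import java.util.Comparator;
import java.util.List;

class MassMatching {
    // Among candidate impact partners, choose the mass closest to ours, since
    // a badly mismatched pairing shifts most of the delta-v onto the lighter car.
    static double pickTargetMass(double ownMassKg, List<Double> candidateMassesKg) {
        return candidateMassesKg.stream()
                .min(Comparator.comparingDouble(m -> Math.abs(m - ownMassKg)))
                .orElseThrow();
    }
}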

Re:Car driver ethics: What do I hit? (1)

Balthisar (649688) | about 4 months ago | (#46937561)

> One, a crash with a bigger car is worse _for me_.

> One, a crash with a bigger car is worse _for me_.

Why do you think that? Whether your car hits a stationary brick wall, a parked Suburban, a tiny little Aveo, or an infinitely thin, infinitely strong force field, the force of the impact is the same for your car. Something might be said for variation due to the specific dynamics of the crash, such as whether your little car goes under the SUV's front bumper, but the mass of the object you're striking isn't relevant beyond the point where your car can no longer move the object you're crashing into.

Re:Car driver ethics: What do I hit? (2)

Ost99 (101831) | about 4 months ago | (#46937731)

Your physics makes no sense.
The energy of the car you crash into is what gets transferred to you. The factors involved are the speed and mass of the car that hits you, not just your own car's.

Crashing into an identical car is the same as crashing into a fixed barrier. Crashing into an SUV with twice your car's mass is much worse, as the other car will not stop. If the difference in mass is large enough, your car will be crushed without affecting the speed of the other vehicle in any significant way (think train).
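The momentum arithmetic behind this, for a head-on, perfectly inelastic collision at closing speed v_rel (a standard textbook result, added here for illustration):

    \Delta v_1 = \frac{m_2}{m_1 + m_2}\, v_{\mathrm{rel}}

With an identical car (m_2 = m_1) your speed change is v_rel / 2, exactly what a fixed barrier at your own speed gives; as m_2 grows, your speed change approaches the full closing speed, which is the "think train" limit.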

Re:Car driver ethics: What do I hit? (1)

PhilHibbs (4537) | about 4 months ago | (#46937567)

New technologies will always be in the minority while they are still new, but that's the right time to start trying to get them right. We, as technologists, can influence and inform the creation of autonomous driving algorithms (we aren't all kids in a basement, some of us are grown-ups with important jobs). We cannot change the way that people drive, that's one for legislators and psychologists.

And, the algorithm isn't written for you. It's written for the whole of the road-using population. No-one's going to write you an algorithm that makes your car aim for a group of fat people for a soft landing, sorry. And just how do you determine the person at fault in a fraction-of-a-second algorithm?

Easy: Hit whatever allows the most braking first (0)

Anonymous Coward | about 4 months ago | (#46937521)

This one is simple. If a crash is unavoidable, you avoid it for as long as possible. That allows maximum braking first, minimizing damage. And also, more time to get lucky: whatever outside the car computer's control caused the unlikely "must crash" scenario might also undo the damage (i.e. the oncoming car might run off the road).

Such a choice is almost never truly symmetric; with the choice of two cars to crash into, the distances and speeds will never be exactly the same.

Also, there are some easier choices. Prefer a parked car over a moving one; it is less likely to have people inside. Prefer going off-road over a head-on crash, unless on a bridge...

Baby on board (1)

loic_2003 (707722) | about 4 months ago | (#46937525)

Maybe those annoying 'Baby on board' stickers will finally have a use?!

Near term options vs long term (0)

Anonymous Coward | about 4 months ago | (#46937541)

Near term, the car is not going to have any of this information. It will be "There is an object to hit" or "There is an object to hit", as it will have no real concept of what the two actually are any time soon.

Long term, hopefully all the cars are automated, so your car will start screaming at the other car "fuck, I'm going to hit you!".

It is still unlikely to have any idea of the human cost of each crash (does it hit the car with one person in it, or the car that has been hacked to beacon out that it is chock-a-block full of babies?).

Plan C: (2)

ColaMan (37550) | about 4 months ago | (#46937545)

It communicates with both cars and tells them to execute emergency maneuvers to make enough room. Failing that, all three calculate a vector that imparts minimal g-forces to all occupants.

Too complex (1)

Viol8 (599362) | about 4 months ago | (#46937639)

On a highway those 2 other cars are probably in turn surrounded by further cars, and so on. You'd end up with a cascading reaction that may cause more harm than good, and it may take so long to work out the best-case scenario that by the time the computers have agreed on the best outcome, it's too late to do anything about it, because physics gets in the way. We're talking fractions of a second here.

Re:Too complex (1)

emilv (847905) | about 4 months ago | (#46937709)

With a good emergency broadcast channel you could get everyone around you involved in milliseconds. We should try to make the algorithms work with local data only (plus whatever info is in the emergency broadcast packet), using low-runtime algorithms. We might not be able to find the optimal solution, but maybe we can find one that indeed minimizes accidents, inside that fraction of a second. Maybe just braking and steering away from the accident works in many scenarios? If everyone steers away at the same time, they leave room for each other without needing further coordination.

The fun thing about automated cars is that even the worst-case scenarios should be at most as dangerous as if only humans were involved. So we can only make it better. Anything is more ethical than letting humans decide in realtime, imo.
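A minimal sketch of that local rule, assuming a hypothetical broadcast packet that carries the hazard's bearing relative to each receiver; the field names and limits are invented:

class EmergencyReflex {
    static final double FULL_BRAKE = 1.0;     // normalized brake command
    static final double MAX_STEER_DEG = 30.0; // assumed steering-angle limit

    // hazardBearingDeg: hazard direction relative to our heading, positive = right
    // (assumed field of the hypothetical broadcast packet).
    // Returns {brake, steeringAngleDeg}: brake hard and steer away from the hazard.
    static double[] react(double hazardBearingDeg) {
        double steer = hazardBearingDeg >= 0 ? -MAX_STEER_DEG : MAX_STEER_DEG;
        return new double[] { FULL_BRAKE, steer };
    }
}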

The onus is on the driver (1)

Anonymous Coward | about 4 months ago | (#46937553)

That's why, when current autonomous vehicles detect a dangerous situation they cannot handle, they throw an alarm tone and disengage autonomous control; the human in the driver's seat is expected to take over. Too bad if they have less than half a second to figure a way out of it.

But is it hopeless? (0)

Anonymous Coward | about 4 months ago | (#46937555)

Whatever answer to an ethical dilemma the car industry might lean towards will not be satisfying to everyone.

More importantly, is there a possible consensus that can be reached? Or is the author claiming all is hopeless on that front and we should just worry about 'optics'?

Insurance Decides (0)

Anonymous Coward | about 4 months ago | (#46937569)

This is easy. The cars will contact their insurance companies -- they will decide who is involved and who is not "a priori". No insurance means you are not part of the decision.

Easy (0)

Anonymous Coward | about 4 months ago | (#46937577)

Go for the bicyclist. Killing someone is a one-time cost, much cheaper than paying decades of medical bills for someone who is only maimed, and the property damage is minimized.

After all, the car manufacturer is likely to get stuck with the consequential costs of the decision-making here, and looking at current numbers of traffic-related deaths, we are talking about a death toll al-Qaeda can only dream of.

abc (0)

Anonymous Coward | about 4 months ago | (#46937589)

just a joke asd asd

Re:abc (0)

hillstonetest (3644519) | about 4 months ago | (#46937605)

asdf asdf a

NSA anybody? (0)

Anonymous Coward | about 4 months ago | (#46937607)

How many people of the wrong political persuasion or skin color will get killed as a consequence of "device malfunction"?

A nation-wide network of cheap killer drones that people expect to kill occasionally anyway, paid for by the people themselves, and with the blame pointing anywhere else?

What's not to love for psychopathic killers who already got the executive right to kill Americans in America, and who managed to get permission to torture people to death for fun without good reason?

EJECTOR SEAT (0)

Anonymous Coward | about 4 months ago | (#46937617)

GO!

Adding up braking power. (1)

limaCAT76 (2769551) | about 4 months ago | (#46937627)

Physics lovers and automotive geeks, answer me: if the car's CPU thinks it is facing an unavoidable and possibly lethal crash, can't it just engage an additional system that adds braking power?

Like an emergency system of additional feet, something like a jet's landing gear, ending not in a pair of tires but in a brake pad. I don't know if that would have side effects, like requiring parts to be replaced or putting odd strain on the engine or transmission, but that's still better than swerving into another car.

Re:Adding up braking power. (2)

Bazman (4849) | about 4 months ago | (#46937669)

Braking power isn't infinite. Wheel braking will eventually skid the wheels (which is why we have anti-lock brakes now, so you can still steer while braking). Are you thinking cars should be equipped with dragster-style parachutes, or retro-rockets? Or just a bloody great anchor that the computer can deploy to tear up the road?

Even when the car has deployed the parachute and the anchor, and the retro-rocket is still firing, the computer might still not be able to avoid going into that tree that's just fallen over. Plus all those negative G-forces are going to smear the driver's eyeballs over the inside of the windscreen.
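For scale, any scheme that decelerates the car through friction with the road is bounded by the friction-limited stopping distance (a standard result, not from the comment):

    d = \frac{v^2}{2 \mu g}

At 30 m/s (108 km/h) with a good dry-tarmac mu of 0.9, that's 900 / (2 x 0.9 x 9.81), about 51 m, parachutes and retro-rockets aside.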

None of the above (1)

kbg (241421) | about 4 months ago | (#46937631)

The correct answer is not to swerve either left or right, but to apply maximum braking while continuing straight ahead.

Easy solution (0)

Anonymous Coward | about 4 months ago | (#46937645)

Default to the following priority (to avoid damage to third parties):
1 Walls and other inanimate objects
2 Structurally sound vehicles
3 Smaller cars

Then allow the owners to change the settings and deal with the ethical question themselves (a minimal sketch of such a configurable ranking follows).
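The sketch, with invented target categories; it assumes every available option appears somewhere in the owner's priority list:

import java.util.Comparator;
import java.util.List;

class CrashPriority {
    enum Target { INANIMATE_OBJECT, STRUCTURALLY_SOUND_VEHICLE, SMALL_CAR }

    // Lower index in ownerPriority = preferred impact target. The default order
    // mirrors the list above; owners could reorder it in their settings.
    static Target choose(List<Target> available, List<Target> ownerPriority) {
        return available.stream()
                .min(Comparator.comparingInt(ownerPriority::indexOf))
                .orElseThrow();
    }
}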

Better to act predictably? (2)

AC-x (735297) | about 4 months ago | (#46937647)

Until 100% of cars on the road are self-driving, it would seem to me that the best response is to simply slam the brakes without changing course. Purposely swerving into another car could cause the human drivers (even in cars not involved in the crash) to also swerve and possibly cause even more collisions.

rarely is an accident an accident. (4, Insightful)

blackest_k (761565) | about 4 months ago | (#46937653)

There are very few "accidents", just people taking stupid risks. Maintain a safe distance, i.e. enough maneuvering room so you don't join an accident. Don't overtake when you can't see the end of the maneuver, e.g. going uphill or on a bend. Stop when necessary. Proceed with caution; sometimes you might want to turn off the radio, open a window and listen. Use your indicators. Drive within your lights, or as conditions allow. Don't be an asshole.

Sometimes you will come across assholes on the road. It is best to give them a wide berth, even stopping and pulling over to get them out of your way. But don't dawdle: if you want or need to drive slowly, make opportunities for people to overtake.

Bad planning and poor judgement are the most common causes of accidents, which is why schools have low speed limits around them, as kids can be stupid around roads.

Be helpful. I remember one time I was filtering down the centre line on a motorbike (dispatch rider) past stationary traffic, and a taxi driver stuck his hand out. I braked, and a pushchair popped out from between the stationary traffic. Without that warning I could have killed a toddler; as it was, no harm was done, and I don't think the mother was ever aware of the danger.

One thing about London traffic: professional drivers work the streets most of the day and they are very road-aware. The most dangerous times are when schools start and when schools let out, followed by the rush hours, when the non-professionals are on the road.

Re:rarely is an accident an accident. (0, Insightful)

Anonymous Coward | about 4 months ago | (#46937763)

Be helpful. I remember one time I was filtering down the centre line on a motorbike (dispatch rider) past stationary traffic, and a taxi driver stuck his hand out. I braked, and a pushchair popped out from between the stationary traffic. Without that warning I could have killed a toddler; as it was, no harm was done, and I don't think the mother was ever aware of the danger.

And this is why lane straddling is illegal in most of the places I have driven! Traffic gave way to somebody, and you, being the pushy arrogant asshole and overtaking illegally in exactly the way you advocate against, were nearly the cause of a fatality on the road.

*rolls eyes* If everyone else follows your advice, you won't have to. Is that what you are thinking?

Time? (4, Insightful)

Bazman (4849) | about 4 months ago | (#46937673)

"Programmers have all the time in the world to get it right". HAHAHAHAHAHA.

No, we have deadlines like everyone else. And even then we only have all the time in the CPU. Yeah, we can add more CPUs to the system, but that makes it more complex, and that makes it harder to hit that deadline. What kind of idiot made that statement?

You can never satisfy everyone? (2)

gweihir (88907) | about 4 months ago | (#46937675)

And trying to usually leads to far worse solutions than are otherwise possible. This is engineering, not politics. In engineering, you pick the best solution; you do not look for some bad compromise.

In case you are alive after the crash (0)

Anonymous Coward | about 4 months ago | (#46937687)

At least over here (Finland) there is a law requiring you to maintain control over your vehicle in all conditions (so no blaming poor weather etc. for your crash).
An autonomous car should follow that one too, and NOT find itself in a situation where it has to choose between two bad scenarios.

If there's a crash on the interstate here involving several cars, then each driver is liable for the car they smashed into. It doesn't matter if you were standing still and a car from behind pushed you into the one in front. The push will probably be taken into account in case you killed someone in that car, but it's still your fault.

Similarly: if your autonomous car is parked with you inside, a semi comes hurtling towards you, and your car saves you by pulling away fast while running over three persons (it does not matter if they are nuns, kids, illegal immigrants or murderers), you will be doing time for killing three people.

Best outcome (1)

UltraBadger (2580929) | about 4 months ago | (#46937703)

Computers can calculate stuff fast. Your car could simulate thousands of potential outcomes in a split second and pick the one in which the least damage is caused, e.g. glancing off another vehicle or other object to lose momentum without harming the occupants. These vehicles should also communicate with each other, sharing factors such as mass, current speed, whether they too are braking, and so on.
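A minimal sketch of that idea as Monte Carlo rollouts; the caller-supplied simulateDamage model stands in for the real physics and is invented here:

import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleBiFunction;

class OutcomeSampler {
    // For each candidate maneuver, average damage over many randomized rollouts
    // and return the maneuver with the lowest mean. A fixed seed keeps the choice
    // deterministic and auditable after a crash.
    static <M> M leastExpectedDamage(List<M> maneuvers, int rollouts,
                                     ToDoubleBiFunction<M, Random> simulateDamage) {
        Random rng = new Random(42);
        M best = null;
        double bestMean = Double.POSITIVE_INFINITY;
        for (M m : maneuvers) {
            double total = 0;
            for (int i = 0; i < rollouts; i++) {
                total += simulateDamage.applyAsDouble(m, rng);
            }
            double mean = total / rollouts;
            if (mean < bestMean) {
                bestMean = mean;
                best = m;
            }
        }
        return best;
    }
}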

Easy (1)

kruach aum (1934852) | about 4 months ago | (#46937739)

Have the computer calculate an equilibrium that prioritizes minimizing damage to the driver, then minimizing damage to the environment, and then minimizing costs. Ethics became useless the moment game theory came about.

Re:Easy (1)

kruach aum (1934852) | about 4 months ago | (#46937793)

Even better: allow the driver to choose what should be prioritized.

Where it should crash. (0)

Anonymous Coward | about 4 months ago | (#46937783)

1. Stay in the slowest lane.
2. Crash into a tree.
3. If the alternative is a cyclist, pedestrian or motorcyclist, crash into a car.
4. Do not crash into the side of another vehicle.
5. Head-on or rear collisions are always preferable if everything else is unavoidable.

Nonsense (2)

scarboni888 (1122993) | about 4 months ago | (#46937801)

There's no such thing as an intentional accident. An autonomous program that is paying attention will not get into such a situation, and therefore the manufacturers will always be responsible for failure.

already solved (1)

Charliemopps (1157495) | about 4 months ago | (#46937819)

So you don't swerve; you lock up the brakes and accept your fate.
