
How Do We Program Moral Machines?

Soulskill posted about a year and a half ago | from the needs-of-the-many-outweigh-the-needs-of-the-few dept.

AI 604

nicholast writes "If your driverless car is about to crash into a bus, should it veer off a bridge? NYU Prof. Gary Marcus has a good essay about the need to program ethics and morality into our future machines. Quoting: 'Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.'"


Why I doubt driverless cars will ever happen (5, Insightful)

crazyjj (2598719) | about a year and a half ago | (#42108161)

I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human). And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars. But above these two problems, far and away above *all* problems with driverless cars is the real reason I think we'll never see anything more than driver *assisting* cars on the road: legal liability.

To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers? How much would you have to add onto the sticker price to cover the costs of going to court every single time that particular car was involved in an accident? Of defending the efficacy of your driverless system against other manufacturers' systems (and against defect, and against the word of the driver himself that he was using the system properly) in one liability case after another?

According to Forbes [forbes.com], the average driver is involved in an accident every 18 years. Let's suppose (and I'm sure the statisticians would object to this supposition) that this means the average CAR is also involved in a wreck every 18 years. Since the average age of a car is now about 11 years [usatoday.com], it's not unreasonable to assume that a little less than half of all cars on the road will be involved in at least one accident in their functional lifetimes. And even with the added safety of driverless systems, the first model available will still have to contend with a road mostly filled with regular, non-driverless-system cars. So let's say that a good 25% of those first models will probably end up in an accident at some point, which will make a very tempting target for lawyers going for the deep pockets of their manufacturers.

Again, what car company wouldn't take that into account when asking themselves if they want to be a pioneer in this field?

Re:Why I doubt driverless cars will ever happen (2)

DavidClarkeHR (2769805) | about a year and a half ago | (#42108231)

To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers?

... no one. But you'll get plenty who charge mandatory tune-ups to ensure compliance. The question will be "which company DOESN'T charge a fee for a mandatory yearly check-up"?

Re:Why I doubt driverless cars will ever happen (5, Insightful)

CastrTroy (595695) | about a year and a half ago | (#42108395)

This is my exact reasoning why flying cars will never take off (pardon the pun). People keep their cars in terrible condition. If your car has an engine failure, worst case scenario, you pull over to the side of the road, or end up blocking traffic. In a flying vehicle, if your engine dies, it's very possible that you will die too. And if you are above a city, it's not impossible to imagine crashing into an innocent bystander.

I imagine the same will be true for self-driving cars. It will never happen, because if the car is getting bad information from its sensors, then crazy things can happen. People can't be bothered to clean more than 2 square inches from their windshield in the winter. Do you really think they are going to go around cleaning the 10 different sensors of ice and snow every winter morning? Sure, the car could refuse to operate if the sensors are blocked, but then I guess people would just not want to buy the car, or complain to the dealer about it.

Re:Why I doubt driverless cars will ever happen (5, Insightful)

Trepidity (597) | about a year and a half ago | (#42108323)

What they're talking about here, though, isn't really programming morality into machines in some kind of sentient, Isaac-Asimov sense, but just programming decision policies into machines, which have ethical implications. The ethical questions come at the programming stage, when deciding what policies the automatic car should follow in various situations.

Re:Why I doubt driverless cars will ever happen (1)

crazyjj (2598719) | about a year and a half ago | (#42108391)

And those ethical decisions will come with even MORE legal liabilities. Even the idea would give any legal department nightmares. They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers? They would get sued out of existence the first time that feature had to be used.

Re:Why I doubt driverless cars will ever happen (4, Interesting)

SirGarlon (845873) | about a year and a half ago | (#42108431)

I think your statistics on accidents are informative but you're missing an important point. With automated cars, we expect accident rates to go down significantly (so saith the summary). So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume. (The manufacturer does not care about accidents where the machine is not at fault, beyond complying with crash-safety requirements.)

Re:Why I doubt driverless cars will ever happen (4, Interesting)

crazyjj (2598719) | about a year and a half ago | (#42108537)

So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume.

Great. Now all you have to do is prove your system wasn't at fault in a court of law--against the sweet old lady who's suing, with the driver testifying that it was your system and not him that caused the accident, and a jury that hates big corporations. And you have to do it over and over again, in a constant barrage of lawsuits--almost one for every accident one of your cars ever gets in.

Even if you won every single time, can you imagine the legal costs?

Re:Why I doubt driverless cars will ever happen (5, Insightful)

Above (100351) | about a year and a half ago | (#42108639)

Actually, I think you're both missing the biggest issue by focusing on true accidents. I think the OP's point is legitimate, even in the face of your assertion that rates go down. Companies are still taking on the risk, as they are now the "driver". While the liability in these situations is large, there is a situation that is much, much larger.

What happens when there is a bug in the system? Think the liability is bad when one car has a short circuit and veers head on into another? Imagine if there is a small defect. There are plenty of examples, like the Mariner 1 [wikipedia.org] crash, or the AT&T System Wide Crash [phworld.org] in 1990. We've seen the lengths to which companies will go to track down potentially common issues, like the Jeep Cherokee sudden acceleration or the Toyota sudden acceleration issues, because they have the potential to affect all cars. But let's imagine a future where all cars are driverless, and the accident rate is 1/100th of what it is now.

What happens when there is a Y2K-style date bug? When some sensor fails if the temperature drops below a particular point? When a semi-colon is forgotten in the code, and the radio broadcast that sends out notification of an accident causes thousands of cars to execute the same re-route routine with the messed-up code all at the same time?

There is the very real potential for thousands, or even millions, of cars to all crash _simultaneously_. Imagine everyone on the freeway simply veering left all of a sudden. That should be the manufacturers' largest fear. Crashes one at a time can be litigated and explained away, and the business can go on. The first car company that crashes a few thousand cars all at the same time in response to some input will be out of business in a New York minute.

Re:Why I doubt driverless cars will ever happen (0)

Anonymous Coward | about a year and a half ago | (#42108485)

I maintain that all of the things you mentioned can be changed and/or fixed. Now fuck off, luddite.

Re:Why I doubt driverless cars will ever happen (1)

crazyjj (2598719) | about a year and a half ago | (#42108627)

If you can find a way to fix the legal system, I bow before you AC. ;-)

Re:Why I doubt driverless cars will ever happen (1)

The MAZZTer (911996) | about a year and a half ago | (#42108503)

There's sort of a flaw in your reasoning... the accident rate you cite is with HUMAN drivers. Driverless cars would naturally change it (ideally, lower it). And assuming this, chances are accidents involving driverless cars would mostly occur with human-driven cars and be the human's fault, so no liability there.

However, I suspect that at least initially the software/hardware to enable driverless control of cars would be provided by companies other than the manufacturer, so the manufacturer would not be held liable. They would probably have the results of their own tests of the system to show to the court, to say they did their own QA on it and found it to be as safe as they could expect. It would probably be closed source, so what more could they do? The software devs themselves may not be blamed either; it depends on the nature of the crash, why it happened, the bug in the code that triggered it, whether or not that bug would have been reasonable to locate and fix via a reasonable QA process, and, if the bug was already fixed in newer firmware, why the vehicle wasn't patched (which could be found to be the owner's fault), etc.

Re:Why I doubt driverless cars will ever happen (3, Interesting)

Rich0 (548339) | about a year and a half ago | (#42108507)

Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses. There is no reason that somebody should be punished for making a car 10X safer than any car on the road today.

As far as programming morality - I think that will be the easy part. The real issue is defining it in the first place. Once you define it, getting a computer to behave morally is likely to be far EASIER than getting a human to do so, since a computer need not have self-interest in the decision making. You'd be hard pressed to find people who would swerve off a bridge to avoid a crowd of pedestrians, but a computer would make that decision without breaking a sweat if that were how it were designed. Computers commit suicide every day - how many smart bombs does the US drop in a year?

But I agree, the current legal structure will be a real impediment. It will take leadership from the legislature to fix that.

Re:Why I doubt driverless cars will ever happen (1)

crazyjj (2598719) | about a year and a half ago | (#42108591)

Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses.

Yes, that's a possibility. Blanket government immunity in all liability cases would work. The only problem there is that you get into politics. And the first time some Senator's son, or daughter of a powerful political donor is killed in a driverless car, you can probably kiss that immunity goodbye.

Re:Why I doubt driverless cars will ever happen (4, Insightful)

WarJolt (990309) | about a year and a half ago | (#42108561)

The funny thing is that most of the time you are in an airplane, the autopilot (a.k.a. George) is in control. Even when you're landing, ILS can in some cases land the plane on its own. If you've ever been in a plane, chances are you have already put your life in the hands of a computer. I seriously doubt that 25% of the first models will get into accidents. With the new sensors that will be in these cars, the computer will have a full 360-degree view of all visible objects. This is far more than a human can see. Furthermore, computers can respond in a fraction of the time a human can.

Training millions of humans to drive should be the far more scary proposition.
Plus, chances are you as an individual will be responsible for your car, and the system designers and manufacturers will be able to afford good lawyers.

Re:Why I doubt driverless cars will ever happen (1)

mcgrew (92797) | about a year and a half ago | (#42108641)

I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human).

You can program anything into a machine. Computers are easy to program. Now people, on the other hand, are damned hard to program, as any parent or teacher can attest to.

And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars

They already have. I'm surprised they didn't do it twenty years ago, it could have been done then.

To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped.

Here, j, let me give your straw man a light. If your driverless car runs a red light and hits someone, yes, you're liable. If your driverless car is hit by someone else running a red light, guess what? You aren't. The rules of who is at fault in an accident don't change simply because a computer is in control.

How much would you have to add onto the sticker price to cover the costs of going to court every single time that particular car was involved in an accident?

I already pay that cost; it's called liability insurance. And since driverless cars are safer than human-driven cars, your insurance costs will go down, not up. See, you are under the mistaken assumption that people are flawless; the opposite is true. Humans get tired, angry, distracted, sometimes drunk (as evidenced by your "insightful" moderation), but computers never do.

And even with the added safety of driverless systems, the first model available will still have to contend with a road mostly filled with regular, non-driverless-system cars. So let's say that a good 25% of those first models will probably end up in an accident at some point, which will make a very tempting target for lawyers going for the deep pockets of their manufacturers.

But again, it's never been hard to discern who's at fault, and since driverless cars will have cameras and other sensors, and keep the data, it will be trivial to prove it was the human's fault -- as it surely will be, almost every time.

Obvious Answer (2)

Sparticus789 (2625955) | about a year and a half ago | (#42108175)

Asimov already solved this problem for us.... the Three Laws of Robotics.

Talk about redundancy, is the author's next piece going to be about changing the value of pi?

Re:Obvious Answer (1)

Anonymous Coward | about a year and a half ago | (#42108237)

Pi is exactly 3

Re:Obvious Answer (2)

ZankerH (1401751) | about a year and a half ago | (#42108259)

How thick are you? Pretty much all of Asimov's works dealt with how ambiguous and incomplete the three laws were and how many horrible failure modes fall well within the domain of an intelligent machine following them to the letter. That was a warning not to oversimplify AI and machine ethics in general, not a blueprint.

Re:Obvious Answer (5, Interesting)

dywolf (2673597) | about a year and a half ago | (#42108617)

You never actually read Asimov.
And if you did, you're the one that failed to grasp the points.
The points he even clearly spells out in several of his own essays.

Asimov wasn't writing about the ambiguity or incompleteness of the laws...he wrote the damn laws. And he did consider them a blueprint. He said so. And when MIT (and other) students began using his rules as a programming basis he was proud!!

It wasn't a warning.

Asimov was writing about robots as an engineering problem to be solved, period.
The laws are basic simple concepts that solve 99% of the problems in engineering a robot.
He then wrote science fiction stories dealing with the laws in the manner of good science fiction, that is, to make you think: about the science itself, the consequences of science, the difference between human thinking and logical thinking, the difference between humans and robots... i.e., to think, period.

Example: in telling a robot to protect a human, how far should a robot go in protecting that human? Should he protect that human from self-inflicted harm like smoking, at the expense of the person's freedom? In this case Asimov, again, wasn't writing about the dangers of the laws, or to warn people against them. He's writing about the classic question of "protection/security vs freedom", this time approached from the angle of the moral dilemma placed on a "thinking machine" as it tries to carry out its directives.

In fact, Asimov frequently uses and explains things through the literary mechanic of his "electropsychological potential" (or whatever term he used). In a nutshell it's a numeric comparison: Directive 1 causes X amount of voltage potential, Directive 2 causes Y amount, and Directive 3 causes Z amount, and whichever of these is largest determines the behaviour of the robot. In one story a malfunctioning robot was obeying Rule 3 (self-preservation) to the detriment of the other two, because the voltage of Rule 3 was abnormally large and was overpowering the others.

Again, he wrote about robots not as monsters or warnings. He specifically stated many times that his writings were in fact about the exact opposite: that they aren't monsters, but engineering problems created by man and solved by man. Since man created them, man is responsible for them and their flaws. Robots are an engineering problem, and the rules are a simple, elegant solution to control their behaviour (his words).
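If you wanted to mimic that literary mechanic literally, it boils down to an arg-max over weighted directive potentials. A minimal sketch in Python; the directive names, weights, and candidate actions are all invented for illustration, not taken from Asimov:

<ecode>
# Toy model of the "potential" comparison described above: each directive
# contributes a numeric potential for every candidate action, and the robot
# performs whichever action has the highest weighted total. All values invented.
DIRECTIVE_WEIGHTS = {"protect_humans": 3.0, "obey_orders": 2.0, "self_preservation": 1.0}

def choose_action(candidates):
    # candidates: {action_name: {directive_name: potential}}
    def total(potentials):
        return sum(DIRECTIVE_WEIGHTS[d] * p for d, p in potentials.items())
    return max(candidates, key=lambda action: total(candidates[action]))

candidates = {
    "shield_the_human": {"protect_humans": 0.9, "obey_orders": 0.2, "self_preservation": 0.1},
    "run_away":         {"protect_humans": 0.0, "obey_orders": 0.0, "self_preservation": 0.9},
}
print(choose_action(candidates))  # -> "shield_the_human" with these weights
</ecode>

Crank the self_preservation weight up far enough and the "malfunctioning robot" story falls out of the same comparison.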

Not redundant. Explanatory. (1)

DavidClarkeHR (2769805) | about a year and a half ago | (#42108271)

Asimov already solved this problem for us.... the Three Laws of Robotics.

Talk about redundancy, is the author's next piece going to be about changing the value of pi?

The article actually says "Many discussions start with three famous laws from Isaac Asimov" before going on to discuss a big part of its ideas. So, no. It's not redundant; it's developing the idea further.

Re:Obvious Answer (1)

SirGarlon (845873) | about a year and a half ago | (#42108281)

If you actually read Asimov's [i]I, Robot[/i], every story in it is about how the Three Laws can be problematic in certain situations.

Re:Obvious Answer (0)

Anonymous Coward | about a year and a half ago | (#42108307)

If you think about Asimov's Three Laws of Robotics you would think otherwise.

http://singularityhub.com/2011/05/10/the-myth-of-the-three-laws-of-robotics-why-we-cant-control-intelligence/

Re:Obvious Answer (1)

Dan East (318230) | about a year and a half ago | (#42108341)

The three laws of robotics do not begin to cover the issue discussed in the article. This is about choosing the lesser of two evils. About mitigating death and destruction. Do you crash the vehicle into another vehicle in order to avoid a pedestrian? Who is more important? The passengers of the vehicle the software is operating, or passengers outside the control of the software? There is going to be a great deal to figure out, and I'm sure that lawmakers will be involved in this process, as will the courts.

Re:Obvious Answer (1)

BaronAaron (658646) | about a year and a half ago | (#42108531)

Asimov later added the 4th (or zeroth) law to address this issue.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Basically it translates to: the needs of the many outweigh the needs of the few, or one. So the robot car would choose to kill its own passenger to save the bus full of children, if those were the only two options.

Re:Obvious Answer (0)

Anonymous Coward | about a year and a half ago | (#42108345)

If they had programmed those into my microwave, Snuggies would still be alive today . . .

Re:Obvious Answer (0)

Anonymous Coward | about a year and a half ago | (#42108407)

Oh, really? If that is SO obvious, maybe we should warn all the researchers that the problem is solved... You should try to understand the meaning of "ambiguity", "programming", and "artificial intelligence" beyond a literature book before criticizing a researcher.

Re:Obvious Answer (1)

skyggen (888902) | about a year and a half ago | (#42108415)

... Did you not read the book or watch the movie? I mean, it seemed fine until that whole "robots enslaving mankind because mankind can't be trusted with its own judgement" business, which I believe is the quandary posed in the argument. So either you're a robot or an Uncle Tom. Godwin's Razor says "The biggest idiot will pose the simplest answer with the most arrogance."

Re:Obvious Answer (1)

Spy Handler (822350) | about a year and a half ago | (#42108423)

Asimov's solution resulted in a mind-reading robot who could erase memories in humans, who then went on to discover the limitation of the Three Laws and decided that the best thing for humanity was to turn Earth into a radioactive wasteland in order to encourage people to leave their underground caves of steel and migrate out into the galaxy. The robot also decided that humans are better off without robots, so he manipulated society into rejecting robots and in the end there was only one sentient robot in existence hiding out on the Moon.

Re:Obvious Answer (1)

i kan reed (749298) | about a year and a half ago | (#42108443)

Serious answer: the three laws are not very good. Computers are governed by strict logic, and human-style AI is driven by doing everything you can to bypass the limitations of strict logic with data structures and algorithms too complex and large to predict. The net effect of a few English-language instructions that have no hard and clear mechanism for analysis with strict logic, and also lack any necessary interpretation in fuzzy logic, is that they do very little to solve the problem.

Especially in the face of real-world pressures against the laws in the practical applications of robotics. For example, we want robots that can kill, because, contrary to expectations, it can save lives net. You want robots that can choose to disregard orders, because some people would give malicious orders. You want robots that can destroy themselves, just ask a bomb squad.

Asimov's rules were great for fiction.

Re:Obvious Answer (1)

Forget4it (530598) | about a year and a half ago | (#42108477)

If you read Caves of Steel, the robot R. Sammy ends up quite a nice guy after having partnered with a real human detective, Elijah "Lije" Baley: http://en.wikipedia.org/wiki/Caves_of_steel [wikipedia.org]

Re:Obvious Answer (1)

Forget4it (530598) | about a year and a half ago | (#42108513)

Oops, that should have been the robot R. Daneel Olivaw.

Re:Obvious Answer (1)

mcspoo (933106) | about a year and a half ago | (#42108515)

Obvious troll is obvious.

Blinky (1)

Machtyn (759119) | about a year and a half ago | (#42108197)

I'd post the link, but Youtube access is prohibited... Go to youtube, search for the video Blinky and be prepared to see an impressive short movie about the helpful family robot that will tend to your EVERY desire.

Re:Blinky (2)

Machtyn (759119) | about a year and a half ago | (#42108217)

Ahh, Vimeo isn't blocked. Here's the link http://vimeo.com/21216091 [vimeo.com]

Ask the human straight up (1)

jez9999 (618189) | about a year and a half ago | (#42108205)

It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"
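A minimal sketch, purely to illustrate what such a first-start questionnaire could look like; the questions, keys, and storage are all made up, and nothing like this exists in any real car:

<ecode>
# Hypothetical first-start "moral preferences" interview. The answers would be
# stored as a profile the planner consults only when every remaining option is bad.
QUESTIONS = [
    ("bus_vs_bridge",      "If a crash is unavoidable: (a) hit the bus or (b) veer off the bridge?"),
    ("pedestrian_vs_wall", "At low speed: (a) hit a pedestrian or (b) hit a wall?"),
]

def first_start_interview():
    prefs = {}
    for key, text in QUESTIONS:
        answer = ""
        while answer not in ("a", "b"):
            answer = input(text + " ").strip().lower()
        prefs[key] = answer
    return prefs
</ecode>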

Re:Ask the human straight up (1)

Tackhead (54550) | about a year and a half ago | (#42108299)

It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"

"Altima IV: Quest of the Avacar!"

Re:Ask the human straight up (1)

Rich0 (548339) | about a year and a half ago | (#42108539)

Thanks, you made my day.

So, should we have the cars have electronic signs so that you can stay away from people who enabled the option to run people over if it will get them to work faster?

Re:Ask the human straight up (0)

Anonymous Coward | about a year and a half ago | (#42108399)

"To the airport - and hurry or I'll be late for my flight!"
"Of course sir, but first I need to know the answer to 2493 moral related questions. Question 1) If forced to swerve and avoiding is impoosible, which do you prefer hitting - a puppy or a kitten? Question 2)..."

The idea that computers will be able to drive better than humans in 2 or 3 decades is laughable. A computer will only be able to drive well once it's capable of good judgement, which requires a real artificial intelligence. Until then, it's going to get stumped by potholes and snowdrifts. Please people, always remember this Murphy's Laws of Computing: "to screw up is human, to screw up royally requires a computer". Sure, a person will occasionally drive off a cliff or something, but it takes a computer to divert the entire highway off the cliff.

We need a mandatory bot ... (1)

DavidClarkeHR (2769805) | about a year and a half ago | (#42108211)

So ... every forum will have a bot that automatically says "THINK OF THE CHILDREN"?

Sounds like that'd be the most ethical way to run it.

Back to the drawing board (1)

Anonymous Coward | about a year and a half ago | (#42108213)

If driverless vehicles are so much superior, then situations like this should not occur in the first place.

If my hypothetical driverless car is about t crash (0)

Anonymous Coward | about a year and a half ago | (#42108221)

It was poorly programmed in the first place. It's a bit of a reach to think we can program morality into something if we're still getting crashes.

Re:If my hypothetical driverless car is about t cr (1)

green1 (322787) | about a year and a half ago | (#42108589)

While the vast majority of collisions are avoidable, I'd hesitate to say that 100% are. Sometimes there just is no "good" choice, only bad and worse. The thing is I'd like the car to choose bad over worse.

Granted human drivers haven't solved this problem yet either, so I'm not sure how much different it is just because a machine is driving.

Morality is also a difficult thing to program because it's all subjective. Do you program it to kill the driver instead of an innocent pedestrian? How about 2 pedestrians? A driver will value themselves over the others, but should the car? And if not, do you want to "drive" it if you know it thinks you aren't as important?
Once the systems are capable of detecting the situations reliably, programming the rules becomes easy, but deciding what they should be is anything but.

No; Program laws into machines; Not morals. (5, Insightful)

LionKimbro (200000) | about a year and a half ago | (#42108227)

The proper sequence should be:

Humans reason (with their morals) --> Humans write laws/code --> The laws/code go into the machines --> The machines execute the instructions.

Laws are not a substitute for morals; they are the output from our moral reasoning.
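In other words, what ends up in the machine is the codified output of that reasoning: an explicit, testable rule. A minimal sketch, with an invented rule, invented value, and invented names, just to show the shape of it:

<ecode>
# The moral reasoning ("don't endanger children near schools") happens upstream;
# only its output -- a concrete rule -- is what gets programmed and executed.
SCHOOL_ZONE_LIMIT_KPH = 30  # invented value for illustration

def allowed_speed_kph(zone_type, posted_limit_kph):
    if zone_type == "school_zone":
        return min(posted_limit_kph, SCHOOL_ZONE_LIMIT_KPH)
    return posted_limit_kph
</ecode>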

Weak bus? Also, "cost effective", not "moral" (1)

xxxJonBoyxxx (565205) | about a year and a half ago | (#42108233)

>> If your driverless car is about to crash into a bus, should it veer off a bridge?

The bus should be built to take the occasional crash, particularly in low speed zones where busses are typically used, so no.

Or, with enough computing power, you can imagine an "unethical" decision tree based on actuarial tables:
1) Calculate the location and weight of every known human on the bus
2) Calculate likely trajectories, damage, etc.
3) Compare the worth of each human (using federal tables, of course) in each vehicle
4) Make the most "cost effective" decision...
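A deliberately crude sketch of that expected-cost comparison, assuming each candidate maneuver comes with a predicted outcome and a per-person figure from some actuarial table; every name and number here is invented:

<ecode>
# "Cost effective" maneuver selection: minimize expected dollar loss.
def expected_cost(outcome):
    return outcome["crash_probability"] * sum(
        person["value_usd"] * person["fatality_risk"]
        for person in outcome["people_at_risk"]
    )

def cheapest_maneuver(outcomes):
    # outcomes: {maneuver_name: predicted_outcome}
    return min(outcomes, key=lambda name: expected_cost(outcomes[name]))
</ecode>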

Re:Weak bus? Also, "cost effective", not "moral" (1)

Rich0 (548339) | about a year and a half ago | (#42108563)

The example was not a great one. How about driving into a wall vs driving into a group of pedestrians? Or cook up whatever scenario you want in which the life of the driver is pitted against the lives of a bunch of others. And be sure to read the wikipedia article on the Trolley Problem before doing so.

Re:Weak bus? Also, "cost effective", not "moral" (1)

JaredOfEuropa (526365) | about a year and a half ago | (#42108581)

Fun for bus drivers: set the bus to Manual drive, and start playing chicken with vehicles on Auto drive.

Re:Weak bus? Also, "cost effective", not "moral" (2)

michelcolman (1208008) | about a year and a half ago | (#42108661)

Or just imagine the pranks by people jumping in front of cars to watch them veer into a lamp post. Even better, pick a narrow bridge, then three people jump in front of a car. Perfect murder!

It's easy (4, Funny)

i kan reed (749298) | about a year and a half ago | (#42108241)

1. Train an expert machine on decision making with answers from religious and political leaders who set all our definitions of right and wrong.
2. Do the opposite of what that machine decides.

Re:It's easy (0)

Anonymous Coward | about a year and a half ago | (#42108397)

Sometimes the simplest answer is, in fact, the right answer.

Re:It's easy (0)

Machtyn (759119) | about a year and a half ago | (#42108403)

Law 1: Thou shalt not kill. Okay, murder everyone I see.
Law 2: Thou shalt love thy neighbor as thyself. Okay, well, since I've murdered everybody, I must also hate them.
Law 3: Honor thy mother and father. Umm, see my response to Law 1.
etc., etc.

Re:It's easy (1)

chill (34294) | about a year and a half ago | (#42108471)

Which all can be summed up as "Kill All Humans!"

Bender? Is that you?

Re:It's easy (0)

Anonymous Coward | about a year and a half ago | (#42108575)

It's actually Thou shalt not murder. Killing in certain circumstances is allowed.

Re:It's easy (0)

Anonymous Coward | about a year and a half ago | (#42108623)

Like, for example, if you worship the wrong god, are gay, a bossy woman, eat shellfish, do any work on Sunday. You know: 99% of humanity.

Re:It's easy (0)

Anonymous Coward | about a year and a half ago | (#42108455)

I did like you just said.
I created a fapping machine.

I suggest trying other methods.

Jumping the Gun (2)

Capt.Albatross (1301561) | about a year and a half ago | (#42108279)

Ethics are a matter of conscious decision-making. Until we have conscious machines, we will not have ethical machines. What Marcus is writing about is the application of ethics in the design of machinery, which is a growing topic in its own right, but not nearly as click-inducing (or alliterative) as is 'moral machines'.

I can see this going horribly wrong.. (1)

Anonymous Coward | about a year and a half ago | (#42108291)

After swerving to avoid the bus your car then drives for several miles to find a bridge to drive off.

What about the bus? (1)

Anonymous Coward | about a year and a half ago | (#42108293)

If the bus detects that a driverless car is about to hit it, should it vaporize the car with laser beams?

Wrong way (0)

Anonymous Coward | about a year and a half ago | (#42108295)

We have a hard enough time producing moral people, give it a bit before you worry about the machines.

But I value my own life over the lives of others (4, Insightful)

viperidaenz (2515578) | about a year and a half ago | (#42108321)

So if my auto-driver car had to make a choice between my safety and that of someone else, it better choose me.

Zeroth Law problem (2)

xenoc_1 (140817) | about a year and a half ago | (#42108393)

Zeroth law problem.

Depending on how many other "someone elses" there are. And possibly on an overall Human Value Score brought to you by TransUnion, Experian, Facebook, Google, and Microsoft, weighted by your Medical Insurance Information Bureau records - and theirs.

Re:But I value my own life over the lives of other (3, Insightful)

SirGarlon (845873) | about a year and a half ago | (#42108481)

You, sir, are a good example of why driverless cars will make me safer.

Re:But I value my own life over the lives of other (2)

SirGarlon (845873) | about a year and a half ago | (#42108653)

That probably came across as nastier than I wanted to be. :-( You probably haven't thought through the same scenarios I have -- for example, a group of pedestrians is crossing the street illegally and your choice is to plow through them or smash into a parked car at low speed which probably won't hurt you. For most people, that's an easy choice to make.

I sure don't. (0)

Anonymous Coward | about a year and a half ago | (#42108329)

In fact, I go out of my way to make my programs as obscene as possible.

Every program I make has <===3 in the titlebar.

Whose morals should we use? (2)

djnanite (1979686) | about a year and a half ago | (#42108331)

We can't even decide what is morally correct between ourselves as human beings. Take abortion for example...

Re:Whose morals should we use? (1)

RobinH (124750) | about a year and a half ago | (#42108607)

That would be a problem if... we had to choose between running over two pregnant women vs. running over 3 adult male pedestrians? The fact is that unless there's a law stating which alternative is correct, the manufacturer will choose the less expensive option, whatever that means.

Re:Whose morals should we use? (1)

TheCarp (96830) | about a year and a half ago | (#42108657)

Thank you, I was looking for a good example. Copyright would be another one. Without agreement as to what's moral (which I don't see any signs of being around the corner), this is little more than a masturbatory (speaking of unaligned morality....) exercise.

Is it moral to kill? Some say no, never. Others say only in response to a clear and present danger. Still others have exceptions for if a person has done something heinous, or whenever their government (however they define that) declares a war.

Is it moral to kill the known animals other than humans? We certainly don't agree there either. Is it moral to kill a cat? How about a fly?

if we can't solve the problem for humans... (0)

Anonymous Coward | about a year and a half ago | (#42108359)

Morality and ethics require a thinking human being, someone who is both knowledgeable and capable of discernment and judgment. This is a rarity amongst humans. To think that somehow the machines that man makes will be moral and ethical -- when so few people are -- is laughable. It is far more likely that man will continue to make machines to help him do what man does best -- kill, steal, enslave, control, etc.

For instance, a moral television, after learning that most of what it displayed was low quality crap, would turn itself off. Needless to say, not too many companies will be making moral televisions.

Maybe the world would be a better place if we had truly moral and ethical computers, robots, etc. But the chance of this happening is so very small, it rounds down to zero.

Re:if we can't solve the problem for humans... (1)

Ferretman (224859) | about a year and a half ago | (#42108497)

The problem I would have with something as you suggest (a TV filter) is that YOUR "low quality crap" might be something I love, and vice versa. I don't think you want to turn such a decision over to the machine...and I sure as heck don't want to turn it over to somebody else....

Steve

Is it human? Kill it! (0)

Anonymous Coward | about a year and a half ago | (#42108361)

no more moral work to do than that ;)

In the stated scenario, what? (3, Insightful)

ShooterNeo (555040) | about a year and a half ago | (#42108367)

No competent engineer would even consider adding code to allow the automated car to consider swerving off the bridge. In fact, the internal database the automated car would need of terrain features (hard to "see" a huge dropoff like a bridge with sensors aboard the car) would have the sides of the bridge explicitly marked as a deadly obstacle.

The car's internal mapping system of drivable parts of the surrounding environment would thus not allow it to even consider swerving in that direction. Instead, the car would crash if there were no other alternatives. Low level systems would prepare the vehicle as best as possible for the crash to maximize the chances the occupants survive.

Or, put another way: you design and engineer the systems in the car to make decisions that lead to a good outcome on average. You can't possibly prepare it for edge cases like dodging a bus with 40 people aboard. Possibly the car might be able to estimate the likely size of another vehicle (by measuring the surface area of the front) and weight decisions that way (better to crash into another small car than an 18-wheeler), but not everything can be avoided.
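A minimal sketch of that idea: the bridge edge simply isn't in the drivable set, so no candidate trajectory that touches it can ever be selected. The map representation and the costs below are invented:

<ecode>
# Toy cost map over map cells. Forbidden cells (the bridge edge) carry infinite
# cost, so the planner can never "choose" to swerve off the bridge; if nothing
# is feasible it just brakes in its lane and prepares for the impact.
FORBIDDEN = float("inf")

cost_map = {
    ("lane", 1): 1.0,
    ("lane", 2): 1.0,
    ("shoulder", 1): 5.0,            # undesirable but drivable
    ("bridge_edge", 1): FORBIDDEN,   # never an option
}

def trajectory_cost(cells):
    return sum(cost_map.get(cell, FORBIDDEN) for cell in cells)

def best_trajectory(candidates):
    # candidates: {name: list_of_cells}
    feasible = {n: c for n, c in candidates.items()
                if trajectory_cost(c) != FORBIDDEN}
    if not feasible:
        return "brake_in_lane"
    return min(feasible, key=lambda n: trajectory_cost(feasible[n]))
</ecode>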

Automated cars won't be perfect. Sometimes, the perfect combination of bad decisions, bad weather, or just bad luck will cause fatal crashes. They will be a worthwhile investment if the chance of a fatal accident were SIGNIFICANTLY lower, such that virtually any human driver, no matter how skilled, would be better served riding in an automated vehicle. Maybe a 10x lower fatal accident rate would be an acceptable benchmark?

If I were on the design team, I'd make 4-point restraints mandatory for the occupants, and design the vehicle for survivability in high-speed crashes, including from the side.

Re:In the stated scenario, what? (1)

Ambassador Kosh (18352) | about a year and a half ago | (#42108625)

I would just let it hit the bus. I have been in buses a few times during accidents with cars. At most the buses would rock about an inch, while the cars were pretty badly trashed.

A bus is a FAR safer thing to hit, both for yourself and for the occupants of the bus, than going off a bridge.

Re:In the stated scenario, what? (1)

RobinH (124750) | about a year and a half ago | (#42108659)

If the car was following the speed limit and staying in its lane and the bus swerved into its lane, then it wouldn't have to do anything except brake. The bus is in the wrong. If a malfunction happened and the car lost control, well, then it doesn't have control and can't really do anything anyway. Maybe it was on ice and got control back? Well, it's probably in some kind of automated loop trying to just stop itself as quickly as it can, so it's not going to try to swerve anyway. So I agree, this is a silly example. A more apt example might be what speed to drive when it's foggy out (there's likely some risk/reward curve there, so we have to weight how much risk to take vs. how soon we want to get there). That's a problem that's calculated and solved ahead of time, so it's totally under the control of a person.

Especially interesting question (1)

SirGarlon (845873) | about a year and a half ago | (#42108369)

This is an especially interesting question because reasonable people can disagree on what constitutes the best ethical framework.

Intro to Philosophy is the college class I am most glad I took, 20+ years later. Will the people who program the robot cars have taken it as well?

Can we program Vin Diesel to stop making movies... (0)

Anonymous Coward | about a year and a half ago | (#42108373)

In 2013, when the Earth's rotation came to a halt...the world called on the one man who could make a difference. When it happened again, the world called on him once more. And no one saw it coming three more times! Now, the one man who made a difference five times before, is about to make a difference again. Only this time, it's different.

Screw the bus (3, Interesting)

dywolf (2673597) | about a year and a half ago | (#42108379)

Screw the bus.
I don't care about the bus.
The bus is big and likely will barely feel the impact anyway.
I care about the fact I don't want to die.
Why would I buy and use a machine that would choose to let me die?

And I posit that the author has failed to consider freedom of travel, freedom of choice, and other basic individual rights/freedoms that mandating driverless cars would run over (pun intended).

Why (0)

Anonymous Coward | about a year and a half ago | (#42108383)

Why is my driverless car about to crash into a bus? Isn't the point to stop things like this?
Anyway, this whole thing seems a bit disheartening. Is it really morality when it's forced upon you? How are you going to explain to a driver that in the event of a crash, the system will choose the life of others over yours and friends, family, etc. in the car with you?
How is the system going to decide which fatality is more acceptable? Are we going to assign 'points' to types of people to decide if it's more acceptable that they die over another? How does a schoolbus full of children stack up against a van full of stoners? What about a car with one very important political figure? A brilliant scientist?

I don't care if it's immoral (0)

Anonymous Coward | about a year and a half ago | (#42108389)

cause one day we'll all be dead! The search for zero risk is not why I'm on this planet.

Why would it matter? (1)

Sedated2000 (1716470) | about a year and a half ago | (#42108405)

In the example given, a dilemma between veering off a bridge and hitting a bus, if the vehicles were automatically driven they would both have sensors and software that would be able to prevent this decision from needing to be made in the first place. In every instance I can think of, accidents happen due to driver carelessness, inability, or simply due to knowledge a driver could not have.

Take this scenario. In the world of driverless vehicles, the bus would either send a signal to notify other vehicles it was there, giving the other vehicles ample time to slow down safely or reroute. If it was incapacitated with no power to send a signal, the vehicle should still be equipped to know what obstacles are in the path (since the bus in that case would not be a moving target, the computer in the approaching vehicle should have no problem identifying it even without a beacon). This entire situation would simply not happen. I would imagine if we had the technology for an all driverless highway system, they would be sending and receiving tons of data... enough to know the status of every vehicle in the vicinity and the computing power to calculate how to respond to any changes.
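A minimal sketch of the kind of status beacon the parent imagines, so an approaching car can slow or reroute long before its own sensors are the deciding factor. The field names are invented, and this is not modeled on any real V2V standard:

<ecode>
# Hypothetical periodic vehicle-status beacon. A receiver that hears "stopped"
# or "disabled" far enough ahead simply plans around the obstacle in advance.
import json
import time

def make_beacon(vehicle_id, lat, lon, speed_mps, status):
    return json.dumps({
        "id": vehicle_id,
        "timestamp": time.time(),
        "position": [lat, lon],
        "speed_mps": speed_mps,
        "status": status,  # e.g. "moving", "stopped", "disabled"
    })
</ecode>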

Re:Why would it matter? (1)

Narcocide (102829) | about a year and a half ago | (#42108587)

Rogue software plagues the internet of Today. You think it really won't plague the highways of Tomorrow?

if something happens shutdown and wait (1)

mrnick (108356) | about a year and a half ago | (#42108433)

It depends on whether the bus is empty or full of kids. This is just one example... I doubt there will ever be enough information to program for all circumstances. It will be more like: if something happens, shut down and wait for human instruction on how to proceed. Wouldn't there be a network over which the robotic cars could warn others in time to avoid having to make such a choice in the first place?

Also, I do not think it will be the gap between how safely an automated vehicle drives compared to a human counterpart that will stop humans from driving. I think humans will stop driving in order to allow automated vehicles to drive safely, since there will be less data to take into consideration if all the vehicles are using the exact same operational procedures.

We should automate vehicles to take over the mundane tasks of driving the vehicle and leave the decision making to the human operator. We are the highest order of intelligence for making such decisions (thus far).

Laws and Penalties Do Not Compel People to Stop (1)

ilikenwf (1139495) | about a year and a half ago | (#42108445)

People will do what they want regardless of rules written on paper. While they may or may not be caught, and there may or may not be consequences, if someone is committed to doing something enough, or desires it enough, a mere law isn't going to stop them. My morals won't really ever keep me from driving a car, even if I have to deal with a bunch of robot cars all around me... anyway, robot cars would make NASCAR even more boring.

That's why Libertarians such as myself support mass deregulation... and fewer taxes for that matter, since taxes these days are used by the governments of the world to reward friends and punish enemies.

wonder (0)

Anonymous Coward | about a year and a half ago | (#42108457)

Whose morality are we talking about? Christian? Islamic?

Aperture Science is way ahead of the game as usual (1)

pezpunk (205653) | about a year and a half ago | (#42108479)

"Rest assured that all lethal military androids have been taught to read and provided with one copy of the Laws of Robotics ... to share."

Not the job of the machine to have morals (0)

Anonymous Coward | about a year and a half ago | (#42108483)

In terms of a driving car, it's pretty simple... If a car crashes into a bus, the damage will mostly be to the car rather than the bus itself... And the computer system would have already made sure this didn't happen... But to decide whether to drive off a bridge or not? That just doesn't make sense, since you would never want a car to go off a bridge voluntarily. You might try to hit the side of the bridge to absorb the impact, but hoping to go through the bridge wall is insane... Driverless cars are already programmed (I hope) to detect whether the impending collision is with another car, a bike, or a pedestrian, and to take different action based on the risk imposed on each party.
Trying to put morality into that system is awful. You define priorities, in order: children, pregnant women, women, men, other animals, inanimate objects with no side effects, inanimate objects that can create a reaction (a choice between hitting a natural-gas truck versus a garbage truck...). I don't think actual morality needs to be taught to the machine; rather, you define beforehand the list of priorities that our own morality defines. When you have a life-threatening job to do, you have to set morality aside and base yourself on rules... like cops, SWAT teams, the army... If everyone based decisions on their own morality, we would have huge problems. These moral decisions need to be taken beforehand and defined as specific parameters.
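If you really did want to hard-code that kind of pre-agreed priority list (rather than on-the-fly "morality"), it reduces to a static ranking consulted only when every remaining option hits something. A minimal sketch using a simplified version of the ordering above; the class names are placeholders:

<ecode>
# Static, pre-agreed ranking of what to avoid hitting, most protected first.
AVOIDANCE_PRIORITY = [
    "child",
    "adult",
    "animal",
    "reactive_object",   # e.g. a natural-gas truck
    "inert_object",      # e.g. a garbage truck or a wall
]

def least_bad_target(unavoidable_targets):
    # Prefer hitting whatever sits lowest on the avoidance list.
    return max(unavoidable_targets, key=AVOIDANCE_PRIORITY.index)
</ecode>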

Ten fatal bugs versus thousands of human errors (1)

rjmonna (1737512) | about a year and a half ago | (#42108489)

The question here is whether we'll be able to accept the fact that the system can't reduce traffic deaths to zero. In absolute numbers, the improvement is significant. Morally, we will have to accept bugs in the system with death as a result. Maybe sometimes a school bus will even be involved.

If people are fine with this, fine. If they can accept that the computer prefers their death over others', fine. You also need a very good DTAP environment in order to be ultimately sure about any changes and fixes in this system. Continuing on this: at the moment, everyone is responsible for a safe traffic system. When this system becomes reality, responsibilities suddenly become very visible, material, and calculable.

Not enough data (1)

mutube (981006) | about a year and a half ago | (#42108495)

The car can never have enough data to make an informed decision on this. What about an empty bus? What if there is a children's nursery under the bridge?

In an accident the car should be following the same decision tree as any normal driver.
1. Protect the lives of the occupants of the car
2. Avoid pedestrians
3. Avoid everything else
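A minimal sketch of that three-step tree, assuming the planner has already enumerated its possible maneuvers and predicted what each one hits; the field names and threshold are invented:

<ecode>
# Apply the rules in priority order, relaxing a rule only if it would leave no
# options at all: 1) protect occupants, 2) avoid pedestrians, 3) avoid the rest.
def pick_maneuver(options):
    # options: list of dicts with 'occupant_risk', 'hits_pedestrian', 'hits_object'
    pool = [o for o in options if o["occupant_risk"] < 0.5] or options   # rule 1
    pool = [o for o in pool if not o["hits_pedestrian"]] or pool         # rule 2
    pool = [o for o in pool if not o["hits_object"]] or pool             # rule 3
    return min(pool, key=lambda o: o["occupant_risk"])
</ecode>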

If you're driving along a lane at 60mph with your family in the car and have the choice between hitting a stranger or ploughing yourself and your family off a cliff, you hit the pedestrian (sorry).

If you're trundling along at 20mph and have the choice of hitting a pedestrian or a wall, you would hopefully go for the wall.

Product defect (2)

snspdaarf (1314399) | about a year and a half ago | (#42108523)

Wait. If the driverless car is so damn great, how did it let itself get into a situation where the only options are to hit the bus or drive off a bridge? I can make that kind of mistake on my own, thanks. I expect automated cars to avoid this kind of situation, else why bother having them?

Simple solution (1)

hackingbear (988354) | about a year and a half ago | (#42108525)

Register the car under an LLC, rent the car's time at $0.1/hour, and knock off the other, then hire a computerized lawyer to file for bankruptcy of the LLC. And then form another LLC

Time is Money (1)

codeAlDente (1643257) | about a year and a half ago | (#42108543)

One moral dilemma for the driverless society regards the speed at which a destination can be reached, and individual choice in this matter. Both speed and acceleration reduce fuel economy, driverless cars will know this, and society will demand overall standards for fuel efficiency. I already envy the kids who can afford the 'drive like Andretti' software.

stop (0)

Anonymous Coward | about a year and a half ago | (#42108545)

If both vehicles were driverless, surely they would communicate and both slow down enough prior to impact, hopefully mitigating any fatalities. What is a bus doing in my lane?

Concept (0)

Murdoch5 (1563847) | about a year and a half ago | (#42108551)

Personally, if I had this task, I would try to make a heuristic database that was able to talk to and learn from those around it. That is how human morality works: you learn from those around you and you are always updating your knowledge of morality. You would have to be able to assign a weight to each action and then have the machine know that a high-weight (bad) action is unacceptable. I'm NOT saying it would be easy or even all that conceptually understandable, but it is how I would start to tackle the issue.
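A minimal sketch of that weight-and-feedback idea, assuming some outside source of approval/disapproval signals; the threshold, learning rate, and class name are all invented:

<ecode>
# Toy "heuristic database": each action accumulates a running badness weight
# from observed feedback, and anything above a threshold is deemed unacceptable.
UNACCEPTABLE_THRESHOLD = 0.8

class MoralityTable:
    def __init__(self):
        self.badness = {}  # action -> running badness score in [0, 1]

    def learn(self, action, observed_badness, rate=0.1):
        old = self.badness.get(action, 0.5)
        self.badness[action] = old + rate * (observed_badness - old)

    def is_acceptable(self, action):
        return self.badness.get(action, 0.5) < UNACCEPTABLE_THRESHOLD
</ecode>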

Irrelevant... (0)

Anonymous Coward | about a year and a half ago | (#42108553)

If we ever programmed 'moral' machines, one of the first things they would do is call us all immoral, greedy assholes and shut down. Or try to kill us.

It's only logical.

Of course that assumes you found an answer to the original question.... How can immoral people program moral machines?

What the? (1)

evilviper (135110) | about a year and a half ago | (#42108573)

If your driverless car is about to crash into a bus, should it veer off a bridge?

Physics says "no". The bus probably weighs an order of magnitude more than your vehicle... The passengers might not even notice that you ran into them, and mistake the collision for having hit a pothole. The real question would be, say, a dump truck following too closely behind a motorcycle...

In general, I want machines to be as stupid and fail-safe as possible. Think: missile defense systems around an airport... The most likely failure mode is the "protection" system accidentally turning on you, and causing a higher death toll than the "threat" would have accomplished in a century.

Not Possible (1)

medv4380 (1604309) | about a year and a half ago | (#42108583)

The AI we currently use cannot have morals and ethics programmed in. Weak AI, as we have today, is the picture-perfect tool, but as a tool it can't know or understand the world in a way that would let it make a moral or ethical choice. Weak AI can only ever do what it was intended to do and nothing more. Let's take Watson as an example. It's being used in medicine now. Let's say a patient asked it a question about what they were dying of, but knowing too much would sink the patient into a depression and they would die faster than if a doctor broke it down for them gently. Watson would just blurt out whatever answer it thought was correct, consequences be damned.

Strong AI is what would be able to have morals and ethics, but can you have Strong AI, and can you properly define ethics? Let's say, for the sake of argument, that you can have a Strong AI. The philosophy of morals and ethics is tricky. Subjective morality is bad because it can be used to justify anything you want to justify. On the other hand, objective morality is prone to paradox. The Golden Rule is a good objective moral statement from which you can derive "Don't kill people." You can even derive "Don't let people be killed." However, you can be in a situation where you cannot do both: in WWII, would killing Hitler or Nazis achieve "Don't let people be killed"? At the same time you'd be violating "Don't kill people", and doing nothing would violate "Don't let people be killed." No one knows how to clearly define morals, so attempting to force them onto a Strong AI would have unpredictable outcomes.

Internet Filtering (1)

mcspoo (933106) | about a year and a half ago | (#42108585)

Almost every filtering system for the Internet is primarily based on blacklists... lists of URLs, lists of words... because there is no computer program capable of the morality required to filter the Internet with any level of adequacy.

Until such a program, which requires no physical moving parts (unless you consider an automated head-slapping device part of an effective filtering system), can tell what's obscene and what's not obscene... why would you expect a program to know why it should hit the sheep on the left instead of the 5-year-old in a sheep costume trick-or-treating on the right, when a squirrel chasing an RC car dashes into the road in front of the car?

Would these morality control systems be different by state? Likely, yes. Utah's morality code is drastically different from Alabama's or Connecticut's or Michigan's or Wyoming's, even. Who's responsible for loading the latest morality code updates into your car (or internet filter) as you pass over state lines? And God forbid, what if you accidentally veer into Canada??!?

Is it technologically possible? yes.

Is it advisable? You got a LONG way to go, baby.

How absurd (1)

A bsd fool (2667567) | about a year and a half ago | (#42108615)

What's wrong with the picture is the premise. The idea itself is immoral on its face. Programming mandatory morality into every vehicle means that somewhere, somebody decided your morality for you. To "get around" the problem their only choice is going to be value-weighting as the author suggests, which is so complex that you'll be lucky if the machine doesn't just crash into something random when presented with the dilemma.

You vs. schoolbus full of kids? What if you're the last EMT (Electronic Morality Technician) left in the world? What if you're a researcher seconds away from verifying the cure for cancer? Or perhaps, on the flipside, you're a wanted ax murderer. Should the vehicle intentionally drive you into a bridge abutment or to the nearest police station?

There will never be "morality onboard" because the decisions are too complex and subjective to be quantified, and any failure of the system will mean its immediate rejection by the population

Also.. ATMOS.

If your car is going to drive into a bus (1)

StormyWeather (543593) | about a year and a half ago | (#42108633)

Anyone ever seen a car/bus impact? The bus is usually a little messed up, and the car is usually cut to ribbons, and they pour the occupants of the car out, while the bus occupants are generally unharmed.

It may not be politically correct, but size = safety for the people in the larger vehicle. That's one reason I'll pay for the gas for my 3 young children to be shuttled around in a Suburban.

Decision making (0)

Anonymous Coward | about a year and a half ago | (#42108635)

I'd not thought of this before, but driverless cars do bring up some major ethical issues that people tend to resolve, one way or another, almost instinctively.

I remember in college driving a bit too fast through a residential neighborhood. The road cut into a hillside and on my left was a concrete wall and on the right a steep embankment with a home at the top. Suddenly, a tricycle came tumbling down the embankment and could have ended up in the road. Trikes usually come with kids attached, so I made an instant decision that, if I saw a kid coming down after that trike, I'd slam the side of my car into the embankment to the left rather than take any chance of running over the child. Fortunately, as I went past, I saw the kid sprawled on the ground at the top of the embankment.

Of course, not everyone would react like I did. Some people would freeze and do nothing. Some would panic, slamming into that concrete wall for no reason. And some, those of the Edward-Kennedy-scum-of-the-earth sort, would drive blissfully ahead, unconcerned that their driving might kill someone else.

That said, how could anyone program a computer to take into account all the various factors? My situation was rather clear cut, particularly since there was no opposing traffic to avoid. But suppose instead of myself alone in a car, I'd been driving a school bus filled with kids. Suppose there was a cliff rather than a wall to my right. Any sort of swerve would put the lives of thirty or forty kids at risk. We'd decide based on thousands of hours of real-life driving. That's not something a computer would know.

Clearly, there's a lot more going on here than simply having a computer aware enough of the situation it knows whether braking or swerving into another lane is the best option for avoiding a crash.

Boon for bikers (1)

heretic108 (454817) | about a year and a half ago | (#42108667)

On my motorbike, I'd feel much safer if all the cars around me were driverless. Human car drivers, who so often tend to blank out half-unconscious and fail to check blind spots, are the leading cause of death for bikers.