
Selectable Ethics For Robotic Cars and the Possibility of a Robot Car Bomb

samzenpus posted about 2 months ago | from the no-hands dept.

Transportation

Rick Zeman writes Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large scale damage a robot car could do.

Lasrick writes Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.


239 comments


Hi welcome to Jonny Cab (4, Funny)

garlicbready (846542) | about 2 months ago | (#47695839)

Hope you enjoyed the ride ha ha

Re:Hi welcome to Jonny Cab (0)

Anonymous Coward | about 2 months ago | (#47695859)

I'm not familiar with that address. Would you please repeat the destination?

Fuck people! (0)

Anonymous Coward | about 2 months ago | (#47695841)

Can I set my car to kill as many people as possible? Add chariot spikes, etc.?

Re:Fuck people! (4, Funny)

Opportunist (166417) | about 2 months ago | (#47696175)

Judging from Monday morning traffic in my town, a lot of people already set their cars to that setting.

Re:Fuck people! (3, Funny)

GameboyRMH (1153867) | about 2 months ago | (#47696359)

First you'd need to root the car and run "echo 1 > /dev/morality/evil"

easier ways for terrorists to wreak havoc (2)

fustakrakich (1673220) | about 2 months ago | (#47695855)

Yeah, run for office...

MUCH easier. (3, Interesting)

khasim (1285) | about 2 months ago | (#47696009)

From TFA:

Do you remember that day when you lost your mind? You aimed your car at five random people down the road.

WTF?!? That makes no sense.

Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right.

Again, WTF?!? Who would design a machine that would take control away from a person TO HIT AN OBSTACLE? That's a mess of legal responsibility.

This scene, of course, is based on the infamous "trolley problem" that many folks are now talking about in AI ethics.

No. No they are not. The only "many folks" who are talking about it are people who have no concept of what it takes to program a car.

Or legal liability.

It's a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

No, it is not "plausible". Not at all. You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

Wired is being stupid in TFA.

Re:MUCH easier. (4, Insightful)

Qzukk (229616) | about 2 months ago | (#47696239)

You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

It doesn't have to identify all the objects in the area, it simply has to not hit them.

Re:MUCH easier. (2)

khasim (1285) | about 2 months ago | (#47696341)

It doesn't have to identify all the objects in the area, it simply has to not hit them.

Which is an order of magnitude EASIER TO PROGRAM.

And computers can recognize an obstacle and brake faster than a person can.

And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.

So the premise of TFA is flawed.
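The don't-hit-things approach described above can be sketched as a plain stopping-distance check, with no object classification at all. This is a toy illustration with invented numbers, not how any real system is implemented:

```python
def should_emergency_brake(distance_m, closing_speed_ms, reaction_time_s=0.1):
    """Brake if we cannot stop before reaching the obstacle.

    Uses a constant-deceleration stopping-distance model; the obstacle's
    identity never matters, only its range and closing speed.
    """
    MAX_DECEL = 8.0  # m/s^2, roughly the limit of good brakes on dry asphalt
    if closing_speed_ms <= 0:
        return False  # object is not getting closer
    stopping_distance = (closing_speed_ms * reaction_time_s
                         + closing_speed_ms ** 2 / (2 * MAX_DECEL))
    return stopping_distance >= distance_m

# A computer re-evaluates a check like this every few milliseconds; a human
# driver needs on the order of a second or two just to react.
```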

Drivers already have variable ethics (1)

Anonymous Coward | about 2 months ago | (#47695863)

Not sure why the "ethics" of a robot driver are such a big deal.

Re:Drivers already have variable ethics (4, Insightful)

kruach aum (1934852) | about 2 months ago | (#47695885)

Because ethicists like making work for themselves -- it's unethical to wait for another disaster or human rights violation just so you can do more work!

Re:Drivers already have variable ethics (0)

Anonymous Coward | about 2 months ago | (#47695919)

One word: Battlebots.

Re:Drivers already have variable ethics (4, Informative)

maliqua (1316471) | about 2 months ago | (#47696077)

I'm not really sure why they call it the 'ethics of the car' rather than the ethics of the owner, programmer, or administrator of the car.

If you put a bomb in a robot car and told it to drive to a stadium, the car didn't fail to make an ethical choice. I doubt the car would even be aware of the bomb, or what a bomb is, or why it's bad.

Insurance rates (3, Interesting)

olsmeister (1488789) | about 2 months ago | (#47695865)

I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?

Re:Insurance rates (2, Insightful)

Twinbee (767046) | about 2 months ago | (#47695887)

Car insurance companies will die off when car AI becomes mainstream.

Re:Insurance rates (1)

Anonymous Coward | about 2 months ago | (#47695951)

Not as long as your finance company requires that you insure it.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47695991)

And the government requires it to be on the road.

Re:Insurance rates (0)

Anonymous Coward | about 2 months ago | (#47695969)

+5 funny. One of the funniest jokes I've seen all day.

Re:Insurance rates (4, Insightful)

Lunix Nutcase (1092239) | about 2 months ago | (#47695987)

Hahahahahahahahaha. No, they won't. They will keep themselves around through lobbying efforts.

Re:Insurance rates (0)

Anonymous Coward | about 2 months ago | (#47696007)

Insurance is a highly competitive industry. If accident rates go down competition will force rates close to $0.

Re:Insurance rates (1)

king neckbeard (1801738) | about 2 months ago | (#47696047)

Is it? It's probably more competitive than the ISP market, but it seems like there are mostly just a few consolidated companies that handle most everything.

Re:Insurance rates (1)

Shimbo (100005) | about 2 months ago | (#47696091)

Isn't that much the same as the ISP market then? Lots of choice but lots of consolidation happening behind the scenes.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47696057)

Do unicorns and flying pigs exist in that fantasy world, too?

Re:Insurance rates (1)

Twinbee (767046) | about 2 months ago | (#47696117)

I'm so sorry that there will be hardly any accidents, and so the number of claims will nose-dive. It's tragic, but you can't stop progress.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47696155)

Accidents and injuries have been decreasing for more than a decade. And yet you're still required to have insurance both by financing companies and state governments.

There is absolutely zero reason to believe that finance companies and state governments will not still require insurance even when cars are automated.

Re:Insurance rates (1)

Twinbee (767046) | about 2 months ago | (#47696219)

If it still exists, it will be next to nothing. Being next to nothing means there's no benefit to anyone, unless you like doing unnecessary paperwork. Which, granted, most businesses seem to love.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47696281)

Way to not address my points. You've simply repeated your assertion. Why would any bank finance a car loan without insurance? That would be monumentally stupid.

Re:Insurance rates (1)

Twinbee (767046) | about 2 months ago | (#47696439)

Because it wouldn't be a liability to anyone anymore. Imagine zero crashes. That's an extreme, but let's assume that for the sake of argument.

Insurance at that point would be seen as pointless, and so the government would not require insurance like the 'good ol days' (read "bad old days").

Re:Insurance rates (1)

CanHasDIY (1672858) | about 2 months ago | (#47696181)

Do unicorns and flying pigs exist in that fantasy world, too?

I'm so sorry that there will be hardly any accidents, and so the number of claims will nose-dive. It's tragic, but you can't stop progress.

So... yes, then.

Re:Insurance rates (3, Insightful)

Jason Levine (196982) | about 2 months ago | (#47696301)

You will still be required to have car insurance (whether because of some actual need or because of lobbying from the insurance industry). Your rates might drop a bit to give you an incentive to get a car that drives itself, but they won't plummet. Fewer accidents and claims will just mean that the insurance companies wind up with more profits. Which means more money to spend lobbying the government to require auto insurance and robot cars, which means more profits. Rinse. Repeat.

Re:Insurance rates (2)

ColdWetDog (752185) | about 2 months ago | (#47696125)

Insurance is a highly competitive industry. If accident rates go down competition will force rates close to $0.

You perhaps might see collision rates go down but there are many other liabilities that one typically insures a vehicle for - weather related damage, medical, liability and others (usually bundled under the rubric of 'comprehensive').

You are also assuming, without any data, that the future Johnny Cab will never get itself into an accident. I'm not so sure I would make such a bold claim.

Re:Insurance rates (2)

Twinbee (767046) | about 2 months ago | (#47696137)

Yeah, just like dealers will lobby hard against companies like Tesla. It won't be enough, they'll die off too.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47696213)

Why would finance companies and state governments not still require you to carry insurance? No finance company is going to give you a car loan and not require you to insure it. Your post is hilariously naïve.

Oh and the insurance companies are hugely greater in size than car dealerships. Car dealers are chumps in comparison.

Re:Insurance rates (1)

Twinbee (767046) | about 2 months ago | (#47696367)

Maybe you've been ripped off so much by the car insurance companies that you're missing the obvious. There in principle cannot be a car insurance market if cars don't crash anymore. If accidents fall to one tenth of what they were before, then the insurance premium will be about one tenth too (all else being equal, with maximum automation). At that point, the sheer paperwork will more than cancel out any benefit to anyone.
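The premium arithmetic above can be sketched with a toy model; the numbers and the fixed-overhead assumption are invented purely for illustration:

```python
def premium(expected_claims, overhead=150.0, margin=0.1):
    """Toy premium model: expected annual claim cost plus a fixed
    administrative overhead, marked up by a profit margin."""
    return (expected_claims + overhead) * (1 + margin)

today = premium(1000.0)     # (1000 + 150) * 1.1 = 1265.0
automated = premium(100.0)  # claims fall to one tenth: (100 + 150) * 1.1 = 275.0
# The premium falls to about a fifth, not a tenth; the fixed
# overhead keeps it from scaling down proportionally.
```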

Re:Insurance rates (1, Insightful)

CanHasDIY (1672858) | about 2 months ago | (#47696161)

Car insurance companies will die off when car AI becomes mainstream.

Kind of like how representative democracy died off when we all got smart phones, right?

No, dude, sadly middlemen will always exist, adding no value to things but taking your money anyway.

Re:Insurance rates (1)

Lunix Nutcase (1092239) | about 2 months ago | (#47696261)

If you were financing car loans would you do so without requiring it be insured? That would be an extremely dumb thing not to do.

Re:Insurance rates (1)

Twinbee (767046) | about 2 months ago | (#47696263)

Send my best wishes then to the middlemen who WON'T exist when Tesla Motors and companies like them eventually sell direct.

Re:Insurance rates (0)

Anonymous Coward | about 2 months ago | (#47696501)

Not likely. Rates MAY drop as self-driving cars become prevalent, but most states have laws forcing people to buy insurance, and I'm sure that insurance companies will fight to the last to keep those laws on the books, thereby guaranteeing themselves revenue with little risk.

Re:Insurance rates (4, Interesting)

grahamsz (150076) | about 2 months ago | (#47696221)

More likely that your insurance company would enforce the settings on your car and require that you pay them extra if you'd like the car to value your life over other lives.

With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically my insurer could throw my car into a ditch to avoid damage to a bmw coming the other way.

Re:Insurance rates (1)

GameboyRMH (1153867) | about 2 months ago | (#47696447)

With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically my insurer could throw my car into a ditch to avoid damage to a bmw coming the other way.

I might get to see the first car get diverted into a schoolbus to avoid a 50-million-dollar superduperhypercar. I'll have to dress for the occasion with my best fingerless gloves and head-worn goggles.

Will not matter. (4, Insightful)

khasim (1285) | about 2 months ago | (#47696265)

I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?

That does not matter because it won't be an option.

That is because "A.I." cars will never exist.

They will not exist because they would have to start out at less than the 100% perfection that TFA requires. And that imperfection will lead to mistakes.

Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.

So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.

"Philosophically, this opens up an interesting deb (0)

kruach aum (1934852) | about 2 months ago | (#47695871)

ate" -- Not at all! In fact, it does the exact opposite. By implementing every possible position on the software level and allowing the vehicle's owner to choose, no one needs to debate anything. A utilitarian can choose "minimize overall damage", a randroid "protect me at all costs", and a lawyer "minimize liability", without any of them having to agree about anything. I wish all philosophical debates were this easy to solve.

Re:"Philosophically, this opens up an interesting (1)

medv4380 (1604309) | about 2 months ago | (#47695943)

Aren't "minimize overall damage" and "minimize liability" usually equal? I can't think of a case offhand where they wouldn't be. Anything greater than minimized damage would be increased liability.

Re:"Philosophically, this opens up an interesting (1)

kruach aum (1934852) | about 2 months ago | (#47696023)

Whoops, I should have written "harm". Read damage as "damage to human beings." I can imagine scenarios where those diverge, as can the article's summary's author.

Re:"Philosophically, this opens up an interesting (3, Interesting)

Rob Riggs (6418) | about 2 months ago | (#47696189)

Just wait until the AI has to keep track of liability awards so that it can make the correct decision regarding minimizing liability. At some point you are going to have a stupid jury award and all the cars are just going to refuse to go anywhere because the AI's cost benefit analysis says "just stay in park".

Re:"Philosophically, this opens up an interesting (2)

canadiannomad (1745008) | about 2 months ago | (#47696233)

Can I program mine to always claim to other vehicles that I have 7 babies on board?

Re:"Philosophically, this opens up an interesting (0)

Anonymous Coward | about 2 months ago | (#47696087)

Increased harm to yourself would not increase your liability.

Re:"Philosophically, this opens up an interesting (4, Insightful)

Opportunist (166417) | about 2 months ago | (#47696203)

No. To minimize damage, you'd have to brake when approaching a child. To minimize liability, you have to accelerate when you notice that you can't stop in time to avoid severe injury, i.e. to ensure death which is cheaper than a lifetime cripple.

Re:"Philosophically, this opens up an interesting (1)

Jason Levine (196982) | about 2 months ago | (#47696365)

Scary thought. What if the liability the car sought to minimize was for the insurance companies?

"Upcoming crash detected. Liability analysis pending. If the crash is fatal, typical payout is $N. If the crash is non-fatal, initial payout will be lower, but long-term repeated payments will increase until they are greater than $N. Minimizing liability demands a fatal crash. Initiating termination of car's occupants."

Re:"Philosophically, this opens up an interesting (1)

Opportunist (166417) | about 2 months ago | (#47696441)

Well, it's unlikely that occupants would be killed off (unless so specified by the driver); it would be kinda bad for sales if it got out. And such things have a way of getting out.

Though I could fully see, at the very least as a modification to the software (which will probably be outlawed soon), logic that ensures an unavoidable crash with physical harm to another person is as fatal as possible while at the same time leaving the proper skid marks that suggest trying to avoid it.

Re:"Philosophically, this opens up an interesting (1)

orgelspieler (865795) | about 2 months ago | (#47696225)

"minimize liability" = "If I have to hit a human, make sure I kill him instead of maim him."

Re:"Philosophically, this opens up an interesting (1)

jellomizer (103300) | about 2 months ago | (#47696099)

The autonomous automobile (the Auto-Auto) will probably only be released once its safety exceeds that of a person, and each generation will get better. Whether the algorithm is designed to protect the passenger, minimize insurance liability, or save the most people in essence doesn't matter, as they all try to avoid accidents altogether. These algorithms will only come up in a world of increasingly rare possibilities.

I would expect the protect-passenger algorithm to be the easiest one to maintain, as it has the most information available. The insurance calculation may be the next best, but how do you know whether the bus has a lot of people in it or is empty?

Blue Screen of Death... (4, Funny)

bobbied (2522392) | about 2 months ago | (#47695877)

BSOD starts to take on a whole new meaning..

As does, crash dump, interrupt trigger, dirty block and System Panic...

Re:Blue Screen of Death... (4, Funny)

jpvlsmv (583001) | about 2 months ago | (#47696433)

You're right, officer, Clippy should not have been driving.

Now, what to do when my Explorer crashes...

Click on the Start button, go to "All Programs", then go to "Brakes", right-click on the "Apply Brakes" button, and choose "Run as Administrator". After the 15-second splash screen (now with Ads by Bing), choose "Decelerate Safely".

"oft-clashing ideas of morality vs. liability." (1)

Anonymous Coward | about 2 months ago | (#47695879)

Am I the only one who expected it to read "oft-clashing ideas of morality vs. legality" ?

Labor costs (0)

Anonymous Coward | about 2 months ago | (#47695881)

Right now, impressionable youth from 3rd world countries are cheaper than robots. There won't be much worry about this for a while. A rust-bucket Honda and some dumb kid are going to be a lot cheaper than the latest Google-Tesla joint venture product.

We have plenty of time to think about it before Is-lame-oh terrorists are using them.

Re:Labor costs (1)

ColdWetDog (752185) | about 2 months ago | (#47696285)

Right now, impressionable youth from 3rd world countries are cheaper than robots. There won't be much worry about this for a while. A rust-bucket Honda and some dumb kid are going to be a lot cheaper than the latest Google-Tesla joint venture product.

We have plenty of time to think about it before Is-lame-oh terrorists are using them.

Except that one limiting factor in the jihad is the ability to get the starry-eyed, idealist, soon-to-be-martyr over on this side of the pond. Blowing oneself to tiny bits appears to be a hard sell to westernized folk. The concern here would be that an autonomous vehicle could alleviate that problem.

Of course, it's not a perfect solution. You have to purchase or steal the thing, which for now is in rather short supply. An autonomous vehicle is going to be fairly tightly regulated once let out into the wild: one of the basic tenets is that it communicates with other vehicles and presumably some sort of central command. Driving into a crowd or a chemical factory might be frowned upon.

Much easier to just blow up a chemical factory somewhere in New Jersey. Or start a forest fire near LA.

But it's the FBI's task to get all paranoid and look at all potential possibilities to ensure our ultimate freedom. USA! USA!

Re:Labor costs (2)

canadiannomad (1745008) | about 2 months ago | (#47696297)

You get fewer human rights complaints when you let the children (or entire populations) starve by using robots for your cheap labour instead of paying them a pittance. :(

We will need liability laws before we let them hit (1)

Joe_Dragon (2206452) | about 2 months ago | (#47695883)

We will need liability laws before we let them hit the road without any human drivers.

We can't let them use EULAs; even if there are some, it will be very hard to argue that a car crash victim said yes to one, much less have that stand up in a criminal court.

Not really game changing (-1, Flamebait)

Chrisq (894406) | about 2 months ago | (#47695891)

To the Muslims life is cheaper than a robot car. They are always going on about how they love death and killing infidels, so they are just as likely to do it in a normal car just like they do now [google.co.uk] .

Scare of the day (4, Insightful)

Iamthecheese (1264298) | about 2 months ago | (#47695897)

Dear government: Please shut up about terrorism and get out of the way of innovation. Sincerely, an informed citizen.

Re:Scare of the day (1)

jellomizer (103300) | about 2 months ago | (#47695949)

Exactly,
Technology can be used for good or for evil.

That rock that Ugg used to start a fire to keep his family warm also worked really well for throwing at his rivals to kill them.

Re:Scare of the day (2)

Opportunist (166417) | about 2 months ago | (#47696209)

Ffft. Where's the kickback in that?

ohnonotagain (1)

cellocgw (617879) | about 2 months ago | (#47695899)

This exact topic has been on /. several times. I will not be in the least surprised to see the exact same collection of wildass FUD claims in the comments.

Been discussed before (2, Insightful)

gurps_npc (621217) | about 2 months ago | (#47695901)

Not news, not interesting.

1) The cars will most likely be set by the company that sold it - with few if any modifications legally allowable by the owner.

2) Most likely ALL cars will be told to be mostly selfish, on the principle that they can not predict what someone else will do, and in an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.

3) A human drivable car with a bomb and a timer in it is almost as deadly as a car that can drive into x location and explode is. The capability of moving the car another 10 feet or so into the crowd, as opposed to exploding on the street is NOT a significant difference, given a large explosion.

4) The cars will be so trackable, and under such active, real-time security monitoring, that we will know who programmed it and when, probably before the bomb goes off. These are expensive, large devices that by their very nature will be wired into a complex network. It is more likely the cars will turn around and follow the idiot, its speakers screaming the whole time, "ARREST THAT GUY, HE PUT A BOMB IN ME!"

Re:Been discussed before (1)

Iamthecheese (1264298) | about 2 months ago | (#47695953)

I agree that the article is a waste of time but you're a little off with point number 3: There are places one can drive but not park where a bomb would be more secure. That said it's not a large change to simply disallow driverless and passengerless cars where security is a problem.

Re:Been discussed before (2, Informative)

Anonymous Coward | about 2 months ago | (#47696079)

2) Most likely ALL cars will be told to be mostly selfish, on the principle that they can not predict what someone else will do, and in an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.

This is a thing that is starting to irritate me. This is a piece from the director of the "Ethics + Emerging Sciences Group".
Recently we have seen writeups about the ethics of automation from psychologists and philosophers who are completely clueless about what laws are already in place and what best practices are when it comes to automation.
They go in with the assumption that a machine is conscious and will make conscious decisions, ignoring that it is impossible to get anything remotely resembling an AI through the certification process needed for safety-critical machinery.

Re:Been discussed before (1)

RobinH (124750) | about 2 months ago | (#47696177)

However, what's particularly weird, when I hear about software-based automotive recalls like the Toyota accelerator stack overflow bug, is that automotive companies don't seem to have to be certified to anything near the machine safeguarding standards we use to certify factory-floor automation. Nowadays a piece of equipment on the plant floor is pretty much provably safe to operate assuming you don't start disassembling it with a screwdriver. I don't see any such methodology being applied to vehicle control systems.

Re:Been discussed before (2)

bluefoxlucid (723572) | about 2 months ago | (#47696193)

Point 4 will never happen. A little duct tape over the security sensor. Sealed briefcase bomb.

The rest of this is stupid. We have already put RC receivers into regular cars and used a Radio Shack car controller to drive. They did that on Blues Brothers 2000, and probably The Simpsons. We have real RC car races. You just need a Pringles can, a wire, and a car.

Drones and robotic cars (1)

Anonymous Coward | about 2 months ago | (#47695905)

Well, soon everybody is going to have access to drones and robotic cars as the new guns, the same way crossbows were a game changer in the Middle Ages.

Maybe they should have considered the consequences of their extrajudicial drone assassinations, torture, and other such petty things.

What about maintenance settings? (1)

Joe_Dragon (2206452) | about 2 months ago | (#47695909)

What about maintenance settings?

We can't let the car makers set them to only go to the dealer for any and all work.

We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.

Re:What about maintenance settings? (1)

GrumpySteen (1250194) | about 2 months ago | (#47696031)

We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.

That made lots of sense.

Automation, remote controls already exist (5, Insightful)

i kan reed (749298) | about 2 months ago | (#47695911)

Let's skip "car" because I can, in theory, attach enough explosives(and shrapnel) to kill a large number of people to a simple homemade quadrotor, run with open source software, give it a dead-reckoning path and fire and forget from a relatively inconspicuous location. Multiple simultaneously, if I have the amount of resources a car bomb would require.

Automation is here. Being paranoid about one particular application of it won't help anyone.

Re:Automation, remote controls already exist (1)

bobbied (2522392) | about 2 months ago | (#47696017)

Automation is here. Being paranoid about one particular application of it won't help anyone.

Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.

Re:Automation, remote controls already exist (1)

idontgno (624372) | about 2 months ago | (#47696215)

Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.

If your point is that the talking heads always talk about everything but the threat which will actually materialize, true. Not a deep insight, but true.

OMG ROBOT BOMB CARZ is what's playing up on stage on tonight's episode of Security Masterpiece Theatre.

Quadcopter grenade bombers is what will actually happen. Unless it's something even more lo-tek, like moar pressure cookers.

Will a robo car be able to break the law to save (1)

Joe_Dragon (2206452) | about 2 months ago | (#47695957)

Will a robo car be able to break the law to save someone from death or injury?

Re:Will a robo car be able to break the law to sav (1)

i kan reed (749298) | about 2 months ago | (#47696205)

You seem to think that a self-driving car is a self-aware, subjective, thinking thing.

Within this particular field, the application of "AI" algorithms gives fuzzy answers to difficult questions, but only as inputs to boring, more traditionally algorithmic processes. Laws, conveniently, are codified in much the same way as those traditional algorithms(though, again, with fuzzy inputs).

Any company even remotely trying to engage this would encode the laws at that level, not as something some AI tries to reason out.
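That split, fuzzy perception feeding deterministic, codified rules, can be sketched as follows; the thresholds and rule names are invented for illustration:

```python
def perception_confidence(sensor_frame):
    """Stand-in for the fuzzy 'AI' layer: returns the probability that
    the lane ahead is blocked (in reality, a trained detector)."""
    return sensor_frame.get("obstacle_prob", 0.0)

def decide(sensor_frame, speed_limit, current_speed):
    """Deterministic layer: the law and safety margins are hard-coded
    rules; the only fuzzy input is the detection probability."""
    if perception_confidence(sensor_frame) > 0.5:
        return "brake"
    if current_speed > speed_limit:
        return "slow_to_limit"  # the law is encoded, never 'reasoned about'
    return "proceed"
```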

Re:Will a robo car be able to break the law to sav (2)

Cro Magnon (467622) | about 2 months ago | (#47696493)

It will, if it's an Asimov car. The law of the land would only be the Second Rule; no death to humans is the First.

Not so fast (2)

Moof123 (1292134) | about 2 months ago | (#47695961)

It sure seems like such selectable ethics concerns are kind of jumping the gun. Regulatory behavior is going to clamp down on such options faster than you can utter "Engage!". Personally I would want my autonomous car to be designed with the most basic "don't get in a crash" goal only, as I suspect regulators will as well.

Far more important is the idea that we will have at least an order of magnitude or two increase in the amount of code running a car. If Toyota had trouble with the darn throttle (replacing the function of a cable with a few sensors and a bunch of code), how can we trust that car companies will be able to manage a code base this big without frequent catastrophe? Adding extra complexity to tweak the "ethics" of the car just sounds like guilding the lilly, which increases the opportunities for bugs to creep in.

Re:Not so fast (1)

Joe_Dragon (2206452) | about 2 months ago | (#47696043)

Even in a heavily regulated system with only the basic "don't get in a crash" goal, the car may still end up having to pick among crash options: say, a maneuver that does damage but has a low chance of injury vs. a move with only a 5% chance of avoiding the crash entirely.

Re:Not so fast (-1)

Anonymous Coward | about 2 months ago | (#47696267)

That would be "gilding", not "guilding". Just like it's "nip it in the bud", not "nip it in the butt". Is Engrish really that hard?

Philosophy Settings (5, Funny)

Joe Gillian (3683399) | about 2 months ago | (#47696001)

I, for one, cannot wait for the day when I can set my car's logic system to different ethical settings, sorted by philosopher. For instance, you can set your car to "Jeremy Bentham", which will automatically choose whoever looks less useful to ram into when in a crash situation. You could also set it to "Plato", which will cause the car to ram into whoever appears less educated (just hope it doesn't happen to be you).

Just make sure you don't set the car to "Nietzsche".

Re:Philosophy Settings (3, Funny)

Rob Riggs (6418) | about 2 months ago | (#47696109)

I'm looking forward to the Ayn Rand setting. "Me first!!"

Re:Philosophy Settings (1)

AmiMoJo (196126) | about 2 months ago | (#47696401)

You will never get these options I'm afraid. All manufacturers will simply program their cars to avoid accidents as far as possible, and in the event that one is unavoidable simply try to stop moving as quickly as possible. No selecting targets, no deciding who to save, just brake as hard as possible if there is no other option.
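The "no target selection" policy this comment predicts reduces to a very small decision procedure: avoid the collision if any path exists, otherwise just shed speed. This is a hypothetical sketch with invented names, not any manufacturer's control logic:

```python
# Hypothetical sketch: avoid if possible, otherwise maximum braking.
# No weighing of whom to hit, no ethics dial.

def control(collision_predicted, avoidance_path_exists):
    if not collision_predicted:
        return "cruise"
    if avoidance_path_exists:
        return "steer_clear"
    return "full_brake"  # stop moving as quickly as possible
```

Its appeal to a manufacturer is exactly that it never "chooses a victim," which keeps the liability story simple.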

FBI: 1, Ethics: 0 (4, Insightful)

some old guy (674482) | about 2 months ago | (#47696039)

So, the FBI is already making the case for, "We need full monitoring and control intervention capability for everybody's new cars, because terrorists."

FBI failed the grade (0)

Anonymous Coward | about 2 months ago | (#47696443)

They can wish for it, but if they get it, all it does is prove how clueless both law enforcement and lawmaking really are.

Simply this: Consider the suicide bomber.

And that's just one side of the whole thing. The other is similarly flawed.

Since when are people philosophically consistent? (1)

eyelikepi (3788211) | about 2 months ago | (#47696045)

These questions really shouldn't be answered by techies.

Too early for this discussion (1)

Lose (1901896) | about 2 months ago | (#47696147)

Until a system to make automated vehicles feasible on public roads en masse is actually proposed and developed, and the related protocols and legal procedures are released, this is nothing but a scare topic making vague assumptions about things that aren't even in development yet.

Because terrorists don't blow themselves up now... (0)

Anonymous Coward | about 2 months ago | (#47696167)

Obviously written by an ass that has no interaction with reality. There can and will be ways to counter this... and if it were such an issue it would already be occurring. Instead, terrorists find that if they blow themselves up they get several handfuls of virgins. Hence, it's a detriment to that mentality to use a car bomb or such.

Granted, terrorists who don't actually believe what they espouse (the higher-ups who have created excuses not to blammie for Allah) might get some use out of it. Again, there will be ways to counter this going forward. Fortunately, with driverless cars come other forms of robotic technology that will help reduce risk to military installations, etc.

Not that difficult (1)

nine-times (778537) | about 2 months ago | (#47696173)

Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability."

Before we allow AI on the road, we'll need to have some kind of regulation on how the AI works, and who has what level of liability. This is a debate that will need to happen, and laws will need to be made. For example, if an avoidable crash occurs due to a fault in the AI, I would assume that the manufacturer would have some level of liability. It doesn't make sense to put that responsibility on a human passenger who was using the car as directed. On the other hand, if the same crash is caused by tampering performed by the owner of the car, then it seems that the owner would be liable.

As far as I know, even these simple laws don't explicitly exist yet.

Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.

Normal cars also make it easier to commit terrorist acts and other crimes. So what? I mean, yes, let's consider whether we want special safeguards and regulations for AI cars, but this shouldn't be something to go crazy worrying about.

Slashdot = Gizmodo / Gawker Media (0)

Lumpy (12016) | about 2 months ago | (#47696195)

Oh dear god, can this be more full of wild speculation?
Remote-controlled cars have been possible for well over four decades; the Mythbusters build them on a regular basis. One blew up so massively that it spread the car across the desert after it hit the ramp.

Honestly I really wish the sensationalism would go away and DICE would hire people that actually knew something about the technology.

Lastly, it is a LOT easier to convince someone to blow themselves up in the name of their god than it is to build a remote-control/robotic car. This is a non-issue designed only to scare people about technology.

Stupid scaremongering (2)

aepervius (535155) | about 2 months ago | (#47696253)

"Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." "

Only if the potential terrorist has never learned to drive. Because otherwise:
1) A criminal is far better off with a car that does not respect speed limits/red lights/stops when trying to run away.
2) A terrorist can simply drive the bomb somewhere, set it to explode one minute later, and walk away. What is the difference if he drove it himself or not?

Terrorism is the least of the worries with robot cars.

As for point 1, laws and insurance will be setting your car's "ethics", not you personally.

If I were a terrorist hacker (0)

Anonymous Coward | about 2 months ago | (#47696259)

How are laws or programming restrictions going to stop me? As a terrorist, I'm already ignoring your law by killing people (this has already been outlawed). If I'm a hacker, I'm already ignoring the restrictions you've built into the software by using it in a way it wasn't intended (by bypassing existing restrictions). How is anything you're going to do with either going to stop someone who intends to do harm?

Never. Going. To. Happen... (1)

Anonymous Coward | about 2 months ago | (#47696373)

There is absolutely no chance of mass-manufactured cars/trucks having user-controllable ethics. They will be decided by a panel of company lawyers and government regulators. At most, owners might have options like fuel efficiency, get-me-there-fast, most-direct-route, and/or drive-like-an-old-granny modes. The only way you'll be able to get at the ethics is through aftermarket mods, and those will probably be either made illegal or driven underground by lawsuits. As for the whole t3rr0r1sts angle: grow up, FBI. I'm sure someone will misuse it at some point; people have figured out ways to misuse virtually every, single, solitary thing in human history. But calling it a "game changer" is disingenuous at best. So a car can drive somewhere without a human at the wheel. Get me within a few hundred yards of somewhere and I can do the same thing with a bungee cord and a brick; give me 10 minutes and a few simple items and I can probably put the brick on a timer so that I can get miles away before anyone's the wiser.

umm... (0)

Anonymous Coward | about 2 months ago | (#47696399)

We can already make a driverless RC car, could do that for years... on the cheap. So just because the car will now drive itself doesn't change much.

Roadside EMPs (0)

Anonymous Coward | about 2 months ago | (#47696423)

These could cause daily accidents on the highways, bringing misery to people getting to work and to truckers supplying consumables.

Remote Control Cars? (1)

Jason Levine (196982) | about 2 months ago | (#47696427)

My first thought upon reading this summary? What about the Mythbusters?

In many episodes, they've rigged up a remote control setup to a car. Many times, it has been because testing a particular car myth would be too risky with a person actually inside driving the car. They've even gone so far as to have a camera setup so they could see where they were driving.

I'm sure there's a learning curve here - not everyone could stop by their local hobby shop and remote control enable their car in an afternoon - but learning curves aren't a hindrance to people who are motivated enough. (i.e. People who want to commit acts of terrorism.) They could even put some sort of dummy in the car to keep people from realizing that the car was driving itself. Then again, they are "motivated" enough to not care if they kill themselves in the process so they could easily just load a car up and drive it where they want it to be.

Self-driving cars aren't any more of a threat than any other piece of new technology. Yes, some people will use it for bad purposes, but many more people will use it for good purposes. If we banned any technology that anyone ever used to harm another person, we wouldn't have any technology left at all.

Car bomb? Whatever... (1)

HalAtWork (926717) | about 2 months ago | (#47696483)

There are kits that turn cars into remote-controlled vehicles already; this has been possible for years. Meanwhile, self-driving cars still need someone in the seat and would still require heavy modification for the task, so they don't make this any more attainable than it already is. Stop giving idiots ideas in news headlines, and stop pissing your pants every time there's new tech.

Prioritizing my own life is "jealousy" now? (0)

Anonymous Coward | about 2 months ago | (#47696519)

This must have been written by the same liberal minds that insist that self-defense is selfish because criminals have more rights to live than their victims.
