Robot Warriors Will Get a Guide To Ethics

kdawson posted more than 5 years ago | from the some-distance-yet-to-the-three-laws dept.

Robotics

thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations — such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
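[Editor's note: the article gives no implementation details for Arkin's governor, but the basic concept (a software layer that sits between the targeting system and the trigger and vetoes any fire request that violates a constraint) can be sketched. Below is a minimal, purely hypothetical illustration in Python; every name, threshold, and rule is invented for this sketch, not taken from Arkin's design.]

from dataclasses import dataclass

@dataclass
class FireRequest:
    combatant_confidence: float          # 0.0-1.0, from sensors/classifiers
    expected_civilian_casualties: float  # estimate for the chosen munition
    inside_designated_war_zone: bool     # the "cleared" zone from the article

class EthicalGovernor:
    """Vetoes any fire request that violates a hard constraint."""

    MIN_COMBATANT_CONFIDENCE = 0.95

    def authorize(self, req: FireRequest) -> bool:
        if not req.inside_designated_war_zone:
            return False  # only engage where non-combatants are assumed gone
        if req.combatant_confidence < self.MIN_COMBATANT_CONFIDENCE:
            return False  # target identity not established well enough
        if req.expected_civilian_casualties > 0:
            return False  # any projected civilian harm blocks the shot
        return True

governor = EthicalGovernor()
print(governor.authorize(FireRequest(0.99, 0.0, True)))  # True: cleared to fire
print(governor.authorize(FireRequest(0.99, 2.0, True)))  # False: civilians at risk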


Been there, done that (4, Informative)

Locke2005 (849178) | more than 5 years ago | (#28019171)

Three Laws of Robotics [wikipedia.org] from 1942.

Re:Been there, done that (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28019247)

Been there, wrote fiction about that (much of which was about how, even in fiction land, it wouldn't work so well).

Re:Been there, done that (4, Insightful)

NeutronCowboy (896098) | more than 5 years ago | (#28019313)

Not to mention... some of the assumptions aren't great. As the article itself points out, it's been a long time since there was a civilian-free battlefield.

As for the direct example of the robot locating a sniper and being offered the choice between a grenade launcher and a rifle - how does the robot know that the buildings surrounding it aren't military targets? How do they get classified? How does a hut differ from a mosque, and how does a hut differ from some elaborate sniper cover?

I don't think this is going to work out as planned.

Re:Been there, done that (4, Interesting)

Locke2005 (849178) | more than 5 years ago | (#28019437)

Since you can never be 100% certain of a target, the robots would have to use fuzzy logic. That is something that humans are better than robots at; I'm not really comfortable with hardware designed to be lethal making decisions like this. Truly autonomous killer robots are probably not a good idea -- haven't 60 years of B movies taught us anything?

Re:Been there, done that (3, Insightful)

Trepidity (597) | more than 5 years ago | (#28019577)

Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

Re:Been there, done that (4, Insightful)

Architect_sasyr (938685) | more than 5 years ago | (#28019751)

But (most) humans have this innate condition where taking another life weighs on them somewhat - even most veterans and soldiers I know get twitchy about having to shoot at another person. A robot removes this and replaces it with cold logic.

Put another way, replace the robots with the WOPR, and the humans with, well, the humans in the bunkers.

Re:Been there, done that (4, Interesting)

Trepidity (597) | more than 5 years ago | (#28019781)

The cold logic can be better though, if you know what you actually want to optimize. Humans often make decisions that don't do what they claim they want, e.g. minimizing civilian casualties.

Re:Been there, done that (5, Insightful)

Locke2005 (849178) | more than 5 years ago | (#28019929)

Yes, robots are much better at calculating probabilities; given a series of "facts" with a confidence level assigned to each one, a robot would make a better decision. What I should have said is that "Humans are better than robots at making decisions based on incomplete data." Humans can develop "intuition" and many have a great deal of experience in interpreting the context of the data. While it may be possible some day for robots to have a deeper understanding of context than humans, that day is still a long way off.
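[Editor's note: to make the parent's point concrete, one textbook way a machine fuses several uncertainty-weighted "facts" is a Bayesian odds update. The sketch below is hypothetical (the prior and likelihood ratios are invented numbers, not from any fielded system), but it shows why a stack of individually suggestive facts can still leave the machine far from certainty.]

def posterior_combatant(prior: float, likelihood_ratios: list[float]) -> float:
    """Update prior odds with one likelihood ratio per observed fact."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Facts, expressed as likelihood ratios P(fact | combatant) / P(fact | civilian):
facts = [
    9.0,   # muzzle flash detected at the target's position
    4.0,   # target is carrying a rifle-shaped object
    0.5,   # target is moving toward a marked shelter (evidence against)
]

print(f"{posterior_combatant(prior=0.10, likelihood_ratios=facts):.2f}")
# ~0.67 -- well short of certainty, which is the grandparent's point.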

Re:Been there, done that (1)

fooslacker (961470) | more than 5 years ago | (#28020047)

Not to mention... some of the assumptions aren't great. As the article itself points out, it's been a long time since there was a civilian-free battlefield.

As for the direct example of the robot locating a sniper and being offered the choice between a grenade launcher and a rifle - how does the robot know that the buildings surrounding it aren't military targets? How do they get classified? How does a hut differ from a mosque, and how does a hut differ from some elaborate sniper cover?

I don't think this is going to work out as planned.

I think the interesting part is that it's being worked on; this isn't the end-all-be-all answer. This is a first step, and I'm sure more developments will come.

A better idea (2, Interesting)

MichaelSmith (789609) | more than 5 years ago | (#28020161)

Robots on the battlefield seem to be designed as extensions of current human operations. They basically shoot at things and try to destroy them.

How about building a hardened robot which can take a lot of punishment? It rolls or walks up to one of the enemy, grabs hold of them, and shuts down. That way, the opposition can be disabled with fewer casualties.

Re:Been there, done that (0)

Anonymous Coward | more than 5 years ago | (#28019259)

That book detailed the futility of those particular laws.

Re:Been there, done that (1)

chimpo13 (471212) | more than 5 years ago | (#28019271)

Since Homo sapiens' only natural predator is itself, this is a very good move at controlling the population.

Now to provide background music. Monkey vs Robot [youtube.com]

Re:Been there, done that (4, Funny)

Red Flayer (890720) | more than 5 years ago | (#28019541)

Since Homo sapiens' only natural predator is itself,

Well, itself and wolves. And tigers. And lions.

And don't forget bears. Definitely bears.

I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.

Re:Been there, done that (2, Funny)

JustOK (667959) | more than 5 years ago | (#28019599)

I think we should build giant ethical bear robots

playing bagpipes

Re:Been there, done that (3, Funny)

Falconhell (1289630) | more than 5 years ago | (#28019777)

And there I was thinking the US had given up torturing people. (-:

Do we really need the piper?

Re:Been there, done that (4, Funny)

Chris Burke (6130) | more than 5 years ago | (#28020179)

He said ethical!

Re:Been there, done that (2, Funny)

T Murphy (1054674) | more than 5 years ago | (#28019817)

I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.

...I fail to see how robots saying "Only YOU can stop forest fires" would be terrifying.

Re:Been there, done that (2, Funny)

geekoid (135745) | more than 5 years ago | (#28019415)

You do realize they were flawed, right?

Re:Been there, done that (1, Informative)

Zironic (1112127) | more than 5 years ago | (#28020013)

The laws worked perfectly; the book was all about how things went wrong when people tried to modify them.

Re:Been there, done that (1)

JohnnyBGod (1088549) | more than 5 years ago | (#28020175)

I think wanting to kill the enemy conflicts with all the Three Laws. :)

Good News/Bad News (2, Funny)

hey! (33014) | more than 5 years ago | (#28019175)

The good news: Robots are going to get a guide to ethics.

The bad news: It was drafted by Focus on the Family.

Re:Good News/Bad News (3, Funny)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#28019261)

Well, I guess that homosexbot won't be making it out of the lab; but crusadebot is a go...

Re:Good News/Bad News (1)

SirLurksAlot (1169039) | more than 5 years ago | (#28019883)

Well obviously we know this is not the case in the year 3000!

Hobo1: "Let's give a friendly welcome to this new robo."
Bender: "What did you call me?!"
Hobo2: "A Robo. You know ... a robot hobo."
Bender: "Oh, ok, I thought you said romo."

Free association? (3, Funny)

FlyByPC (841016) | more than 5 years ago | (#28019181)

I'm not even British, and I'm hearing "EX-TER-MI-NATE!" in my head...

Re:Free association? (0)

Anonymous Coward | more than 5 years ago | (#28019897)

Given the subject matter of the article I think that the voice in your head should be saying "EX-TER-MIN-ATE!? EX-TER-MIN-ATE!?"...

Re:Free association? (0)

Anonymous Coward | more than 5 years ago | (#28019971)

It's not in your head.

Run man! For the love of god RUN FOR YOUR LIFE!

Not Robots (2, Informative)

Roger W Moore (538166) | more than 5 years ago | (#28020011)

They aren't robots - there is still a living thing in control. Effectively, they are one-person tanks.

Three Laws updated for 2009 (0, Offtopic)

Lunzo (1065904) | more than 5 years ago | (#28019189)

1. You shall not let a human first post.

I for one welcome our ethical killbot overlords (0)

Anonymous Coward | more than 5 years ago | (#28019191)

May they live long and keep us safe.

"Robots don't have inherent right to self-defence" (0)

Anonymous Coward | more than 5 years ago | (#28019193)

Fucking liberals.

Re:"Robots don't have inherent right to self-defen (0)

Anonymous Coward | more than 5 years ago | (#28019579)

Of course if the robots were underprivileged, LGBT, minority, robots-of-color the fucking liberals would be fully supportive of the robot's right to use violent means against the oppressive, featherless biped, bags-of-mostly-water, ruling class in order to secure their robot rights.

Re:"Robots don't have inherent right to self-defen (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#28019667)

gay dongs jizzing in your face.

FOREVER.

"How goes the battle, Sgt?" (4, Funny)

v1 (525388) | more than 5 years ago | (#28019213)

Sgt: We lost, sir! Badly!

Gen: What happened?

Sgt: We're still gathering up the details, but it looks like they hacked our network and uploaded Asimov Strain B.

New meaning (2, Funny)

Roger W Moore (538166) | more than 5 years ago | (#28020029)

I suppose that would bring new meaning to "the blue screen of death".

Whew! (1)

Locke2005 (849178) | more than 5 years ago | (#28019215)

Fortunately, SkyNet isn't capable of violating its programmed rules of ethical behavior, so we're all saved! Unless there is a programming error, but THAT would NEVER happen!

Re:Whew! (0)

Anonymous Coward | more than 5 years ago | (#28019393)

Now I'll have nightmares of Diebold's Robot 9000.

Dear Robot Overlord, Logic is a little tweeting bird chirping in a meadow. Logic is a wreath of pretty flowers which smell BAD. Are you sure your circuits are functioning correctly? Your ears are green.

Need a good spell checker (1)

rumblin'rabbit (711865) | more than 5 years ago | (#28019231)

Hope Arkin has good grammar. Wouldn't want the instructions to contain things like...

"How To Cook Four Prisoners"

Re:Need a good spell checker (4, Funny)

MarkvW (1037596) | more than 5 years ago | (#28019509)

Yeah! Or, or "How to Serve Man."

Re:Need a good spell checker (1)

geekoid (135745) | more than 5 years ago | (#28019891)

So what you're saying is that since robots don't take prisoners, they'll get a divide-by-zero error?

Ethical War Robots? (5, Insightful)

Fantom42 (174630) | more than 5 years ago | (#28019269)

Weird. So this fails the Asimov criteria.

More importantly, it would also necessarily fail the Golden Rule and Kant's Categorical Imperative.

If this is ethics, it's a pretty limited version of it, and to be honest it sounds more like rules of engagement than actual ethics.

Re:Ethical War Robots? (1)

Capitalist1 (127579) | more than 5 years ago | (#28019997)

The Golden Rule and Kant's Categorical Imperative have absolutely nothing to do with actual ethics, so there's no problem.

Judgement day is closer. (0)

Anonymous Coward | more than 5 years ago | (#28019281)

The final product will be called The Ethical Nonautonomous Yielding Killing System, but when abbreviated backward....

Great book title... (4, Funny)

GPLDAN (732269) | more than 5 years ago | (#28019287)

Governing Lethal Behavior in Autonomous Robots


That is the title of the book you tell your 7th grade teacher you are GOING to write when you grow up.

Sounds like the FAQ for Robot Battle.
http://www.robotbattle.com/ [robotbattle.com]

Smart missiles (1, Insightful)

buchner.johannes (1139593) | more than 5 years ago | (#28019317)

There is no such thing as a smart missile unless it immediately destroys itself safely.

Jesus Christ (4, Insightful)

copponex (13876) | more than 5 years ago | (#28019359)

If you drop a fucking robot into a village where a vast majority of the people don't know how to read, what do you think they're going to do? They'll shoot at it, get the backs of their heads blown off, and then everyone will say, "Well, the dumbass shouldn't have shot at the robot!"

If this war on terror is so important, sign up. If you can't, get your brother or sister or even better, sign your kids up. If they're not of age yet, they'd better be in the JROTC. Then you can talk to me about how using drones and missiles isn't the dominion of motherfucking cowards. It's for freedom lovers defending freedom!

And if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles. Oh, but that's right. This is about principles for others to follow, and for us to ignore.

Wish I had mod points, I'd mod you up. (5, Interesting)

Anonymous Coward | more than 5 years ago | (#28019511)

Great post, man.

But I have a buddy in the autonomous killer robot biz, and he says it's worse than that.

See, you drop a killer robot in the village, and it immediately kills a shitload of people. The ones that live, figure out why. Then, as soon as they know that the robot destroys everything that looks like an AK47, the local up-and-coming gang leader makes an AK47 stencil and paints AK silhouettes on the old warlord's cows, house, laundry, etc. you get the picture. Then the young punk gives all the old leader's women to his buddies to rape and takes the young virgins for himself. Yay democracy! Or, at least, that's what they say when GI Joe comes to town, we are the heroes who took out the old anti-democratic leaders, yay us and you villagers better keep your cake-holes tight shut about the rape and opium parties.

It doesn't matter what you use for a trigger - robots are inherently less complex in their behavior than humans, so the local baddies end up with the robots working for them. You just identify the kill behavior and use it; the robot builder is, in effect, just providing free firepower to the local mafia.

Which is why the US military in the field abso-fucking-lutely refuses to let the robots go full autonomous. They are NOT allowed to shoot unless a callow 18-year-old at a console miles away says it's OK.

You might think I'm kidding, but I'm not. Have to be anonymous for this one!

That would be really cool... (3, Interesting)

voss (52565) | more than 5 years ago | (#28019629)

If China could do it.

"...if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles..."

Once the hapless and helpless got out of LA, the droids would have to fight off the hundreds of thousands of armed geeks from around the world descending on LA wanting spare parts for their robots.

Re:That would be really cool... (1)

kaizokuace (1082079) | more than 5 years ago | (#28019845)

Yeah, and after defeating all the robots the nerds will all get lead poisoning.

Re:That would be really cool... (4, Funny)

hairyfeet (841228) | more than 5 years ago | (#28019983)

Shiiiiit, you think those damned bots would make it out of South Central intact? The gang bangers would have robot heads mounted on their rides like trophies. I'm sure that any of them that managed to roll out the other side would have the weapons stripped off of them faster than a Toyota Camry ends up on blocks and it would be so covered in tags that the poor thing couldn't even see where it was going.

Re:Jesus Christ (2, Insightful)

QuantumG (50515) | more than 5 years ago | (#28019921)

Meh.. If the alternative is to bomb the village, a robot that shoots only those that shoot at it sounds like a great idea.

Sweet (1)

copponex (13876) | more than 5 years ago | (#28020021)

When can we drop one in your backyard?

Re:Sweet (1)

QuantumG (50515) | more than 5 years ago | (#28020127)

Presumably when I start threatening national security.. or at least when your president can convince the least intelligent members of your society that I have.

Re:Jesus Christ (0)

Anonymous Coward | more than 5 years ago | (#28019927)

"It's for freedom lovers defending freedom!"

War has never been about defending freedom; most US wars were never about defending the country or anyone else.

Re:Jesus Christ (1)

maxume (22995) | more than 5 years ago | (#28019939)

You don't put a sign saying "Don't shoot the friendly robot." on it, you blare a recording of the Ride of the Valkyries.

Everyone knows that means bad news.

Fundamental change (2, Interesting)

StreetStealth (980200) | more than 5 years ago | (#28019367)

We joke about SkyNet. And we don't have to worry about such things because even the most sophisticated drones and killbots in service require humans to pull the trigger.

The moment you give a computer the responsibility of deciding when to pull the trigger, that's a pretty fundamental change.

And yet, is it fundamentally a bad thing? We give less-than-stable humans [guardian.co.uk] that responsibility all the time.

I suppose it's the military equivalent to the civilian tech quandary of one day letting autonomous vehicles on the roads. Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.

Re:Fundamental change (2, Informative)

grahamd0 (1129971) | more than 5 years ago | (#28019687)

Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.

"All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The SkyNet funding bill is passed."

We're doomed even if it is flawless (1)

copponex (13876) | more than 5 years ago | (#28019871)

The real ethical problem with this is that a fully autonomous robot army, or even a semi-autonomous one remotely controlled by humans, further removes the people who benefit from warfare from its reality.

Imagine if someone has real intelligence stating that there is a nuclear - not dirty - bomb in possession of a terrorist, and if we kill these two thousand people tonight, there's a 99% chance that one of the casualties will be the suspect. If you're sending in a bunch of robots to break down the doors and shoot people in the face, what decision do you think will be made?

When the only thing holding the Pentagon back is their own set of ethical questions, we are all fucked.

Re:We're doomed even if it is flawless (4, Interesting)

Bigjeff5 (1143585) | more than 5 years ago | (#28020103)

Right, because we have the capability of doing just that with nukes now, nevermind robots, and it has been such a problem for us over the last 50 years...

Only an idiot would think physical separation from the battlefield immediately reduces the gravity of killing a human being. You still know it's a human being you are killing; the separation doesn't change anything. You could make the case that it reduces the trauma of being mid-fight, but that only puts more emphasis on the fact that you are killing someone: you don't have the fear of your own death to force your hand.

By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away, which would be more difficult than launching a missile from tens of miles away, which would be more difficult than pressing the button to launch an ICBM.

The logic doesn't follow, because as you move farther away and impact more people, the decision becomes more and more difficult. The decision at point blank is simple: act or die. Traumatic? Yeah, some people are screwed up for life because of it. Do you have time to weigh the fact that you are about to end another human being's life? No, you don't. Making the decision is easy; living with the consequences is difficult. It doesn't change much when you make that decision from half a world away through a monitor. If anything, without the stronger pressures of battle to force the decision, it could be harder on a person's psyche to make the decision to kill, and they'd be more likely to question their own actions.

For some reason, you are assuming that physical separation suddenly turns people into sociopaths. It's the same reasoning that makes the asinine argument that video games desensitize kids and turn them all into violent killers. It's just not the case. You're basically saying soldiers in the drones can't tell that those are real people they are killing. That's just stupid.

It is a bad thing (3, Insightful)

Roger W Moore (538166) | more than 5 years ago | (#28020115)

And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.

Yes, it is fundamentally a very bad thing. First, instead of being limited to one trigger, that unstable human can now pull hundreds of triggers simultaneously. The robot will never question his orders; it will simply comply, no matter how morally questionable the order is.

Secondly, the one big way in which democracy helps maintain peace is that the people who will do the dying in any conflict are the ones who also effectively control the government through their votes. If Western democracies can suddenly send in robots instead, they are far more likely to go to war in the first place, which is never a good thing.

Re:Fundamental change (1)

pizzach (1011925) | more than 5 years ago | (#28020135)

And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.

That is the obvious part, my friend. The question is: when something goes wrong with a robot instead of a human, how much harder will it be to stop? I think the feeling of powerlessness also scares people.

please put down your weapons (1)

Anonymous Coward | more than 5 years ago | (#28019389)

you have 20 seconds to comply

Illegal (3, Insightful)

schlick (73861) | more than 5 years ago | (#28019391)

It should never be legal for a robot to "decide" to take lethal action.... Ever.

Re:Illegal (1)

Cajun Hell (725246) | more than 5 years ago | (#28019585)

If you outlaw killbots, then only outlaws will have killbots. And if the killbots don't have pre-set kill limits, then that means the outlaws will win.

Re:Illegal (2)

Trebawa (1461025) | more than 5 years ago | (#28020121)

Even if there are legal killbots with preset limits, the outlaws will still have killbots without them.

Re:Illegal (2, Interesting)

artor3 (1344997) | more than 5 years ago | (#28020037)

Yeah, clearly the right thing to do is send good ole fashioned humans [wikipedia.org] over there to fight. No way that could ever go wrong. /sarcasm

Robots can be made not to have feelings of vengeance or anger, which means they won't go murdering civilians. They will do what robots always do, which is to say, EXACTLY what they are told to. If they kill civilians, it's due to human error, not because they're "evil".

Let's say a battle happens near your town. People are going to be shot, and die, and you (a civilian) could be one of them. Would you rather have that decision made by:

A) A team of highly-trained emotionally-detached engineers, working for years to ensure minimal casualties.

or

B) A team of stressed-out twenty-somethings who just watched their best friends get blown to pieces by your next-door neighbor, and have to make a snap judgment about whether you're going to do the same to them.

Re:Illegal (2, Insightful)

Allicorn (175921) | more than 5 years ago | (#28020221)

Sadly, your humble, kindly engineers will just build and maintain the thing. It'll be a committee of politico-military-management-morons that decide what instructions the thing is given. :-(

Robot Warriors Will Lose (1)

sexconker (1179573) | more than 5 years ago | (#28019405)

Robots vs People:

Robots have to be "ethical" to people.
People don't have to be ethical. It's a fucking robot. Beat the shit out of it. Pretend to surrender then turn on the fucking thing when it treats you all nice like. "Oh, mr robot, I'm so cold and sick. I'm bleeding, too, help me." Then you attack the piece of shit.

Robots vs Robots:
The least "ethical" side has a distinct advantage.

People vs Robots:
The least "ethical" side has a distinct advantage.

Why would it be any different when robots are involved?

I dare say our generation would need a war (Iraq, Afghanistan, etc. are not wars - they are political occupations) to appreciate what war really is.
When faced with ruthless, relentless destruction and elimination, you must respond in kind, or be eliminated. It's not nice, it's not a pretty picture, but that's what it is.

There are no rules in war.

Re:Robot Warriors Will Lose (1)

geekoid (135745) | more than 5 years ago | (#28019865)

"There are no rules in war."

Of course there are, don't be daft.

Re:Robot Warriors Will Lose (2, Insightful)

Renraku (518261) | more than 5 years ago | (#28020077)

Good points, but I don't think this is about robotic soldiers lumbering over battlefields just yet. I think this will, at first, be more about semi-automated fire control systems and drones. For example, a future Predator drone might decide to wait to fire its Hellfire missile if it thinks there are too many civilians in the area and the projected accuracy is too low due to interference. Or a point-defense system might see a kid walking around in a field and decide that he's not a threat, because he's not carrying any weapons or moving in a threatening manner.

Since our drones are still somewhat dumb, most of the ethical considerations are the responsibility of the programmers and project commanders. For example, that drone might be programmed to distinguish straight dusty road with no other cars or civilians around from twisty road in the middle of downtown with lots of civilians walking around and a poor chance to hit the target.

Besides, if a robotic soldier were pointing a gun at you and demanding that you surrender, it would probably be tracking you with multiple sensors and would blow your face off as soon as your finger twitched in the direction of your gun.
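[Editor's note: the parent's two examples boil down to threshold checks on estimated quantities. A deliberately tiny, hypothetical sketch follows; the threshold and inputs are invented for illustration.]

def clear_to_fire(projected_hit_probability: float,
                  civilians_near_target: int) -> bool:
    """Hold fire unless projected accuracy is high and no civilians are
    nearby. Real fire-control rules would be far more involved; this only
    illustrates the shape of the check."""
    return projected_hit_probability >= 0.9 and civilians_near_target == 0

print(clear_to_fire(0.95, 0))   # True: straight dusty road, nobody around
print(clear_to_fire(0.55, 12))  # False: twisty downtown street, crowds nearby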

Can I? Huh? Can I? (-1, Redundant)

arizwebfoot (1228544) | more than 5 years ago | (#28019475)

If they look like Seven of Nine, can I take one home with me and reprogram it?

Humans (5, Insightful)

DoofusOfDeath (636671) | more than 5 years ago | (#28019477)

But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own?

Why is this a when question, rather than an if question?

Re:Humans (1)

selven (1556643) | more than 5 years ago | (#28019795)

Because it has to happen eventually. Any human introduced into the loop adds a few seconds of reaction time, and in the modern "first guy to shoot the nukes/missiles at the other side wins" style of warfare, a military commander who lets robots do all the quick decision-making will always win.

Also, what's with this idea that keeps floating around that humans are somehow by definition more capable of moral thought than robots? I would argue that robots are much more capable of such thinking - humans get bogged down by concepts such as "the death of one is a tragedy, the death of a million is a statistic" and "they killed my brother, at least X people must die to pay for that"

Re:Humans (1)

Fallingcow (213461) | more than 5 years ago | (#28019805)

Next you'll be telling me that we were too preoccupied with whether or not we could that we never stopped to think about whether we should.

I'm telling you, those electrified fences are foolproof. Now go enjoy the tour.

Meanwhile, back in reality (1)

rastoboy29 (807168) | more than 5 years ago | (#28019485)

Battlefield situations where all non-combatants have already fled do not exist.

This is why war is bad, mmkay?

Re:Meanwhile, back in reality (2, Insightful)

geekoid (135745) | more than 5 years ago | (#28019855)

What? This isn't true; there have been many battlefields with no civilians present.

Re:Meanwhile, back in reality (0)

Anonymous Coward | more than 5 years ago | (#28020063)

[Citation needed]

Tough calls (3, Interesting)

FTL (112112) | more than 5 years ago | (#28019567)

Even on a battlefield devoid of both enemy and non-combatants, deciding when to shoot can be extremely difficult. Consider the case (which occurred in Iraq) where one group of soldiers is fired upon by another group from the same side. Yes, that's a tragic blue-on-blue action. But the interesting question is what the soldiers on the receiving end should do. Assuming communications aren't working, do they:
a) Sit back and get slaughtered.
b) Fire back and take out the aggressors.
One consideration is the size of the forces involved. Another consideration is the importance of the missions each side is involved in.

Making a robot handle these cases would be interesting.
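[Editor's note: as a toy illustration of how a robot might "handle these cases", the two options and the considerations the parent lists (force size, mission importance) can be dropped into an explicit expected-cost comparison. Every number and weight here is invented; the point is only that the trade-off becomes a computation.]

def cost(soldiers_lost: int, mission_value_lost: float,
         life_weight: float = 10.0) -> float:
    """Toy cost model: weighted lives lost plus mission value lost."""
    return life_weight * soldiers_lost + mission_value_lost

# Option (a): sit back and absorb the friendly fire.
sit_back = cost(soldiers_lost=8, mission_value_lost=5.0)

# Option (b): fire back and take out the (friendly) aggressors.
fire_back = cost(soldiers_lost=3, mission_value_lost=20.0)

print("fire back" if fire_back < sit_back else "sit back")  # "fire back"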

Re:Tough calls (0)

Anonymous Coward | more than 5 years ago | (#28019843)

Really, two options there?

How about waving a flag or screaming "flash" or "thunder"? I'm pretty sure that under no circumstances is it OK to fire back on your own side.

Re:Tough calls (1)

wolf12886 (1206182) | more than 5 years ago | (#28019953)

This is actually one of the classic decisions that's a lot easier with robots than with humans. If the soldiers getting shot at are humans, there really is no good course of action except maybe trying to surrender; but for a robot it's easy: just sit back and get slaughtered. All that'll be lost is some easily replaceable machinery.

Robots have a significant advantage when making decisions involving their own safety. For them, self-defense is optional.

Take the following scenario, for example: an individual within a combat zone is seen entering a building in front of a convoy, carrying something which may or may not be an RPG. Human soldiers couldn't take the chance and would probably just blow the building apart, whereas if a robot were available, it could be sent in to clear the building, possibly avoiding the killing of an innocent civilian.

Re:Tough calls (0)

Anonymous Coward | more than 5 years ago | (#28020041)

But what about when dealing with robot on robot action? In that case I'd rather the $1m war machine take out the fritzy $1k drone that's got a loose circuit and is strafing it.

One rule. (0)

Anonymous Coward | more than 5 years ago | (#28019569)

Kill 'em all. Let God sort it out.

This will be great until (1)

EEPROMS (889169) | more than 5 years ago | (#28019611)

you get some idiot playing with his FoF (Friend or Foe) tag while in an active combat zone

Soldier 1: "Hai, look at me, now I'm a good guy [takes FoF tag off], now I'm a"
BANG!!.......Thump
Soldier 2: "I swear, we lose more first-timers that way than any other"

Re:This will be great until (1)

Lloyd_Bryant (73136) | more than 5 years ago | (#28020169)

you get some idiot playing with his FoF (Friend or Foe) tag while in an active combat zone

Soldier 1: "Hai, look at me, now I'm a good guy [takes FoF tag off], now I'm a"
BANG!!.......Thump

Soldier 2: "Ah crap - Hey! Sarge! Need to fill out another Form 7095/36b. You want to fill out the Darwin Award entry while I'm at it?"

Ethical Robots? (2, Interesting)

Mr_Tulip (639140) | more than 5 years ago | (#28019643)

I think it's great that someone is drafting some ground rules for what will undoubtedly become the 'future of warfare', but I wonder how this can possibly be enforceable in the real world.

The 1st-generation robots will have the governor software, but once the second gen hits, made cheaply by a rogue state, then things will get complicated very quickly. And unlike nuclear weapons, which are kept under control because the materials and technology are relatively hard to come by, I reckon that death-bots will be made of far more readily available materials, and easily mass-produced.

There are rules of engagement now which many armies happily ignore, so how can the world enforce a rule that only ethical robots will be able to autonomously fire weapons?

Perhaps the software that allows the autonomous behaviour can be encrypted and protected in such a way that it is difficult to reverse-engineer, though once an enterprising hacker gets his hands on the hardware, it's only a matter of time before an open-source version, curiously missing the 'ethics governance', is available as a .torrent somewhere.
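[Editor's note: one concrete, hypothetical version of that "encrypted and protected" idea is code signing: the robot refuses to load an autonomy module unless its authentication tag verifies against a key held in tamper-resistant hardware. The stdlib-only sketch below uses an HMAC where a real design would use public-key signatures and a hardware root of trust, and, as the parent predicts, anyone who extracts the key or patches out the check defeats it.]

import hashlib
import hmac

FACTORY_KEY = b"stand-in for a key sealed in secure hardware"

def sign_module(module_bytes: bytes) -> bytes:
    """Produce the tag the factory ships alongside an autonomy module."""
    return hmac.new(FACTORY_KEY, module_bytes, hashlib.sha256).digest()

def load_autonomy_module(module_bytes: bytes, tag: bytes) -> bool:
    """Refuse to run any module whose tag does not verify."""
    expected = sign_module(module_bytes)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

official = b"governor v1: ethics checks enabled"
hacked = b"governor v1: ethics checks removed"

tag = sign_module(official)
print(load_autonomy_module(official, tag))  # True: module runs
print(load_autonomy_module(hacked, tag))    # False: module refused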

anyone who shoots at you is a legit target (2, Insightful)

Cajun Hell (725246) | more than 5 years ago | (#28019663)

..a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.

In any war zone (regardless of who has fled and who hasn't), isn't anyone who shoots at you defined as a combatant and a legitimate target?

Robot Warriors Will Get a Guide To Ethics (1)

geekoid (135745) | more than 5 years ago | (#28019671)

.. and have it strapped to the outside of their chassis.

Is this a promo? (3, Insightful)

greymond (539980) | more than 5 years ago | (#28019675)

Was this article an attempt to promote Terminator 4?

Over-ethical? (1)

digitig (1056110) | more than 5 years ago | (#28019701)

such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.

You know, on a battlefield I'd be inclined to think that anybody who shot at me was a legitimate target whether non-combatants had fled or not...

I know this *seems* like a bad idea (2, Interesting)

RexDevious (321791) | more than 5 years ago | (#28019727)

but don't human soldiers, at their best, pretty much just follow algorithms - a combination of training and orders - already?

The big difference is that human soldiers are taught to defend themselves - whereas that wouldn't really fly with robots. If the guys at the checkpoint slaughter a family of five because they didn't stop, they get investigated and it's determined that - sad but true - killing everything that doesn't do what you say is the only way to protect the troops (short of removing them from other people's countries, which apparently defeats the point of having soldiers). If a robot did that, though, it would be considered "flawed" and recalled. You can't get much sympathy with "but our *machines* could have been in danger!!!". So you wouldn't give them that order.

Plus, it's really the supplier who gets to decide how deadly to make these things. While the government that buys them might rather have non-combatants killed than even risk losing multi-million-dollar robots, the supplier would *much* rather sell the government more of them than risk the fallout from a wrongful-death incident.

Yes, soldiers mess up, as will robots - but experience with both men and machines has so far shown me that when humans mess up they're more likely to hurt something, and when machines mess up they just stop working.

So as counter-intuitive as it is, as long as the culture still considers robots potential evil killing machines (eg, using the skynet tag on this article), it seems we'd all actually be better off using robots over humans. Well, until they become self-aware and enslave all - which is something a human army would *never* do!

Re:I know this *seems* like a bad idea (2, Insightful)

T Murphy (1054674) | more than 5 years ago | (#28020075)

Humans are (generally) concerned about self-preservation. Wrongfully killing someone could get them in jail or executed. Robots, on the other hand, simply decide based on some algorithm and have no concern about the effects of their actions. While you could try to boil down the soldier's logic to an algorithm, the key difference you can't resolve is that the soldier has free will, while the robot has no real choice of its own.

Another thing that's nice about restricting the ability to kill to humans is that a rogue soldier, no matter how well-trained, can be killed easily enough with the right application of force. We have no idea how advanced lethal robots could be. We don't have any reasonable guarantee that a rogue robot could be stopped.

Terrible idea (2, Interesting)

S77IM (1371931) | more than 5 years ago | (#28019733)

Autonomous killing machines are a terrible idea.

1. I don't like the idea of people killing people, but delegating that responsibility to machines seems downright stupid. There are too many things that could go wrong. (See the "youhave15secondstocomply" tag. Why doesn't this have a "skynetisaware" tag?)

2. Human remote pilots are cheap. Dirt cheap, compared to the cost of developing fully autonomous weapons. Human pilots may not be totally reliable, but at least they are very well understood and we know how to control them and shut them down quickly.

It would be much smarter and safer for all involved if we just put a strict moratorium on giving robots lethal capabilities or the ability to decide who to kill. AI technology would continue to advance in non-lethal robots.

  -- 77IM

Why do they have to be autonomous (1)

Joker1980 (891225) | more than 5 years ago | (#28019945)

What's wrong with human-controlled drones and bots? After all, that would remove human casualties on your side... until the other side develops something similar. Thinking about it, two armies of human-controlled or autonomous bots would essentially turn war into a video game.

legitimate target? (1)

CrimsonAvenger (580665) | more than 5 years ago | (#28019951)

such as a war zone from which all non-combatents have already fled, so that anybody who shoots at you is a legitimate target.

I've never thought of the people shooting at me as "non-combatants"...

Re:legitimate target? (0)

Anonymous Coward | more than 5 years ago | (#28020033)

If the non-combatants have left, only combatants are left, and therefore anybody shooting at you is a combatant.

Re:legitimate target? (1)

Fallingcow (213461) | more than 5 years ago | (#28020057)

... but so is anybody not shooting at you.

Re:legitimate target? (1)

justinlee37 (993373) | more than 5 years ago | (#28020199)

Isn't anybody shooting at you a combatant by default?

Children (1)

Beer_Smurf (700116) | more than 5 years ago | (#28020093)

What I have wondered is how a robot will respond to children with bedsheets.

what is the fallback mode? (2, Insightful)

Joe The Dragon (967727) | more than 5 years ago | (#28020095)

what is the fallback mode / data link lost?

crush kill destroy?

That's a relief (1)

imarsman (305818) | more than 5 years ago | (#28020153)

Now we can be sure that robots will never break the rules, just as nowadays phosphorus bombs never get dropped on civilians, nor cluster bombs, which in any case never lie around for years waiting to explode when picked up by a child. Who do these idiots think they are fooling? Rhetorical question, unfortunately: the same people who have been putting up with this sort of BS forever and a day.
