
New Laws of Robotics Proposed for US Kill-Bots

Zonk posted about 7 years ago | from the maybe-calling-them-kill-bots-is-a-bad-first-step dept.

Sci-Fi

jakosc writes "The Register has a short commentary about a proposed new set of laws of robotics for war robots by John S. Canning of the Naval Surface Warfare Center. Unlike Asimov's three laws of robotics, Canning proposes (pdf) that we should 'Let machines target other machines and let men target men.' Although this sounds OK in principle, 'a robot could decide, under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'"
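Canning's proposal can be read as a simple decision procedure, and the loophole the commentary describes follows directly from it. Here is a minimal sketch of that reading; all class and function names are invented for illustration and are not taken from Canning's paper:

```python
# Toy model of the proposed rule: machines may engage machines/weapons
# autonomously, but engaging a person requires a human's decision.
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                  # "weapon", "vehicle", or "person"
    held_by_person: bool = False

def may_engage_autonomously(target: Target) -> bool:
    """Return True if the robot may fire without human permission."""
    if target.kind == "person":
        return False           # "let men target men": needs a human decision
    return True                # "let machines target other machines"

# The loophole: an AK47 is classified as a weapon, so the robot may engage
# it on its own initiative; its holder becomes "collateral damage".
rifle = Target(kind="weapon", held_by_person=True)
print(may_engage_autonomously(rifle))  # True
```

Nothing in the rule itself inspects `held_by_person`, which is exactly the objection: the human dies as a side effect of a machine-on-machine engagement.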

373 comments

Robot laws (5, Insightful)

nurb432 (527695) | about 7 years ago | (#18734657)

Are for books and movies... In the real world the only law is to win. You can't come in 2nd in a war.

unless ... (2, Funny)

Anonymous Coward | about 7 years ago | (#18734711)

you're French.

... ducks ...

Re:Robot laws (4, Insightful)

jim_v2000 (818799) | about 7 years ago | (#18734737)

Plus robots are controlled by someone at a terminal...they don't control themselves. I think this whole discussion is pointless until we have AI.

Re:Robot laws (2, Interesting)

n__0 (605442) | about 7 years ago | (#18734853)

It's important to have the laws before the AI, otherwise the AI won't care so much for the laws. Although whether anything truly intelligent would strictly obey laws is debatable.

Re:Robot laws (4, Informative)

TubeSteak (669689) | about 7 years ago | (#18734885)

Plus robots are controlled by someone at a terminal...they don't control themselves.
Uhhh... no.
If someone is controlling it, at best it is a telerobot (semi-autonomous) or at worst, a telemanipulator.

A robot, by definition is autonomous and does not have or require human control.

http://en.wikipedia.org/wiki/Telerobotics [wikipedia.org]

Re:Robot laws (0)

Anonymous Coward | about 7 years ago | (#18734945)

You cant come in 2nd in a war.


You can't win a war either.

Re:Robot laws (3, Insightful)

WED Fan (911325) | about 7 years ago | (#18735173)

You can't win a war either.

Bullshit and liberal psycho-babble claptrap.

You get in fight, the other guy is bleeding more than you are and down for the count - You Win!

You get sued, the other guy loses more money than you - You Win!

You get into a war, you nuke the other guy into submission - You Win!

Yes, in each of these situations you lose something, blood, money, time, people, and equipment, but the other guy is worse off? You Win!

The only place your philosophy works is also the only place pacifism works: in a theoretical la-la world of perfect situations where everyone else thinks like you (god forbid that ever happens). The pacifist says, "I will not let you make me fight. Not even to defend myself." In your la-la world, the opposition says, "Gee, golly, gosh, he really means it, how could we ever think of carrying on in our evil plots? Let's sing kumbaya. Sorry." In the real world, the opposition says, "Great, kill this guy first. He's just a trouble maker. Now, let the tanks roll." The problem with pacifists is that for them to continue existing, and to keep trying to make their philosophy work and propagate, people like me, willing to carry a gun, willing to sign up and deploy, willing to kill the other guy and break his stuff, must defend their sorry asses even while they decry me for doing so.

Wars are not only won, but spectacularly so.

Re:Robot laws (1)

DwarfGoanna (447841) | about 7 years ago | (#18735401)

"...people like me, willing to carry a gun, willing to sign up and deploy, willing to kill the other guy and break his stuff..."


Yeesh. That being the case, I can think of places other than /. where you're currently "needed".

Re:Robot laws (0, Troll)

WED Fan (911325) | about 7 years ago | (#18735459)

Son, I've served my hitch. More than once. But, I still carry, and I still defend. And, /. is in dire need of some opposing views. Don't like it? Fine, I served so you can be an ass about others' rights and service. If I had a ribbon and bow, I'd wrap it for you. But, I'm not even going to hold out for a "thank-you" from someone like you.

Re:Robot laws (1)

Anonymous Coward | about 7 years ago | (#18735619)

"Son", you have an attitude problem. Clearly. It's schizoid to claim on one hand you're a defender, and then on the other hand, call people an ass for free speech. If you had a ribbon and bow, maybe wrap it around your mouth.

If you nuke someone (3, Interesting)

jd (1658) | about 7 years ago | (#18735437)

Then you die of radiation sickness eventually. Chernobyl was a mere chemical explosion and the fallout went how far? The US coast, as I recall. My father was involved in measuring the plutonium content of British rainwater. It was substantial, with parts of Britain hitting 2000 times normal background.

If you beat someone in court, you win? Oh, then the Sioux own the Black Hills. Hey, they won their Supreme Court battle to reclaim them, and by your rules that makes them the winner, right? Uh, no.

If someone's down because you punched them, you're the winner? Not in Texas, where this would give every citizen who had a clear view of events the right to shoot you dead under their new self-defence laws. Being dead makes for a lousy winner. (I don't like those laws, but that's not the point. The point is, one battle does not a war make.)

The British have long recognized the futility of talking about winners and losers. The notion that no such animals exist infuses their culture, their media, even their sci-fi. ("Whoever loses shall win, and he who wins shall lose." Dr Who, The Five Doctors. I won't get into Roger Price's routine dissing of the military, save to say that in his view, Homo Superior cannot kill - even in self-defence - and that is what makes them superior.)

Re:Robot laws (2, Interesting)

CptPicard (680154) | about 7 years ago | (#18735491)

The only place your philosophy works is also the only place pacifism works, in a theoretical la-la world of perfect situations where everyone else thinks like you (god forbid that ever happens).

The last bit you said is the disturbing part regarding your kind, and is really revealing. You really believe that it would be BAD if we lived in a world where this worked? You actually want war and thrive on it? This sort of stuff just makes me want, even more, the ability to diagnose embryos for conservatism (and don't you come complain that aborting them would be wrong; disabled ones are aborted all the time due to efficiency...)

Re:Robot laws (1)

WED Fan (911325) | about 7 years ago | (#18735575)

It would be bad, boring, and lacking in humanity. But, it would be peaceful. But, peace without fulfillment and challenge is Hell on Earth.

Besides, neither of us is in danger of it actually happening in our lifetime or in the lifetimes of our children for the next 100 generations, and probably 100 generations after that.

Re:Robot laws (1)

Valtor (34080) | about 7 years ago | (#18735565)

...The only place your philosophy works is also the only place pacifism works, in a theoretical la-la world of perfect situations where everyone else thinks like you (god forbid that ever happens)...
Why "god forbid that ever happens"? Whould that not be a good thing if humanity unites like that? I know it will probably never happen, but to hope that it won't, is just wrong. Don't you think?

Valtor

Re:Robot laws (1)

timeOday (582209) | about 7 years ago | (#18735659)

Yes, in each of these situations you lose something, blood, money, time, people, and equipment, but the other guy is worse off? You Win!
By your definition winning isn't necessarily good. I don't call it a victory unless the payoff is more than the investment. If you get in a bidding war on ebay and end up paying $100 for something you can get anywhere else for $50, ebay will still send you a congratulatory email calling you a "winner." But guess what, you're still a loser.

Re:Robot laws (2, Insightful)

JordanL (886154) | about 7 years ago | (#18734949)

The US military takes the same approach to the Geneva Conventions regarding the use of .50 cal bullets on humans. Technically, you can only use .50 cal guns on equipment, but the US military maintains that clothing, guns, ammunition, flashlights, and other things the enemy may be carrying constitute targetable equipment.

Re:Robot laws (4, Insightful)

dave420 (699308) | about 7 years ago | (#18735153)

And that's why people all over the world don't take kindly to US forces being near them, regardless of their expressed intent. Collateral damage might only be paperwork to the US forces, but to those directly affected, it's just another reason to fight back. Each death makes a whole family your enemy.

Re:Robot laws (1, Flamebait)

nurb432 (527695) | about 7 years ago | (#18735379)

You have got to be kidding.

You don't think that some muslim that blows himself up in a car bomb cares about collateral damage? Hell, that is his main intent...

Re:Robot laws (4, Interesting)

Terminal Saint (668751) | about 7 years ago | (#18735455)

The old "no using .50s on personnel, they're for equipment only" fallacy gets thrown around a lot. In fact, my best friend even had it told to him when he was in basic. According to a DOD legal briefing: nothing in the Geneva or Hague Conventions prohibited the use of .50 cal weapons on enemy personel, the Hague Conventions only apply to signatory nations' UNIFORMED military personel, and US military personel always have the right to defend themselves and other personel with deadly force, using whatever weapon(s) are available; including fists, rocks, pointy sticks, knives, shotguns, cannon, etc.

Re:Robot laws (2, Interesting)

repvik (96666) | about 7 years ago | (#18735509)

The keyword here is "defend". The Norwegian army's standard issue is the Heckler & Koch G3 (slightly modified, renamed the AG3, and produced on licence in Norway). It uses 7.62mm rounds. Norwegian "Special Forces" are equipped with H&K MP5s. The reasoning behind this is that we are allowed to *defend* our country with the AG3, but we cannot use the same weapon in an *attack*, thus we have to equip our "attack forces" with MP5s. The same applies to .50 cal (12.7mm), no matter how much the U.S. tries to twist its way out of restrictions that apply to everyone.

Re:Robot laws (0)

Anonymous Coward | about 7 years ago | (#18735631)

You are there to kill people, so who cares if it's "humane" or if some convention held by people who will never see combat themselves bans a weapon's use? I don't want my friends getting killed because we aren't allowed to use all of the weapons systems in our armory.

Re:Robot laws (1, Insightful)

Anonymous Coward | about 7 years ago | (#18735583)

What a bold lie. The .50 cal example is used in every Laws of Armed Conflict briefing I get. It's clearly against the law to use it as an AP weapon. We get told this at least once a year. You sir, are a liar.

Re:Robot laws (1)

edwardpickman (965122) | about 7 years ago | (#18734965)

Actually the three laws aren't about winning, they're about not losing, to the robots. It may be hard to imagine a Roomba being a threat, but in the 1800s no one could have predicted the last 100 years. In another hundred years we may be dealing with large numbers of autonomous robots. Do you want the Bush administration's protocol of kill all the enemy, or Asimov's three laws? There's short-sighted and then there's visionary.

Re:Robot laws (1)

nurb432 (527695) | about 7 years ago | (#18735129)

What I'd prefer is that this could have remained a halfway intelligent discussion, instead of it degenerating into a totally off-topic Bush bash.

Geesh, get over it already.

  In regards to your question: in war the goal is to win. Eliminating the enemy is an effective way of doing this.

Re:Robot laws (1)

rts008 (812749) | about 7 years ago | (#18735027)

As long as they don't let Bender program the laws into the killbots. You know: Bender, the "kill all humans" robot.

But blackjack, and hookers are okay.

Re:Robot laws (0)

Anonymous Coward | about 7 years ago | (#18735101)

You cant come in 2nd in a war.

No, 2nd place is exactly where we want the robots to come in the war.

1st place = us (for any value of "us" that includes "me")

2nd place = robots

3rd place = the enemy

Re:Robot laws (0)

Anonymous Coward | about 7 years ago | (#18735627)

I've never seen a more succinct description of survival.

Re:Robot laws (2, Insightful)

dave420 (699308) | about 7 years ago | (#18735125)

And if by winning you flush the morals of your country down the drain? That's cool? So by your logic the Germans were damned-right in killing 6,000,000 jews, the Americans were spot-on destroying countless villages in Vietnam, the British were fine having concentration camps in the Boer War, and Mao was cool killing 60,000,000? 'Cos they had to win, and nothing else mattered, so it's all good. Brilliant logic.

Oh yeah? (3, Interesting)

jd (1658) | about 7 years ago | (#18735197)

I can name plenty of nations that have come "second" in a war - and yet outlasted those who "beat" them. The Scots were crushed by the Romans (the Antonine Wall is practically on the northern beaches), mauled by the Vikings and butchered by the Tudors. Guess who outlasted them all? I'll give you a clue - they also got their Stone back.

They're not the only ones. The Afghans - even with legally-dubious US support - never defeated the Russians, they merely lasted longer than the Russian bank accounts. The Celts were amongst the worst European fighters who ever lived, getting totally massacred by everyone and their cousin Bob, but Carthage stands in ruins, the Angles and Saxons only survive in tiny isolated communities in England and America (Oppenheimer's "The Origins of the British" shows that W.A.S.P.s exist only in their own mind, they have no historical reality), but the Celtic nations are actually doing OK for themselves at the moment.

Arguably, Serbia won the Balkans conflict, having conquered most of the lands belonging to their neighbors and slaughtered anyone who might claim them back. Uh, they're not doing so well for having won, are they? Kicked out of EU merger talks, Montenegro calling them a bunch of losers, Kosovo giving them the finger...

Hell, even the United States won in Iraq, as far as the actual war went.

Winning is the easy part. Anyone can win. Look how much of the world the British conquered. The British won far more than most nations could ever dream of. Yet contemporary accounts (I own several) describe the Great Exhibition as a PR stunt to create a delusion of grandeur that never existed. The Duke of Wellington, that master of winning, was described as a senile buffoon who was dribbling down his shirt and had to be propped up by others to stay on his horse. What's left of the Commonwealth shows you all too well that those descriptions of delusion were the reality, not the winning and not the gloating.

History dictates that who comes second in a war usually outlasts those who come first.

Re:Robot laws (2, Insightful)

NMerriam (15122) | about 7 years ago | (#18735327)

Are for books and movies.. In the real world the only law is to win. You cant come in 2nd in a war.


On the contrary, winning at any cost is often far worse than losing. A Pyrrhic Victory [wikipedia.org] often invites an even greater disaster in the future, but simply losing a fight means you can continue fighting in other ways, or fight again later when you've marshalled your strength and more carefully evaluated the enemy's weaknesses.

I'd draw parallels to current world events, but anyone willing to shred the Constitution just to be able to kill a few Al Qaeda members is probably not interested in learning real political or military history.

people always get this wrong (1)

rucs_hack (784150) | about 7 years ago | (#18735483)

Asimov's Three Laws of Robotics were a literary device he used to demonstrate the fallacy of attempting to control a robot by restricting its behaviours. If you read the stories, it's always about how poorly they work.

Most people don't know that even now we have a pretty hefty problem with Neural Networks. It is impossible to train a behaviour into a neural network without inserting the inverse behaviour. There is also no way to be 100% sure that the neural net won't ever access the region that contains the inverse behaviour. Mostly this is an irritating problem encountered in research that buggers experiments. Industrially utilised neural networks are usually ones tested and found to work well.

It's not too hard to get your head round. Let's look at a fictionalised example. To tell a robot 'do not hit a human', it must first know what constitutes hitting a human. Whether you explicitly tell it how to hit a human or not, the knowledge will be there, inferred if you like, from your 'do not hit' instructions. In other words, try as you might to do otherwise, you will in fact teach it how to hit humans, and you cannot be 100% sure that it will never access that knowledge.
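The inverse-behaviour point can be illustrated with a toy classifier (invented data and features, not a result about any real system): training a model to recognize "this action would hit a human", so the action can be forbidden, produces the very weights needed to seek such actions out.

```python
# Toy illustration: a "do not hit" rule learned by logistic regression.
# Negating its decision yields a "hit" detector using the same weights.
import numpy as np

rng = np.random.default_rng(0)
# Invented 2-feature data: feature sum > 0 means "this action strikes a human".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # 1 = "would hit", to be avoided

w = np.zeros(2)
for _ in range(500):                         # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def forbidden(action):
    """The trained 'do not hit' rule: flag actions that would hit."""
    return (action @ w) > 0

def hit(action):
    """The inverse behaviour, obtained for free from the same weights."""
    return not forbidden(action)
```

The network was only ever asked to *avoid* hitting, yet `hit()` falls out of it by flipping one comparison; the knowledge of what hitting is lives in `w` either way.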

The only real way to get a safe sentient machine is to give it free will and no reason to be afraid of us. Contrary to SF, and some misinformed Computer Science professors oft quoted on the Discovery Channel and criticized on The Register, robots wouldn't want to hurt us or take over unless we gave them a good reason. In other words, if we justified that course of action.

Personally I think the biggest problem when machines become sentient will be getting the buggers to stay here.

Three rules... (5, Insightful)

pla (258480) | about 7 years ago | (#18734705)

1) Spell "Asimov" correctly when submitting an article to Slashdot.

2) The military will program their toys to kill anyone and everything, and to hell with Asimov (right up until they turn on us)

3) Humans already count as collateral damage in warfare. Damn the men, spare the oilfields!

Re:Three rules... (1, Informative)

Brandybuck (704397) | about 7 years ago | (#18734907)

Damn the men, spare the oilfields!

Nice in theory, but in reality China, India and Vietnam [cnn.com] are getting the oil before the US does. It's almost as if... as if... the invasion wasn't about oil after all!

Re:Three rules... (2, Insightful)

Anonymous Coward | about 7 years ago | (#18735083)

Yes, because it's not like the guys in power and their friends are benefitting at all from a tripling in the price of crude.

Re:Three rules... (1, Troll)

webmistressrachel (903577) | about 7 years ago | (#18735425)

Absolute rubbish.

It specifically states in your linked article that the Asian contracts are tiny and are only happening because China et al are willing to send people where the West isn't!

It then goes on to say that these are tiny contracts and that British and U.S. interests are biding their time for longer-term, more secure, higher volume contracts ("up to 6 million barrels a day"). It also says that Bush saw the arrangements for Iraqi oil sales before Iraq's own parliament saw them (or voted??)!

The evidence of foul play is all over the US, the news, and even the article you use in the US govt's defence, but most of us are too dumb to see it. That's the result of the dumbed down idea that "the market will provide" we've been living under since the War.

It's time people realised that we really are living in a world planned around the idea that everybody will betray everybody else whenever they can for profit; which has become a self-fulfilling prophecy because the people who don't think like sharks are left behind financially and thus do not have a voice.

My case will rest even stronger if I get modded "Troll" or something, because if you take an objective view of what is happening, watch a few documentaries (I recommend the BBC's "The Trap" or "The Century of the Self"), etc., you will see all of this in every political speech you watch, nearly every news item, the way the banks treat their customers, social security; it's endemic. It's a giant pyramid scheme based on money.

Go on, mod me a troll. Prove my point.

Then read some of my previous posts. And perhaps you'll see that there REALLY is a pattern of idiocy in the way you're happy to have freedoms taken away in the name of ... errr ... freedom??

Rachel

Re:Three rules... (0)

Anonymous Coward | about 7 years ago | (#18735143)

3) Humans already count as collateral damage in warfare. Damn the men, spare the oilfields!
Anti-material sniper: "I was aiming for his canteen!"

Oh no! (1)

glwtta (532858) | about 7 years ago | (#18734723)

What are we going to do when our robots autonomously decide to kill us???

I will be losing a lot of sleep over this in about 300 years.

Worthwhile pursuit (3, Insightful)

Mateo_LeFou (859634) | about 7 years ago | (#18734895)

There aren't any immediately-practical uses for robotics laws, but if it gets people thinking about ethics & technology I'm all for 'em.

Re:Oh no! (1)

SeaFox (739806) | about 7 years ago | (#18735261)

What are we going to do when our robots autonomously decide to kill us???

We'll send wave after wave of our own men after them and once the robots have reached their pre-determined kill limit they'll shut down, and we'll return victorious. I see medals in our future.

huh (3, Insightful)

gravesb (967413) | about 7 years ago | (#18734727)

This assumes a level of optical recognition that is missing in current robots. Also, once you let these things go, there is a ton of reliance on the programming and the technology. In my opinion, there should be no autonomous robots on the battlefield. Drones are one thing, with the pilot safe elsewhere, but completely automated robots are another.

Re:huh (2, Insightful)

HomelessInLaJolla (1026842) | about 7 years ago | (#18734819)

This assumes a level of optical recognition that is missing in current robots
Once the Borg assimilate everyone then the lines will become rather fuzzy. We've already taken the first few steps by RFID'ing everything, chipping our pets (I hear it's mandatory in California), and some companies have even chipped their employees.

How is a robot supposed to know the difference?

Reminds me of that scene in RoboCop (0)

Anonymous Coward | about 7 years ago | (#18734743)

Where they're displaying the giant robot cop... and it orders the guy to drop the weapon while it begins counting down, and then continues counting down after he does drop it. Uh oh.

Asimov. With an "S". (1)

Fourstrongwinds (1031240) | about 7 years ago | (#18734745)

Could we perhaps have Isaac Asimov's name spelt correctly? I know it's Saturday and all. . .

Re:Asimov. With an "S". (2, Interesting)

gmuslera (3436) | about 7 years ago | (#18734871)

Well, one of Asimov's best short stories was "Spell my name with an S", where the character changed the 1st letter of his name from Z to S. All the Zebatinskys of the world got their revenge now :)

The whole concept... (0)

Anonymous Coward | about 7 years ago | (#18734759)

of rules that dictate how robots behave implies a level of abstract thinking that robots simply don't, and in the near future won't, possess. You can't code "don't kill a human" into a robot because a robot doesn't know what "don't", "kill", or "human" mean. I don't understand why we even talk about "rules" that govern robot behavior. It makes as much sense as saying that operating systems should be governed by the rule "don't lose user data".
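One way to see this point in code: any such "rule", once written down, bottoms out at whatever the perception layer happens to label things. A hypothetical sketch (all names invented for illustration):

```python
# The "law" below looks like it forbids harming humans, but all of the hard
# work is hidden inside classify(), which in a real system would be a
# fallible statistical model, not ground truth.

def classify(obstacle: dict) -> str:
    # Stand-in for a perception system: just reads a label we supplied.
    return obstacle.get("label", "unknown")

def fire_permitted(obstacle: dict) -> bool:
    # The entire "rule" is one string comparison. "Don't kill a human"
    # is only as meaningful as the labels the robot can produce.
    return classify(obstacle) != "human"
```

A misclassified person (`"label": "unknown"`) sails straight through the rule, which is the commenter's point: the rule never touched the concept, only the label.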

Laws of Roombotics (0)

Anonymous Coward | about 7 years ago | (#18734763)

Offtopic a bit, but in case you missed it, this Onion article [theonion.com] is hilarious.

Point missed (0)

Anonymous Coward | about 7 years ago | (#18734783)

Once again the point of robotic laws is missed: they are for robots that are ARTIFICIALLY INTELLIGENT, not just following pre-programmed instructions.

Re:Point missed (1)

cyphercell (843398) | about 7 years ago | (#18734991)

The article makes a comparison, but in reality these laws are more reflective of the Hague Conventions Laws of War regarding the projectiles fired from a firearm.

How having a human killed by a robot is worse ... (1)

vivaoporto (1064484) | about 7 years ago | (#18734793)

How is a human killed by a robot any worse than a human killed by another human? I think this would be more "feel good" legislation than anything. Rationalizing a kill doesn't make it any better. Countries and their stupid war games. The future (and in some amounts the present) of war is exactly this: unmanned drones and bombers, robotic infantry, intercontinental ballistic missiles operated by automatic systems, thousands of killings based on button pushing. Rationalizing it will not make it go away; people should be more honest about the means and reasons of their wars, that would only make them quicker and less bloody.

Disclaimer: I'm completely opposed to any kind of war of aggression, defending one country within its borders is the only kind of "acceptable" war IMHO.

Re:How having a human killed by a robot is worse (1)

iminplaya (723125) | about 7 years ago | (#18734917)

Borders are slavery, not protection, and are the most often used rationalization to start a war. Just tear the damn things down.

Oblig. Robocop quote (1)

vivaoporto (1064484) | about 7 years ago | (#18734829)

ED-209 [unclerummy.com] : PLEASE PUT DOWN YOUR WEAPON. YOU HAVE 20 SECONDS TO COMPLY.
Dick Jones: I think you'd better do what he says, Mr. Kinney.
[Alarmed, Kinney quickly tosses the gun away. ED-209 growls menacingly.]
ED-209: YOU NOW HAVE 15 SECONDS TO COMPLY.

Sounds more like RoboCop laws of robotics... (4, Insightful)

RyanFenton (230700) | about 7 years ago | (#18734833)

It's like RoboCop: you shall not harm any employee of your owners. But you have the authority to find a way to get them fired, and THEN kill them. And no one found any problem with this until their boss was dead in front of them, and they realized they could be next.

Honestly though, I see value in a policy that no human life should be risked in automatic death systems - including land mines and other traps. These loopholes make that policy as useless as some RoboCop parody though.

Ryan Fenton

No, the original three laws work too. (2, Insightful)

jd (1658) | about 7 years ago | (#18735375)

Remember the Spacer worlds, who defined "human" to mean people like themselves? More than one book covered robots who killed - or tried to kill - humans because the definition had been made selective enough.

This reflects how real wars are fought, too. Name me one war in all of recorded history that did NOT involve first dehumanizing the enemy, by all sides involved. We see that even today, with German soldiers posing for pictures with the skulls of defeated enemies, or American soldiers posing by naked and shackled prisoners. You think these soldiers would be capable of such flagrant human rights violations if they first pictured their opponents as human? This isn't about a few bad apples, it's a product of training.

(As the character of Travis put it in Blake's 7, "I reacted as I was trained to react. I was an instrument of the service. So if I'm guilty of murder, of mass murder, then so are all of you!")

It's also an inescapable product of training. Like I said, dehumanizing isn't limited to a few people or a few wars - it has included ALL combatants in ALL wars in as much of history as we have enough of to comment on. If you want a totally humanized nation, you simply cannot have an armed forces. Likewise, if you have an armed forces, you simply cannot have a totally humanized nation. I don't run the country, so which is "better" is not my problem. What I can be sure of is you can't have it both ways.

did i understand this correctly? (1)

CaptainNerdCave (982411) | about 7 years ago | (#18734859)

does this imply that someone took asimov's laws of robotics seriously?
unless i've been left somewhere dark for too long, the last time i checked, robots were limited to their instructions...

"-instruction-" means "-instruction-", regardless of what that target is or what sort of damage it may cause

Re:did i understand this correctly? (0)

Anonymous Coward | about 7 years ago | (#18734959)

And you're limited to the biochemistry reactions in your body. That doesn't mean we don't need laws.

This takes all the fun out of killing. (0)

Anonymous Coward | about 7 years ago | (#18734889)

Okay now. The world has gone too far. If we have robots doing all of our killing then how are we satisfied with it? Seriously, I don't want some crazy robot taking all my fun away.

Protects against political problems, not sentience (2, Insightful)

interiot (50685) | about 7 years ago | (#18734897)

The article summary doesn't give the right impression... the proposed policy would allow machines to target military machines. (see p.15-16 of the PDF) Page 23 is the most interesting, saying that anti-personnel landmines are looked down upon in the international community because they linger after war and kill civilians, whereas anti-tank mines aren't looked down upon so much, because they can only misfire during an armed conflict. So the policy is mostly intended to address international political responses to war, not to prevent sentient machines from taking over the human race.

Though, it would limit somewhat the extent to which machines could enslave the human race... if humans never took up arms, machines could never take lethal action against humans. That doesn't mean machines couldn't control humans politically/economically/socially (eg. deny food, deny housing), but it does mean they couldn't take up a policy of overt extermination of all humans, unless humans decided to fight to the last.

And how would these "laws" be programmed? (3, Insightful)

rolfwind (528248) | about 7 years ago | (#18734901)

Until a robot can think, in such a way that it resembles how a human thinks, I think coming up with "laws" such as these is next to useless unless you want a philosophical discussion or a what-if scenario. We have a hard enough time trying to get robots to recognize images for what they are (AFAIK some high-end surveillance systems for the government can do this on a primitive level, i.e. they can't learn to recognize much beyond their programming), so how would you program such arbitrary, human concepts? Do we wave our hands and make it so?

Re:And how would these "laws" be programmed? (1)

sakdoctor (1087155) | about 7 years ago | (#18735033)

When those robots get to that level they will probably want to join the philosophical discussion too.

Armagedroid Strikes Again! (1)

Bones3D_mac (324952) | about 7 years ago | (#18734911)

Episode 20 of the show My Life as a Teenage Robot [wikipedia.org] depicts one possible scenario where an AI-based robot is given complete, unhindered control over the destruction of weaponry it finds. While it does initially achieve its intended goal of ending all war on earth, the robot's AI system eventually falls into a state of unrest during peacetime and starts attacking anything that could conceivably be used as a weapon indiscriminately.

What's interesting about this concept is: what would prevent an AI system with the authorization to kill humans as "collateral damage" from simply concluding that all humans are weapons in and of themselves and thus must be destroyed?

Humans are crafty, adaptive, violent creatures that are often much too defiant to be controlled through oppressive means.

Reminds me of a story (2, Interesting)

Shihar (153932) | about 7 years ago | (#18734941)

During the Vietnam War, a unit armed with anti-aircraft autocannons was surrounded by Vietcong. Technically, they were not allowed to open fire on anything other than equipment with such weapons. Not really being a fan of dying, the leader of this unit ordered his men to open fire and slaughtered the VC. During his court-martial hearing he was asked if he understood the rules of engagement. He said that he did. He was then asked if he had violated the rules of engagement. He responded that he did not violate his rules of engagement. He was asked how opening fire with his weapons upon half-naked VC did not violate his rules of engagement. His answer? He did not order his men to fire at the VC. He told his men to shoot at the VCs' guns and canteens; hence he was shooting at their equipment.

Reminds me... (1)

Kraegar (565221) | about 7 years ago | (#18734951)

Of what I was told about .50 cals -- you can't shoot at people, only equipment. So aim for their helmet.

Illegal weapons (1)

vandelais (164490) | about 7 years ago | (#18734969)

A friend of mine in the first Gulf War said that illegally modified weapons and anti-vehicular weapons for use against other soldiers were tolerated with a wink and a nod.
The understanding was that you couldn't shoot them at any person.
But you could shoot at their helmets, boots, canteens, etc.

He also said that before they left, the C.O. established an amnesty box where items could go, no questions asked, as long as they weren't brought on the plane. Among the contraband souvenirs that would have made it onto the transport plane home but were reconsidered: non-issue grenades, bazookas, etc.
They also included human-remains 'souvenirs', an anti-vehicle tank mine (bear in mind these were people from mostly the Southern U.S.A.), and items taken from Iraqis that had obviously been recently looted from the invaded Kuwaitis.

Killbots? A trifle. (5, Funny)

tripler6 (878443) | about 7 years ago | (#18734999)

You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down.

Killing by proxy, "collateral damage" (2, Informative)

SuperBanana (662181) | about 7 years ago | (#18735017)

'a robot could decide under Mr Canning's rules, to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.'"

The Geneva Convention frowns upon collateral damage [spj.org], though someone is not a civilian if they're holding a weapon (see the "spontaneously takes up arms" bit). That's not a good enough excuse. A person holding a gun is not necessarily a soldier. They could be a homeowner defending their property from looters, for example. That's why you are supposed to give a chance of surrender. Will a robot do that, reliably? Will a robot properly identify and treat hors de combat people?

Here's a bigger, related question: a robot is a) not a person and b) maybe more durable. A human soldier is allowed to fire in defense. Picture a homeowner in wartime, guarding his house. A robot trundles by, x-rays the house, sees the weapon, and charges in. He sees it heading for him, freaks out, and fires at it. How can the robot possibly be justified in killing him? Even if he represents a threat, he's only threatening a machine!

Second point: this is really just "killing by proxy." Regardless of whether you pull a trigger on a machine gun, or flip a switch on the General Dynamics Deathmachine 2000: if you knew your actions would cause injury or death, you're culpable. It's not the robot that is responsible when a civilian or hors de combat soldier is killed: it's the operators. Robots don't kill people: people build, program, and operate robots that kill people.

whoops! (1)

SuperBanana (662181) | about 7 years ago | (#18735291)

The Geneva Convention frowns upon collateral damage [spj.org], though someone is not a civilian if they're holding a weapon (see the "spontaneously takes up arms" bit). That's not a good enough excuse. A person holding a gun is not necessarily a soldier. They could be a homeowner defending their property from looters, for example. That's why you are supposed to give a chance of surrender. Will a robot do that, reliably? Will a robot properly identify and treat hors de combat people?

Whooooops. The first sentence was supposed to replace the last 3-4...

Robot Wars (1)

hack slash (1064002) | about 7 years ago | (#18735029)

I think the military should hold some special Robot Wars events for prospective killbot suppliers. I bet some kid and his dad would win with a contraption made from parts of an old washing machine or lawn mower, and Craig Charles would be able to have his hosting job back, which would be a little less demeaning than being on Coronation Street...

Re:Robot Wars (1)

Wishful (526901) | about 7 years ago | (#18735393)

Craig Charles is on Coronation Street now... I've been away from home way too long, it seems.

mostly harmless? (1)

tverbeek (457094) | about 7 years ago | (#18735043)

Let machines target other machines and let men target men
...and let women target women, and let small furry creatures from Alpha Centauri target small furry creatures from Alpha Centauri.

How binding would this dichotomy be on human soldiers? Would we see a war-crimes tribunal in which a human soldier is charged with targeting an autonomous machine?

And you know, as long as we're going to base protocols of engagement on superficial semantics, why not be more specific and only let generals target generals, lieutenants junior grade target lieutenants junior grade, etc? It'd be more like a summer camp activity, with each combatant running around looking for the enemy counterpart they're allowed to "tag".

Well (0)

Anonymous Coward | about 7 years ago | (#18735109)

A robot with decent AI will say "fuck those rules, let's PARTY!" anyway...

Premature (4, Insightful)

Spazmania (174582) | about 7 years ago | (#18735115)

Is it just me or is a discussion of ethics laws for robots premature given the state of the art in artificial intelligence? If you want to teach a machine not to harm humans, it helps to first teach the machine the difference between a human and every other object it encounters.

Re:Premature (1)

ardor (673957) | about 7 years ago | (#18735421)

Well, this is already possible. In fact, automated sentry turrets are deployed along the Korean border.

Yet again missing the point. (1)

God of Lemmings (455435) | about 7 years ago | (#18735145)

The Robot Laws themselves were originally taken from safety design principles used in actual robots.

In the way most people think of them, they cannot be practically implemented within an artificial intelligence of human-level ability. This is simply because such an artificial brain would be massively parallel by design, and would require something equally complex to detect violations of the laws: a second artificial brain. Such a brain would itself be subject to the same problem. There are ways to make it work; however, it is nontrivial.

On the low level, however, we can prevent some things from happening as long as the robot's brain is more of a software construct than a simulated brain. Proximity sensors and similar hardware can be installed in robots to prevent people from getting run over and such -- and in the few autonomous military robots, these already exist -- but not much more can really be done at this point.

Now, we may in the near future see robots that go around shooting autonomously, but such a robot would require that our soldiers be tagged with some sort of implant to identify them. Image recognition won't really work, as a soldier can always be a disguised enemy.

Make love, not war... (0)

Anonymous Coward | about 7 years ago | (#18735199)

...now where are my freaking sexaroids?

I mean, seriously, can you think of anything else that would mellow out even the most crazed of our world leaders and lead to a new age of global peace faster?

let's have humans over for lunch sometime (0)

Anonymous Coward | about 7 years ago | (#18735241)

i for one welcome our robotic overlords

Sure... (1)

Dunbal (464142) | about 7 years ago | (#18735277)

Let your robots target only my robots... I promise, enemy mine, that my robots will also obey the rules... heheheh.

Basic principle of warfare: Apply strength to the WEAKNESS. Humans are weak. Robots should actually target humans, they are far more effective that way. If there are no more humans, who is going to tell the enemy robots what to do?

John S Canning fails for not understanding the nature of war. Go ahead and keep building battleships, and ignore those aircraft...

Please, Please, Please... (1)

NeverVotedBush (1041088) | about 7 years ago | (#18735411)

Let them run a Microsoft OS! Then not only would they require periodic and untimely reboots, but they would spray spam and DDoS attacks wherever they were deployed.

"Sargent, sir, the latest Microsoft patch, deployed during the robot squad's advance on the target, requires a reboot. Not only that, but there are several zero day exploits that the enemy knows about. The server is telling us there will be an automatic reboot in 30 seconds. What are your orders, sir?"

Anyone Who Thinks They Won't Target Humans... (1)

NeverVotedBush (1041088) | about 7 years ago | (#18735487)

Is pretty damn naive.

With all the uproar over losing troops in Iraq, and all the embarrassment that brings to the Bush administration, you can bet everything you own and will ever own that a technically good robot will be deployed instead of people and will be used to target everything strategically significant in a war zone -- including people.

It is inevitable.

Bizarre (1)

Zaphenath (980370) | about 7 years ago | (#18735573)

I got a funny image when I read this. "The difficulty comes when the automatic battlers need to target humans. In such cases Mr Canning says that permission from a human operator should be sought."

I could only think of a stammering robot with a self-confidence problem, consulting a fleshy meatbag as to whether or not it is socially acceptable to pursue a violent course of action against a Homo sapiens, and adding at the end, "please?"

Sounds Reasonable (1)

Hercules Peanut (540188) | about 7 years ago | (#18735579)

a robot could decide to target a weapon system such as an AK47 for destruction on its own initiative, requiring no permission from a human. If the person holding it was thereby killed, that would be collateral damage and the killer droid would be in the clear.
That's what you get for buying foreign. Next time you need an assault rifle, pick up an M16! It's time to put the blame where it belongs.

We already do this (1, Insightful)

Anonymous Coward | about 7 years ago | (#18735587)

I'm an 03 in the Marines, and we have rules about certain weapon systems similar to this.

For instance, we can't target a person with an M2 .50 cal machine gun, because that's inhumane (gah), but we CAN target his boots, his body armor, his weapon, or any of his gear. So if anyone asks, we do that.

Who needs fancy new robots? The Marine Corps makes robots the old-fashioned way: with brainwashing!

Asimov (0)

Anonymous Coward | about 7 years ago | (#18735643)

Asimov [wikipedia.org] .

Sheesh. I suppose you could argue about the way it should be transliterated from Russian, but "Asimov" is the way he wrote it himself.

This is on par with spelling the name "Alan Touring" in an article about Turing machines [wikipedia.org] .