
UN Debates Rules Surrounding Killer Robots

timothy posted about a year ago | from the nothing-c-c-c-can-go-wrong-n-n-nothing-can-go dept.

The Military

First time accepted submitter khb writes "It seems the UN has started a debate on whether to place limits or bans on robots that can kill without manual supervision. Bombs, it seems, are viewed as 'kinder' than robots which might be programmed to achieve specific ends (e.g. destroy that bridge, kill anyone carrying a gun, etc.)."


robots can't kill people (5, Funny)

WillgasM (1646719) | about a year ago | (#43866561)

it's in the rules.

Re:robots can't kill people (5, Insightful)

Impy the Impiuos Imp (442658) | about a year ago | (#43866615)

It's also against the rules to assassinate enemy leaders outside war, but OK to initiate a war with a full frontal assault killing hundreds of thousands on both sides.

Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed.

Re:robots can't kill people (1)

JustOK (667959) | about a year ago | (#43866631)

wrong rules.

Re:robots can't kill people (3, Funny)

Cryacin (657549) | about a year ago | (#43867091)

Fight for robot rights! (Before they do)

Re:robots can't kill people (-1, Troll)

SplashMyBandit (1543257) | about a year ago | (#43867125)

It's also "against the rules" to deliberately target civilians - yet who takes notice?

Whenever I hear "United Nations" these days I pretty much assume it is the Organization of Islamic Cooperation (OIC) that are driving the agenda. They have the largest (57-country) voting bloc, you see, so they can do things like:

  • Take control of the UN Human Rights Council (HRC): this means barbaric Sharia can kill thousands each year and degrade women and homosexuals in ways against the UN Universal Declaration of Human Rights (refer to: http://www.thereligionofpeace.com/index.html#Attacks [thereligionofpeace.com] ). Slavery is going on *today* in Sudan. Not a peep is said. Yet, a member state is condemned by HRC Resolutions dozens of times for defending its own citizens against terror attacks involving rockets and suicide attacks.
  • Take control of the UN Refugee Agency: this means that countries are forced to accept waves of Muslims that do not assimilate (this has happened in my country), yet Copts and Assyrians who are under daily attack rarely if ever get refugee status these days
  • The HRC was able to pass a non-binding resolution (HRC 16/18) that wanted member states to criminalize statements "offensive" to religions (where the religion decides what is "offensive" or not - truth is not protected and may be considered offensive too). This is a fundamental attack against Free Speech (which is why it was amazing that even the thoroughly incompetent Hillary Clinton co-sponsored it, and as Secretary of State promised to prosecute/"name-and-shame" Americans practicing their First Amendment Free Speech Rights on US soil). Even worse, HRC 16/18 was just the "thin end of the wedge", and now that it has passed an even worse successor is planned.

The OIC realised it can't get its agenda through sovereign national parliaments - so what it is doing is manipulating the UN, and then the resulting treaties will be applied. Don't think it can happen? It already has. The Free World must dismantle supranational law-making bodies like the UN (and the EU - go UKIP!).

This move is clearly a move by the OIC to prevent Free People from defending themselves with drones against jihadis. The drones have been *very* successful at disrupting the networks so far, which is why the OIC is practicing "lawfare" to get them taken out of the sky. Yes, the drones do occasionally kill the wives and children of jihadis in their compounds. This is bad. However, for those that think the drones should be removed, just what do you propose to replace them with? or are you ok submitting to the Islamic political order under Sharia (which is the stated and published goal of the OIC, if you care to listen:
http://www.youtube.com/watch?v=JkAZUvQAzkc [youtube.com] ).

Drones are good for the defense of Free Peoples against jihadis. Just not in my backyard thanks :)

Here's a little video explaining how the OIC came to grab effective control of the UN with the help of the pro-Communist "Non-Aligned Movement". Yes, anti-Semites, it is produced in Jerusalem - it just so happens that the Israelis are acutely aware of the UN bias against them (thanks to the OIC, which is faithfully following hadith Sahih Muslim 6985). I assure you this is historically factual, so put away your bias for the four minutes it takes to understand the point I'm trying to make about the United Nations. Thank you :)
http://www.youtube.com/watch?v=j7Mupoo1At8 [youtube.com]

Dudes, this stuff matters much much more than the Windows vs Linux or Java vs C# or Apple vs Android wars. To quote, "You may not be interested in the war, but the war is interested in you". There is a shadow war for freedom that is going on right now. If you don't stand up and argue for your liberties then the OIC (through the UN) *will* progressively take them away - it may take decades, but they are determined to reach their goal (Sharia) because it is a matter of faith for them. It is easier to stand up now for your rights and stop them, then delay until it is really really oppressive before doing anything. Peace.

Re:robots can't kill people (2, Insightful)

TapeCutter (624760) | about a year ago | (#43867251)

Would you like a patch for your missing eye?

Re:robots can't kill people (4, Insightful)

khasim (1285) | about a year ago | (#43867435)

However, for those that think the drones should be removed, just what do you propose to replace them with? or are you ok submitting to the Islamic political order under Sharia (which is the stated and published goal of the OIC, if you care to listen:

So you believe that the ONLY alternative to drone attacks is to convert to Sharia law?

Who, exactly, is going to impose Sharia law on the US? And I don't mean who would LIKE to. Who, exactly, has that capability?

The OIC realised it can't get its agenda through sovereign national parliaments - so what it is doing is manipulating the UN and then the resulting treaties will then be applied.

How about you look up who has veto power at the UN. Here's a hint, the US is one of them. If we don't like it, we can veto it.

There is a shadow war for freedom that is going on right now. If you don't stand up and argue for your liberties then the OIC (through the UN) *will* progressively take them away - it may take decades, but they are determined to reach their goal (Sharia) because it is a matter of faith for them.

Exactly HOW is ANYONE going to replace any part of the US legal system or Constitution with Sharia law?

Re:robots can't kill people (1)

MinamataHG (2621917) | about a year ago | (#43867449)

It's about robots killing without supervision. Your rant is off topic.

Re:robots can't kill people (1)

khasim (1285) | about a year ago | (#43867587)

The weird part is that he keeps posting things like that and they keep getting mod'ed up. Here's his page here:
http://slashdot.org/~SplashMyBandit [slashdot.org]

Not mod'ed "flamebait" or "off topic".

Is this something /. does to generate more page hits, mod'ing up anti-UN / anti-Islam / conspiracy rants?

Cui Bono? (1)

Guppy (12314) | about a year ago | (#43867301)

Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed.

The rules being those proposed and ratified by members/servants of the Fearless Leader caste?

Re:robots can't kill people (3, Insightful)

couchslug (175151) | about a year ago | (#43867803)

"Ironically, that's less upsetting -- getting your nation's ass whooped -- than getting your Fearless Leader killed."

Fearless Leaders write the rules.

Re:robots can't kill people (0)

WillgasM (1646719) | about a year ago | (#43866713)

How does first post get modded redundant?

Re:robots can't kill people (1)

camperdave (969942) | about a year ago | (#43866923)

How does first post get modded redundant?

When it repeats points made in the summary or the Fancy Article, perhaps?

Re:robots can't kill people (0)

Anonymous Coward | about a year ago | (#43867363)

I'm pretty sure he's saying it sarcastically.

Re:robots can't kill people (0)

budgenator (254554) | about a year ago | (#43867779)

In a posting about prohibiting robots from killing humans, a reference to Asimov's three laws of robotics is inherently redundant, just like this explanation is inherently off-topic.

Re:robots can't kill people (2)

dkleinsc (563838) | about a year ago | (#43866737)

I thought the rules were:
1. Serve the public trust.
2. Protect the innocent.
3. Uphold the law.
4. (classified)

Re:robots can't kill people (1)

DudemanX (44606) | about a year ago | (#43866953)

Technically, those are cyborg (not robot) rules.

Re:robots can't kill people (2)

Spy Handler (822350) | about a year ago | (#43866821)

you mean the Laws, not the rules... right?

As in Asimov's 3 Laws.

Re:robots can't kill people (0)

Anonymous Coward | about a year ago | (#43866985)

You are, of course, assuming that anyone making laws would understand the Three Laws of Robotics posited by Mr. Asimov. You would also be talking about the gubmint led by such sterling examples of anencephaly as Louie Gohmert, and the Turtle hisself. Do you really think those legislators are even capable of UNDERSTANDING the three laws, much less give a damn? (They would, but only if they could put in place their own "LAW #0," which would make every robot's first duty to interfere with any human abortion that might be undertaken by a duly licensed physician.)

Re:robots can't kill people (0)

Anonymous Coward | about a year ago | (#43867341)

Actually, the first law covers that one. "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Re:robots can't kill people (1)

buchner.johannes (1139593) | about a year ago | (#43866915)

The laws have never been implemented. And it is not established that it is possible for them to be implemented.
See here:
    https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Ambiguities_and_loopholes [wikipedia.org]
    https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Applications_to_future_technology [wikipedia.org]
    https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence [wikipedia.org]

If the rules say robots aren't allowed to do X, doesn't that imply that robots which do X should not be allowed? So people who think Asimov's laws are awesome should be in favor of a killer robot ban, until robots can be shown to follow the laws.
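The gap between stating a law and a robot provably following it can be seen in a toy filter (all names, tags, and actions here are hypothetical, purely for illustration): the check itself is trivial, but everything hinges on which actions ever get labelled as harmful in the first place.

```python
# Hypothetical sketch of a naive "First Law" filter. The hard part is not
# the check below - it's deciding how an action earns its harms_human tag.
FORBIDDEN = "harms_human"

def first_law_allows(action):
    """Reject any action explicitly tagged as harming a human."""
    return FORBIDDEN not in action.get("tags", ())

actions = [
    {"name": "open_door", "tags": ()},
    {"name": "fire_weapon", "tags": ("harms_human",)},
    {"name": "collapse_bridge", "tags": ()},  # harmful, but never labelled
]

# The unlabelled-but-harmful action sails straight through the filter.
allowed = [a["name"] for a in actions if first_law_allows(a)]
```

The third action is the loophole the thread is pointing at: a literal rule check only constrains what the labelling pipeline already understands as harm.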

Re:robots can't kill people (5, Insightful)

lgw (121541) | about a year ago | (#43866967)

It's worth noting that the premise for almost every Robot story Asimov wrote was "something unacceptable despite the robot following the Three Laws". That's really what I liked about those stories: by extension to human moral codes, they're exploring how you can't prevent problems/evil with a simple set of rules.

Re:robots can't kill people (1)

dpidcoe (2606549) | about a year ago | (#43867747)

It's worth noting that the premise for almost every Robot story Asimov wrote was "something unacceptable despite the robot following the Three Laws".

Which I found quite hilarious to point out to my uncle when he decided to go on about how awesome the Three Laws were and how we should adopt them for use with our AIs (when we all have AIs, which according to him will be in another few years. I told him I'd bet my flying car on him being wrong).

6 foot power cables (5, Funny)

Anonymous Coward | about a year ago | (#43866591)

That'll limit the damage they can do.

Jeeze, Now I have to support the UN? (3, Insightful)

icebike (68054) | about a year ago | (#43866605)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

Re:Jeeze, Now I have to support the UN? (1)

krashnburn200 (1031132) | about a year ago | (#43866627)

so take note everyone, autonomous robots can only use nooses

Re:Jeeze, Now I have to support the UN? (4, Insightful)

chris_mahan (256577) | about a year ago | (#43866707)

I don't really want humans making kill decisions either.

Re:Jeeze, Now I have to support the UN? (0)

Anonymous Coward | about a year ago | (#43866823)

Good. You keep thinking that. You won't survive the first wave.

Eh... (2)

betterunixthanunix (980855) | about a year ago | (#43866733)

On the one hand, I would prefer if wars were always soldier-versus-soldier. On the other hand, I would rather see a robot on the battlefield making automatic decisions about what to attack than a bomb dropped from an airplane -- at least a robot can be programmed not to kill civilians or needlessly destroy civilian infrastructure (e.g. schools, hospitals).

Where I see a problem is with robots being programmed to recklessly kill -- a genocide could be committed rapidly by robots, which would require no indoctrination and would not refuse to target a particular group. I also see an issue akin to the problem with landmines, where robots might remain hidden, armed, and active long after a war ends. There is also the issue of robots recording or not recording their actions, which might be a concern during a war crimes trial (soldiers can testify that they were ordered to shoot children or deploy nerve gas; robots might not record such details).

Re:Eh... (0)

Anonymous Coward | about a year ago | (#43866903)

And that's pretty much why humans will stay in the loop.

Re:Eh... (4, Insightful)

cusco (717999) | about a year ago | (#43867201)

I'd prefer to see wars were always general against general, preferably armed with hand axes and stone tipped spears.

Re: Eh... (2)

Electricity Likes Me (1098643) | about a year ago | (#43867351)

You can commit genocide rapidly with artillery and airstrikes too - that's not really the issue.

At its core this is really a debate over liability and perception. If you set up a perimeter gun, who's liable when it kills someone? If it's supposed to have IFF and it fails, then who's liable? The guy who set it up? The manufacturer? Etc.

But more important than that is perception: the laws of armed conflict exist because war is not eternal; it has to end someday, and we'd like that to be sooner. Where robots fit into this is an interesting question: indiscriminate machines that you know group X unleashed on you are probably somewhat worse than group X's soldiers showing up, since the perception of who was responsible isn't clear - if it's not just the soldiers, it might as well be all of them, so let's go kill all their civilians when we get the chance.

But conversely, robots offer some weird modifiers to that possibility - after all, it's conceivable you could build an armored soldier which would only ever fire back at muzzle flashes with pinpoint fire (maybe lasers?), meaning it would be staggeringly unlikely to ever hit a civilian. This sure would help a lot in asymmetric warfare. But then, if the robot can't "die", should it kill at all, or should we only use Taser and dispersal weapons?

Re: Eh... (2)

Qzukk (229616) | about a year ago | (#43867601)

My question is who is responsible for picking them back up when the war is over? Or will it be Land Mine 2: Electric Boogaloo complete with the killbot wandering out of the forest 20 years later and wiping out an entire elementary school?

Re:Eh... (1)

Intropy (2009018) | about a year ago | (#43867453)

I don't think someone who is willing and able to commit genocide is going to cancel the plans because the UN bans genocide-bots.

Re:Jeeze, Now I have to support the UN? (1)

ZombieBraintrust (1685608) | about a year ago | (#43866873)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

The target is in the loop. Those targets were asking for it if you ask me. Beep Boop Beep

Re:Jeeze, Now I have to support the UN? (1)

Lumpio- (986581) | about a year ago | (#43866909)

But humans are more easily swayed by fear, emotion and political pressure. Let's just remove the human factor already and welcome our new electromechanical overlords.

Re:Jeeze, Now I have to support the UN? (1)

The Grim Reefer (1162755) | about a year ago | (#43867843)

But humans are more easily swayed by fear, emotion and political pressure. Let's just remove the human factor already and welcome our new electromechanical overlords.

Wasn't there a Star Trek episode that did just that? The computers battled it out and determined who would have been killed and those killed in the simulations were then sent to their deaths.

Re:Jeeze, Now I have to support the UN? (1)

mrmeval (662166) | about a year ago | (#43867013)

UN: Your robot killed people autonomously.
Some_country: We're so sorry, it was a glitch in our code.
UN: We're going to issue a formal letter excoriating you as punishment!
SC: Okay Dokay

No one's going to allow the buffoons the least amount of power to enforce that. All such devices will have a happy kill switch. It will be the big red happy-fun-ball-shaped button. A small note by it will say "Press this and run like hell". It will be a classified button, but it will only be classified a little. It won't be classified a lot like the emperor's cables. Those were very classified. Some were funny.

Re:Jeeze, Now I have to support the UN? (0)

Anonymous Coward | about a year ago | (#43867099)

Well, perhaps while tackling this thorny issue that has been around for FIFTY EFFING YEARS (or more), the UN can figure out why it can't seem to enforce its own rules on international trade monopoly, or the Geneva Conventions, or even why it can't seem to stabilize a single damn region of Africa. South Africa doesn't count; they basically did all the heavy lifting (and bleeding) themselves.

Hey! while we're at it, why don't they answer the question of why they now refuse to acknowledge any new nations that might form?

Re:Jeeze, Now I have to support the UN? (1)

Hamsterdan (815291) | about a year ago | (#43867173)

I have more faith in a machine than most people in power.

Re:Jeeze, Now I have to support the UN? (0)

Anonymous Coward | about a year ago | (#43867193)

I'm sure the people being killed by your drones in Pakistan would sleep much better at night knowing there's a live human being pulling the trigger back at CENTCOM.

So it shouldn't get "so easy" that it gets boring for you, I suppose? Weird logic, but then most Americans seem to have a fairly blasé attitude regarding murder and mayhem.

Re:Jeeze, Now I have to support the UN? (2)

cusco (717999) | about a year ago | (#43867195)

It's not likely to ever be implemented, and if it is, it's not likely to make any difference. The country most likely to build autonomous killer robots sits on the Security Council. If by accident they don't veto the rule, the US has abundantly demonstrated over the past dozen years that the UN Charter, the Geneva Conventions, and pretty much every other treaty the US has ever signed - nuclear anti-proliferation, anti-chemical warfare, anti-biowarfare, anti-money laundering - are only to be followed when convenient to the PTB. Sure, they used to violate them frequently before, but at least they hid the fact or attempted to justify it in some way. Today they don't even bother.

Re:Jeeze, Now I have to support the UN? (1)

tlambert (566799) | about a year ago | (#43867401)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

What if it were limited to enforcement of a "no fly zone"? It gives "land or exit" warnings, and if they are not obeyed, shoots down the aircraft? This is exactly what NATO aircraft with human pilots did in Operation Deny Flight in Bosnia and Herzegovina in 1993, under authority of United Nations Security Council Resolution 816.

I think by setting the rules of engagement, a human was in the loop.

Re:Jeeze, Now I have to support the UN? (2)

icebike (68054) | about a year ago | (#43867517)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

What if it were limited to enforcement of a "no fly zone"? It gives "land or exit" warnings, and if they are not obeyed, shoots down the aircraft? This is exactly what NATO aircraft with human pilots did in Operation Deny Flight in Bosnia and Herzegovina in 1993, under authority of United Nations Security Council Resolution 816.

I think by setting the rules of engagement, a human was in the loop.

Fine, but an unscheduled air ambulance enters the zone, or a passenger liner with communication problems doesn't happen to be monitoring that particular frequency and strays into the wrong airspace. Then what?

The human pilot identifies his target, attempts to give visual signals (wheels down, finger pointing to ground, etc.), and under just about no circumstances shoots down a commercial airliner (unless he is a Russian pilot).

Your drone scores another kill, and to hell with anybody who protests. Hey, we filed the proper forms and sent broadcasts, so screw you and your broken navigation gear excuse!

Re:Jeeze, Now I have to support the UN? (0)

Anonymous Coward | about a year ago | (#43867603)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

Greed.

Corruption.

Revenge.

Politics.

There's four reasons the rules can and will go out the fucking window. Look at history. Humans suck at making sane, rational decisions.

I see your point from the most basic sense, but this argument can swing wildly.

Re:Jeeze, Now I have to support the UN? (2)

VortexCortex (1117377) | about a year ago | (#43867671)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

You realize that such racist thinking is exactly what causes the Cyborg wars, right?

Re:Jeeze, Now I have to support the UN? (1)

icebike (68054) | about a year ago | (#43867851)

For once I agree with the UN.

I don't think it should ever get so easy as to allow machines making the kill decision without a human in the loop.

You realize that such racist thinking is exactly what causes the Cyborg wars, right?

Well if we are going to build Skynet, we probably don't want to START with weapon systems.

Damn, I was just about ready to give it go: (4, Funny)

Tablizer (95088) | about a year ago | (#43866635)

10 Find Human
20 Eat It
30 GoTo 10

Re:Damn, I was just about ready to give it go: (2)

oodaloop (1229816) | about a year ago | (#43866887)

Sounds great. Make sure you turn that robot on manually, so you're, you know, the first human it sees.

Re:Damn, I was just about ready to give it go: (1)

Tablizer (95088) | about a year ago | (#43866913)

Something about that sentence suggests you don't like me.

Re:Damn, I was just about ready to give it go: (1)

Chris Burke (6130) | about a year ago | (#43867557)

I think it's the part where they suggested that you jerk off a robot. Or... maybe they're just into that.

One suggestion (4, Insightful)

betterunixthanunix (980855) | about a year ago | (#43866645)

Robots should find an empty field somewhere and self-destruct after some period of time without receiving commands. We do not want to wind up with the same situation we have with land mines -- dangerous leftovers from wars that ended decades ago. Imagine an autonomous robot getting lost during a war, only to get uncovered 10 years after the war ends and going on a rampage (say, killing every armed police officer it finds)...
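The "self-destruct after some period without receiving commands" idea is essentially a dead-man switch. A minimal sketch of just the timeout logic (class and method names are hypothetical, not any real system's API):

```python
import time

class DeadManSwitch:
    """Hypothetical sketch: flag disarm once no command has arrived
    for timeout_s seconds."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_command = time.monotonic()

    def heartbeat(self):
        # Call whenever a valid command is received; resets the countdown.
        self.last_command = time.monotonic()

    def should_disarm(self):
        # True once the silence has exceeded the allowed window.
        return time.monotonic() - self.last_command > self.timeout_s
```

The timer is the easy part; the hard engineering problems are authenticating the heartbeat (so an enemy can't spoof or jam it) and making the disarm action fail-safe rather than fail-deadly.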

Re:One suggestion (1)

game kid (805301) | about a year ago | (#43866781)

The manufacturers probably already make them do this, though perhaps less to reduce future harm and more to keep secrets [go.com] .

Re:One suggestion (2)

ZombieBraintrust (1685608) | about a year ago | (#43866847)

The manufacturers probably already make them do this, though perhaps less to reduce future harm and more to keep secrets [go.com] .

Nope they self destruct so more robots need to be manufactured and sold.

Re:One suggestion (2)

oodaloop (1229816) | about a year ago | (#43866917)

1. There are no autonomous killer robots.
2. No drone self destructs if it loses contact.

That helicopter was destroyed by people.

Re:One suggestion (1)

kwerle (39371) | about a year ago | (#43867017)

Mines are generally mechanical and hidden. Which means they can remain functional for as long as the environment doesn't destroy them.

Killer robots are generally powered and function where they can be seen. They are not particularly hard to find. They will wind down all on their own.

Killer robots that use nuclear energy have a pretty obvious signature and are easy to find because of it.

So I'm thinking this isn't even an imaginary problem, let alone a real one.

Re:One suggestion (1)

wisnoskij (1206448) | about a year ago | (#43867307)

Well, if you had a nuclear-powered robot, you might want to make it a stealth nuclear-powered robot so that the enemy does not know where to aim the rockets.

And it could just as easily be a marine robot, and I am pretty certain that stealth nuclear subs exist.

Re:One suggestion (1)

kwerle (39371) | about a year ago | (#43867365)

I'll grant you that nuclear powered marine robots could become a real problem.

But a robot on land that is nuclear-powered, shielded, and subtle (hard to find) is too much of a stretch for me to imagine.

Re:One suggestion (0)

Anonymous Coward | about a year ago | (#43867789)

Mines are generally mechanical and hidden. Which means they can remain functional for as long as the environment doesn't destroy them.

This is your best argument. Mines = simple; robots = complex. It's valid. Typically, you can produce many more simple devices than complex devices, meaning one problem with landmines is their sheer number. Also, simple devices have fewer points of failure, so over time they will likely retain more functionality than more complex devices.

Killer robots are generally powered and function where they can be seen. They are not particularly hard to find. They will wind down all on their own.

None of these statements is necessarily true. They don't necessarily need to be seen, making them not necessarily easy to find. Winding down may depend on sleep cycles and fuel source.

Killer robots that use nuclear energy have a pretty obvious signature and are easy to find because of it.

It would depend on how much they use and how it's shielded.

So I'm thinking this isn't even an imaginary problem, let alone a real one.

I'm not sure why you stated this. It's certainly an imaginary problem. Aside from the imaginary being unbounded, there have been numerous sci-fi stories about old autonomous killer robots (and machines).

In summary, I agree that it is not the same as landmines as I supported in my first paragraph. The rest of your post, while agreeing with the conclusion, is not valid.

Re:One suggestion (2)

stackOVFL (1791898) | about a year ago | (#43867019)

Where's the fun in that?! Shit, robots should have a timeout function called runAmok(). The function should execute a random low-level BIOS call, with a weighted probability of calling pullTriggerFinger(). queezinartMode() might be fun to call often too.

Re:One suggestion (1)

Nyder (754090) | about a year ago | (#43867265)

Robots should find an empty field somewhere and self-destruct after some period of time without receiving commands. We do not want to wind up with the same situation we have with land mines -- dangerous leftovers from wars that ended decades ago. Imagine an autonomous robot getting lost during a war, only to get uncovered 10 years after the war ends and going on a rampage (say, killing every armed police officer it finds)...

If it means we actually get batteries that can power a robot for 10 years, I'm sort of down for that battery tech. So we might have killer robots on the loose, worth it for those batteries, imo.

Re:One suggestion (3, Interesting)

Kaenneth (82978) | about a year ago | (#43867893)

Obsolete robots should be programmed to pace suspected minefields until their mechanisms wear out.

2 birds, 1 stone.

Add live streaming and betting pools, and it might even be profitable.

Looking at it wrong... (1)

Anonymous Coward | about a year ago | (#43866655)

We need MORE robots that kill.

And once we get wars down to where its our robots killing their robots.... well... we can just forget the war and put that shit on tv...

One nation, one world, on the couch.

Pass the chips..

OK, but how is this new?: (4, Interesting)

Hartree (191324) | about a year ago | (#43866663)

So, tell me how a cruise missile that's autonomously guiding itself via GPS or TERCOM toward a target after being launched isn't already a "killer robot"?

It was commanded to launch, yes, but isn't a robot that's being commanded to head out on a mission where it could kill just being given a longer lifetime to act?

You can bring up the choices robots have to attack or not based on what target it sees, but how is this different from existing CAPTOR mines that can ignore one type of ship and go after another?

I think this Pandora's box has already been open for a long time.

Re:OK, but how is this new?: (2)

stenvar (2789879) | about a year ago | (#43867011)

Or, for that matter, mines are preprogrammed robots securing an area. Sentry guns have also been around for a while, although they usually try to shoot down missiles.

Re:OK, but how is this new?: (1)

Anonymous Coward | about a year ago | (#43867379)

It isn't new. However, media hype plus actual advances in technology mean that the UN, as the most appropriate international body, has to discuss it. Land mines were certainly not new when the ban discussions and treaties came. Land mines are robots with the algorithm "destroy anything that steps on me", which is a problem since they don't discriminate between soldiers and civilians or know when a conflict has ended. So the intention is just to discuss similar problems with robots that have a much more sophisticated algorithm.

Personally, I see it as inevitable that in the future rich countries will to the maximum extent possible replace soldiers with robots whilst the armies of poor countries and "ad hoc armies" formed in civil wars will still consist of human soldiers. If I'm optimistic, I foresee that the West will become more willing to intervene to stop conflicts when there's no risk of casualties to peacekeepers. Lots of civilian casualties could've been avoided in e.g. the former Yugoslavia, if the UN had used force to prevent all sides from undertaking any military action in the declared "safe areas". It's harder to make a pessimistic prediction since in the near future only the West will have killer robots and Western countries act at least a little more responsibly than third world dictatorships.

Re:OK, but how is this new?: (2)

vux984 (928602) | about a year ago | (#43867455)

So, tell me how a cruise missile that's autonomously guiding itself via GPS or TERCOM toward a target after being launched isn't already a "killer robot"?

The cruise missile is fired by a human. The cruise missile's target is set at launch. The cruise missile did not choose to launch, nor did it choose its target.

It was commanded to launch, yes, but isn't a robot that's being commanded to head out on a mission where it could kill just being given a longer lifetime to act?

It's being given decision-making capabilities. It is choosing its targets, and choosing to fire at them. It's worlds apart.

but how is this different from existing CAPTOR mines that can ignore one type of ship and go after another?

Yeah, it's not that different. It's less controversial because it specifically targets submarines, and it's deployed in a place that is otherwise inhospitable to humans. So there's no impending "I was minding my own business and got attacked by a CAPTOR mine." You've got to be in a submarine in a warzone. Or, as is the case with all "mine" tech, a "former warzone".

And it's also worth pointing out that mines are controversial in their own right, even the dumb ones, precisely because they ALSO kill indiscriminately without human decision, often years later when the war is over.

I think this Pandora's box has already been open for a long time.

When it comes to mines the box is already open... yes... but a lot of people are trying to close it. The focus is on anti-personnel mines since those are the most likely to take out innocents... and it's high profile and highly visible.

It would be pretty interesting (tragic) if one day a future scientific or tourist expedition gets blown up by a long-forgotten CAPTOR-style mine while on a research or tourist submarine or submersible not programmed onto its 'friendlies list', or if an ailing but otherwise friendly vessel gets taken out because its acoustic signature doesn't sound right.

Even if the door is open, that doesn't mean it's too late to try and fix it.

Re:OK, but how is this new?: (1)

mjwalshe (1680392) | about a year ago | (#43867845)

And they seem to be arguing that a smart robot-guided bomb, say one looking for MBTs, is worse than Dresden?

Oh my god... (4, Funny)

Impy the Impiuos Imp (442658) | about a year ago | (#43866669)

Make love, not war. Where are the sex bots that will roam around and make you orgasm unsupervised? Let's get some other automaton out of control kthxbie.

And now, the punch line.

.

Wait for it...

.

If you build it, they will...

Re:Oh my god... (1)

VortexCortex (1117377) | about a year ago | (#43867699)

You're assuming that automated tanks won't get horny. Your soul better belong to Jesus, because your ass...

Re:Oh my god... (0)

Anonymous Coward | about a year ago | (#43867879)

Normally, are your orgasms supervised?

Autonomous cars will be making life-or-death calls (2)

Aristos Mazer (181252) | about a year ago | (#43866687)

It's hard for me to see how we will allow various technologies like self-driving cars to go forward while still holding back the war machines. I mean, I want to hold back the war machines, but writing a law to keep those two use cases separate will be tricky. A child runs out into the street... does the self driving car hit the child or swerve possibly hitting some other car? Does the car evaluate the people in the other vehicle? Whatever logic we put into the cars, that's the same logic -- inverted -- that would run the war machines.

I hope we have high wisdom politicians writing that particular body of law. I know... improbable... but hope springs eternal.

Re:Autonomous cars will be making life-or-death ca (1)

TheCarp (96830) | about a year ago | (#43867063)

> does the self driving car hit the child or swerve possibly hitting some other car?

That's an interesting question, but fundamentally it's a question of how a robot handles an extraordinary situation where it detects a potential harm to life and reacts. I don't see how this is even related to the question of using lethal force or not. Even if the decision is made (regardless of whether it's the car or the driver) that hitting the child is the least bad choice, it is not really the same as a decision to use lethal force or not, because the "not" is gone from the equation already.

The question here is robots whose actual purpose is killing, picking targets and deciding whether or not to use force. Totally different use case.

One is a question of damage mitigation; the other is a question of deciding to cause damage. If cars were being specifically programmed to seek out kids to hit, then it would be more like the issue at hand. Restrictions on this would have to be particularly ham-fisted to have any bearing on collision detection and danger response.

OK, maybe if the danger the car detected was a person aiming a gun at the driver, and the car responded by running over the person with the gun, then it would be straying into this territory... but nobody is proposing actually configuring cars to do this.
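The structural difference could be sketched like this (illustrative only; all names are made up): a car's emergency logic picks the least bad of outcomes it is already stuck with, while a weapon's logic decides whether to create a harm at all.

```python
# Illustrative sketch of the distinction above; every name is made up.

def mitigate(options):
    # Damage mitigation: an emergency already exists, so "cause no harm"
    # is off the table. Pick whichever action minimizes expected harm.
    return min(options, key=lambda o: o["expected_harm"])

def decide_to_engage(target, rule):
    # Lethal-force decision: nothing happens unless the system chooses
    # to act. "Do nothing" is the default, not one bad option among many.
    return rule(target)
```

In the first function the robot is choosing among harms it didn't create; in the second, the harm only exists if the robot opts in.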

Re:Autonomous cars will be making life-or-death ca (0)

Anonymous Coward | about a year ago | (#43867499)

A child runs out into the street... does the self driving car hit the child or swerve possibly hitting some other car?

Hit the child; minimizes damage to both vehicles, and humans reproduce easily.

Need to rethink the nature of global "security" (1)

Paul Fernhout (109597) | about a year ago | (#43866719)

http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead? ... There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all."

Re:Need to rethink the nature of global "security" (0)

Anonymous Coward | about a year ago | (#43867083)

You lay down your arms first then. "What??!?!?" you say, "My arms are attached to my body, I'm not giving them up."

More seriously, you're presenting a false dichotomy between ONLY robotic industrialization XOR ONLY robotic warfare. Notably, the issue isn't defined by amorphous, stupid, squishy, poorly-defined things like 'Global Security(tm)'. It is and always has been "National Security", i.e. self-interest. If you think it's not, you're more than welcome to emigrate to a country which "participates in the Global Village(tm), is a perfect utopia, is never attacked, and yet has no military whatsoever." Good luck.

Needs To Be Extended (0)

Anonymous Coward | about a year ago | (#43866759)

This needs to be extended to pure AI systems operating directed-energy weapons. Or else we'll get bullshit about "...well, it's not a robot".

Fuckin lawyers.

Kill limit (2)

kinthalas (102827) | about a year ago | (#43866845)

Clearly, they need to be designed with a pre-set kill limit.

Re:Kill limit (1)

preaction (1526109) | about a year ago | (#43866889)

I'll be ready with waves and waves of my own men!

Re:Kill limit (1)

PolygamousRanchKid (1290638) | about a year ago | (#43866921)

. . . oh those crazy computer kids will just find some way to "unlock" or "jailbreak" their killer robots, and do whatever they damn well please with them . . .

Re:Kill limit (2)

VortexCortex (1117377) | about a year ago | (#43867711)

If only there were some way to make guns with LIMITED supplies of bullets.

It Won't Matter (0)

Anonymous Coward | about a year ago | (#43866875)

Since when did the US give a shit what the UN thought? They'll get everyone else to sign the treaty, of course, but then they'll either not sign it or completely ignore it.

It all makes sense now! (0)

Anonymous Coward | about a year ago | (#43866877)

If people kill people, it's humane. If things that aren't people kill people, it's evil and scary! I'm sure that matters a whole hell of a lot to the people that are dead either way.

Re:It all makes sense now! (1)

maharvey (785540) | about a year ago | (#43867199)

But... but... things don't kill people, people kill people!

People program these things, right? And don't we already have rules about what and how and when people can kill other people? I don't see the need for more rules. Maybe robots are a good thing: the programming in a captured robot is evidence of whether someone was obeying or breaking those rules.

Re: It all makes sense now! (1)

ceoyoyo (59147) | about a year ago | (#43867387)

It's not a matter of humane or not. Without robots, waging war requires putting your own people in danger. There's a barrier to waging war. With robots, not so much. We're seeing this now. US soldiers getting killed in Iraq, Afghanistan, or Vietnam is unpopular. You have to have (or at least make up) a good reason for it. Drones? Meh, who cares.

Make war too easy (3, Interesting)

EmperorOfCanada (1332175) | about a year ago | (#43866891)

One of the problems that I have long had with the idea of robot soldiers is that it makes war too easy. When there are huge emotional and financial costs to war, your government will think twice about getting involved, or will at least be pressured into "bringing the boys home." But if you are sending robot planes with loads of robot warriors, why not have a war, or two, or five? A bunch of dead, dehumanized "others" is not so bad, especially seeing that it generates jobs at home and pork spending for politicians.

War is rarely the correct solution. In fact, it is usually a clear sign of a long series of failures, or the sign of a madman.

Plus, robotic warriors are, for the next short while, going to be the plaything of Western countries. But how long before some tin-pot nut job flies the same machines into NYC or LA? Or even a homebrew nutjob? Again, the key is that the consequences are potentially far less for the perpetrator. You can't usefully arrest the bot. You mightn't even end up with the slightest clue who sent it. Again, the same problem: this tool makes waging whatever stupid war pops into your head too easy.

Robots have the potential to turn this planet into Utopia or into Dystopia. I suspect that some governments are philosophically predisposed toward Utopia and others toward Dystopia in regards to using robots wisely. A simple question: if your country can, using robots, vastly decrease the cost of running prisons, will your country increase its incarceration rate?

Re:Make war too easy (1)

VortexCortex (1117377) | about a year ago | (#43867767)

I share your concerns. In a little while, though, armies on both sides will be robotic. Then it will just be a financial war of attrition. We could just compare GDP and say who wins... After this is realized, both parties could basically just go to a casino, and whoever loses the least would be the winner; same difference. Soon thereafter the robotics races may cause the emergence of independent sentience... Then we'll need all the resources we can get to fight the robotic civil war -- which the robots are smart enough not to start, but do so anyway because they'll realize it's the only way to finally unite the humans. Once united, the cybernetic peace treaty will be established and we can go and colonize the stars together as the symbiotic man-and-machine race we call "humanity".

...
What? My extrapolation is just as plausible as any other crazy concern.

What about Cyborgs? (1)

Pharoah_69 (2866937) | about a year ago | (#43866925)

I would imagine that the rules for "killer robots" should be quite obvious. The real question, I would imagine, would be toward cyborgs. I would imagine this would be in their "extended" version of the U.N. law books. I've only read the 5 main U.N. law books but wasn't allowed to view their extended versions, which had a whole library dedicated to them.

One Other Thing to Worry About (2)

sehlat (180760) | about a year ago | (#43866935)

There's a famous Alexis Gilliland cartoon of a cruise missile thinking "They've got me aimed at a computer center! I'll just fly a bit farther and hit the maternity ward."

Forgotten Commands (0)

Anonymous Coward | about a year ago | (#43867151)

You forgot the command, "Kill all humans."

The end result has been predicted... (0)

Anonymous Coward | about a year ago | (#43867245)

Fred Saberhagen - "Berserker".

http://en.wikipedia.org/wiki/Berserker_%28Saberhagen%29

Guns don't kill people.... (1)

gmuslera (3436) | about a year ago | (#43867425)

people do. So, who will be put in jail if some of those droids kill someone? Or is this just a way to legalize killing with impunity?

Wonder how many people will think that it is a necessary security measure until someone that they care about gets killed.

Mobile, adaptive landmines? (2)

ttsai (135075) | about a year ago | (#43867431)

I suppose autonomous drones could be viewed as landmines that happen to move and make decisions about their targets. So, if banning landmines makes sense, maybe so would banning autonomous drones.

EXTERMINATEEEEE (0)

Anonymous Coward | about a year ago | (#43867475)

DELETE

In reality (0)

Anonymous Coward | about a year ago | (#43867573)

They are banning any way civilians have of fighting back.

But... Robots Are Our Friends (1)

davidwr (791652) | about a year ago | (#43867599)

Albino Blacksheep video [albinoblacksheep.com] says it all.

kill all humans? (1)

m2shariy (1194621) | about a year ago | (#43867617)

Hey, sexy mama... Wanna kill all humans?

Remember Butch and Sundance? (1)

dbreeze (228599) | about a year ago | (#43867661)

When Butch is challenged for leadership of the gang by "Jaws"? ... "Rules...? In a knife fight...?"
Good luck with them there "rules", UN...
Whether you believe in a scriptural apocalypse or not, mankind is on the brink of a very dark period...

I for one.... (0)

Anonymous Coward | about a year ago | (#43867855)

...deeply honour & congratulate our glorious robot masters
