
UN to Debate Use of Fully Autonomous Weapons, New Report Released

samzenpus posted about 5 months ago | from the with-a-push-of-a-button dept.


concertina226 (2447056) writes "The United Nations will debate the use of killer robots for the first time at the UN Convention on Certain Conventional Weapons (CCW) this week, but human rights activists are calling for the robots to be banned. Human Rights Watch and Harvard Law School's International Human Rights Clinic have published a new report entitled 'Shaking the Foundations: The Human Rights Implications of Killer Robots', which calls for killer robots to be banned to prevent a potential arms race between countries. Killer robots, or fully autonomous weapons, do not yet exist but would be the next step after remote-controlled armed drones used by the US military today. Fully autonomous weapons would have the ability to identify and fire on targets without human intervention, putting compliance with international humanitarian laws in doubt. Among the problems with killer robots highlighted in the report is the risk of criminal liability for a military officer, programmer or weapons manufacturer who created or used an autonomous weapon with intent to kill. If a robot killed arbitrarily, it would be difficult to hold anyone accountable."


180 comments


Thou shalt not kill. (1)

Anonymous Coward | about 5 months ago | (#46978269)

Like humans killing by remote is better. Michael Hayden: "We kill people based on metadata".

Re:Thou shalt not kill. (1)

supertrooper (2073218) | about 5 months ago | (#46978715)

I do agree with what you're saying, but at the same time I'm thinking that maybe, just maybe, someone killing an innocent person based on metadata can be held responsible. This also creates a much worse system for exploitation.

Re:Thou shalt not kill. (2)

ShieldW0lf (601553) | about 5 months ago | (#46979399)

I like killer robots. Know why? Cause I know how to make them, muthafuckers! Mwhahahahahahha!!!

"Do not yet exist"? (4, Insightful)

srussia (884021) | about 5 months ago | (#46978289)

Don't mines qualify as "autonomous weapons"?

Re:"Do not yet exist"? (5, Informative)

kruach aum (1934852) | about 5 months ago | (#46978325)

Not according to the definition used in the summary, which specifies that fully autonomous weapons have the ability to identify targets. Mines fire indiscriminately whenever they're triggered, whether they're stepped on, something falls on them, they fall on something else, whatever.

Re:"Do not yet exist"? (1)

barlevg (2111272) | about 5 months ago | (#46978351)

What if you attached an IFF [wikipedia.org] system to the mine?

Re:"Do not yet exist"? (1)

kruach aum (1934852) | about 5 months ago | (#46978591)

Then they would have the ability to discriminate, but I would still hesitate to call them robots, because they don't exhibit agency. They passively trigger, they don't actively kill the way RoboCop does.

Re:"Do not yet exist"? (1)

Anonymous Coward | about 5 months ago | (#46978713)

Then they would have the ability to discriminate, but I would still hesitate to call them robots, because they don't exhibit agency. They passively trigger, they don't actively kill the way RoboCop does.

I have no idea what use of the word "agency" you are attempting here.

Anyway, it appears that many people here have never heard of CAPTOR mines [fas.org] .

Re:"Do not yet exist"? (1)

Anonymous Coward | about 5 months ago | (#46978365)

That's a quantitative difference, not a qualitative difference. Mines are just not very good at identifying their targets (they do it mostly based on weight). Early autonomous killer robots will also be much better at killing than at identifying: If it moves and doesn't have an adequate friend-or-foe transponder, shoot at it.

Re:"Do not yet exist"? (1)

Drethon (1445051) | about 5 months ago | (#46978373)

They identify targets just fine: if it moves == target.

Not how they define it, I'm sure, but whatever.

Re:"Do not yet exist"? (1)

kruach aum (1934852) | about 5 months ago | (#46978655)

That's not identification. Identification entails a differentiated response to a stimulus (in the case of mines, the pressure plate being triggered). When a mine is triggered, it can only explode, it cannot differentially not explode.
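The distinction drawn here (a trigger that admits only one response versus a system that can differentially withhold) can be sketched in a few lines of Python. This is purely illustrative; the function names are hypothetical and nothing here models real hardware.

```python
# Illustrative sketch only; names are hypothetical.

def pressure_plate(triggered: bool) -> str:
    # A mine's trigger admits exactly one response: when the
    # stimulus arrives, it can only fire.
    return "detonate" if triggered else "idle"

def iff_gate(triggered: bool, transponder_ok: bool) -> str:
    # An identifying system can differentially NOT respond to the
    # same stimulus -- the "differentiated response" in question.
    if not triggered:
        return "idle"
    return "hold" if transponder_ok else "engage"

assert pressure_plate(True) == "detonate"
assert iff_gate(True, transponder_ok=True) == "hold"
```

The point is that `pressure_plate` maps its stimulus to a single outcome, while `iff_gate` maps the same stimulus to different outcomes; that mapping, not the trigger hardware, is what "identification" adds.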

Re:"Do not yet exist"? (1)

Anonymous Coward | about 5 months ago | (#46978457)

Mines can be arbitrarily intelligent. Simple mines distinguish persons from vehicles. More complex mines can have whatever you want as the trigger algorithm.

Re:"Do not yet exist"? (4, Informative)

ShanghaiBill (739463) | about 5 months ago | (#46978507)

Don't mines qualify as "autonomous weapons"?

Most countries have already agreed to ban landmines, by signing the Ottawa Treaty [wikipedia.org] .

Re:"Do not yet exist"? (0)

Anonymous Coward | about 5 months ago | (#46978527)

I was thinking the same thing.

Re:"Do not yet exist"? (1)

Sockatume (732728) | about 5 months ago | (#46978593)

They have some autonomy, but no more than barbed wire or a cloud of mustard gas, and we have legal frameworks for those sorts of autonomous weapons. The autonomy at issue lately is in target acquisition. Ironically, the sort of thing that previous autonomous weapons lacked - the ability to distinguish a target from a non-target - is exactly the thing that raises ethical questions.

Re:"Do not yet exist"? (0)

Anonymous Coward | about 5 months ago | (#46978727)

They have some autonomy, but no more than barbed wire or a cloud of mustard gas, and we have legal frameworks for those sorts of autonomous weapons. The autonomy at issue lately is in target acquisition. Ironically, the sort of thing that previous autonomous weapons lacked - the ability to distinguish a target from a non-target - is exactly the thing that raises ethical questions.

Naval mines distinguish targets from non-targets. CAPTOR mines [fas.org] come to mind... they've only existed for 35 years...!

Re:"Do not yet exist"? (4, Informative)

Entropius (188861) | about 5 months ago | (#46978955)

There are mines that have a lot more autonomy than that: there are anti-submarine mines called CAPTOR mines that contain a torpedo and a sonar unit; the sonar unit will launch the torpedo at submarines, but not at ships.

Re:"Do not yet exist"? (1)

Anonymous Coward | about 5 months ago | (#46978783)

What about anti-aircraft or missile defense systems? Once armed, those are fully autonomous, and have been killing humans for decades.

Ban them all you want (4, Insightful)

EmagGeek (574360) | about 5 months ago | (#46978291)

Bans will not only fail to prevent these weapons from being developed, probably even by a technologically advanced state that is a signatory to the treaty, but will also fail to prevent them from being used by rogue or puppet states who don't care about bans, or who use them at the behest of a signatory state that is just using them to do its dirty work.

Arms race (0)

Anonymous Coward | about 5 months ago | (#46978377)

You're right. Clearly having any regulation of war whatsoever is foolish because "rogue" states will not abide by them anyway.

Now explain the holocaust, please.

Re:Arms race (4, Interesting)

CreatureComfort (741652) | about 5 months ago | (#46978617)

Well, you make his point eloquently.

The holocaust was conducted clearly by an advanced state, signatory to many treaties and international obligations and "laws", none of which served to make any difference whatsoever, when that state decided they didn't care what the rest of the world thought.

But why stop there? Rwanda, Stalin's purges, China's Cultural Revolution, Kashmir, Iraq-Iran (until the U.S. got actively involved), all the U.S. wars against brown people, etc., etc., etc. When has international law, regulation, or even opinion ever changed the conduct of an aggressor nation once it decided to go to war? The reason nukes haven't been used since Nagasaki is only that everyone who has them is afraid that using them in aggression would trigger a much greater escalation; it has nothing to do with any treaties, laws, or world opinion.

Re:Arms race (0)

Anonymous Coward | about 5 months ago | (#46978633)

Mr. President, we must not allow a Holocaust gap!

explaining the holocaust (1)

Anonymous Coward | about 5 months ago | (#46979175)

Now explain the holocaust, please

That was the planned crime-against-humanity that was considered by the Nazis, but then right before they did it, one of Hitler's lawyers said, "Hey wait. According to this, the proposed action will be against the law." Since Hitler was top dog, he realized that he would have to prosecute himself, and that opened up such terrifying vistas of self-reference and paradox, that he was totally horrified. So he hastily abandoned the plan. Unfortunately, the memo cancelling the operation had typos in the return address ("hitlOR" at "naTzi" dot gov), and the receiving servers (upon checking SPF and DKIM (I can't remember which)) thought it was spam, so it got filtered, so lots of innocent people got killed anyway. But that was a failure of anti-spam tech, not a failure to try to abide by law.

Re:Ban them all you want (4, Insightful)

MozeeToby (1163751) | about 5 months ago | (#46978405)

What will happen is that the defense contractors will develop autonomous less-lethal robots that can scout, identify targets, and engage with less lethal weapons. But you know... for flexibility purposes... we'll just make sure the weapon hardpoints are as modular as possible. Hey! I know! We'll make them be adaptable to any standard infantry fir... errrrr, less-lethal weapon.

Re:Ban them all you want (1)

Anonymous Coward | about 5 months ago | (#46978735)

we'll just make sure the weapon hardpoints are as modular as possible.

Omni slots? Kick ass.

I'll take a rack of LRM20s and a Clan LBX-20.

Re:Ban them all you want (1)

misexistentialist (1537887) | about 5 months ago | (#46979071)

Which will be bad enough in itself since the police will get them, and you'll be tased for jaywalking

Re:Ban them all you want (0)

Anonymous Coward | about 5 months ago | (#46978425)

Or by "secret" or "black" ops in signatories to the treaty.

Re:Ban them all you want (3, Insightful)

buchner.johannes (1139593) | about 5 months ago | (#46978465)

Bans will not only fail to prevent these weapons from being developed, probably even by a technologically advanced state that is a signatory to the treaty, but will also fail to prevent them from being used by rogue or puppet states who don't care about bans, or who use them at the behest of a signatory state that is just using them to do its dirty work.

Any state today is dependent on trade from the international community. If the US and the EU (or any other large fraction of the international community) decide not to trade with a country, and not grant bank transfers to that country, that has a huge effect on their economy. The countries able to withstand this are countable on one hand. Of course, trade sanctions are not a plan, but the lack of a plan.

It is always better though to help the particular country address their actual problems rather than supporting their approach. For example, perceived threats can be thwarted by establishing a neutral buffer zone controlled by a third party.

So no, contrary to the common opinion on Slashdot, I think collectively agreeing to not use a certain, dangerous technology can be useful, and is also enforceable.

Re:Ban them all you want (1)

thedonger (1317951) | about 5 months ago | (#46978605)

So no, contrary to the common opinion on Slashdot, I think collectively agreeing to not use a certain, dangerous technology can be useful, and is also enforceable.

Last I checked, the Slashdot community was more likely to be on the side of supporting a ban. Regardless, how enforceable is such a ban? We can look for signs that a country is developing nuclear capability because of the unique nature of the technology involved. Autonomous lethal robots, however, are made up of relatively benign, unsuspicious parts, so we would have to rely on direct observation to determine whether a country were developing such technology.

Re:Ban them all you want (1)

CreatureComfort (741652) | about 5 months ago | (#46978875)

Because it's working so well against North Korea, Iran, Cuba, Syria, Russia...

Re:Ban them all you want (1)

bluefoxlucid (723572) | about 5 months ago | (#46978481)

That's okay. I'll just defeat the killbots by sending wave after wave of my own men into battle. Killbots have a preset kill maximum before they shut down.

Re:Ban them all you want (0)

Anonymous Coward | about 5 months ago | (#46978521)

Yes, so let's remove the ban on chemical and biological warfare too.

Re:Ban them all you want (2)

thedonger (1317951) | about 5 months ago | (#46978571)

Yes, so let's remove the ban on chemical and biological warfare too.

Right, because that stops people from using them. Oh wait, no it doesn't. And "Gun Free Zone" stops people from bringing guns into them. Nope.

Re:Ban them all you want (0)

Anonymous Coward | about 5 months ago | (#46978587)

You think so? It's not like America will sign up for this or, if it does, actually honour/implement it. Your nation is founded on not honouring treaty obligations.

Re:Ban them all you want (1)

Sockatume (732728) | about 5 months ago | (#46978609)

That doesn't eliminate the moral imperative for those nations that actually do want to act humanely.

"There is a problem with the law, so ban scientifi (2)

kruach aum (1934852) | about 5 months ago | (#46978301)

c development!"

When I read things like this I wonder how these people even function in daily life without eating pebbles and glue sandwiches. The fact that the law is not currently equipped to assign guilt in the case of the malfunction of an autonomous robot is not a good enough reason to stop scientific progress.

First, robots can't kill arbitrarily; they can only kill whom their programming specifies they should kill, even if that programming encounters a bug or operates in a manner unforeseen by its programmers. "Arbitrarily" would mean without reference to a standard, randomly, like an earthquake or lightning.

Second, banning killer robots will not prevent an arms race. It will simply hamper the combat effectiveness of the side that holds itself to the treaty.

Third, it would be much more effective if the money spent on ethicists worrying about how scary science is went to the scientists instead, so that it could fund development and research of the very thing the ethicists are so afraid of, making it better understood and less scary.

Re:"There is a problem with the law, so ban scient (1)

Anonymous Coward | about 5 months ago | (#46978381)

The fact that the law is not currently equipped to assign guilt in the case of the malfunction of an autonomous robot

But it is. In the case where an autonomous industrial robot kills someone, it is possible to assign guilt. In those cases the paper trail is followed until a step can be found where the safety standards for industrial robots were violated. The blame is put on the developer who didn't take the necessary safety precautions. If no such flaw can be found, the guilt is put on the person who disconnected or sidestepped the safety-critical components. (Most of the time, but not always, this is the person who got hurt.)

Those standards naturally don't apply to weapons that were designed to kill people.

Re:"There is a problem with the law, so ban scient (0)

Anonymous Coward | about 5 months ago | (#46978449)

c development!"

When I read things like this I wonder how these people even function in daily life without eating pebbles and glue sandwiches. The fact that the law is not currently equipped to assign guilt in the case of the malfunction of an autonomous robot is not a good enough incentive to stop scientific progress.

Well, I'm glad there's at least one of us here who thinks designing something for a singular purpose (killing) is somehow scientific "progress". Who would want to miss out on the next atom bomb, or anthrax? After all, we've had so many things to be proud of in this area of "advancement" as a race.

First of all, robots can't kill arbitrarily, they can only kill who their programming specifies they should kill, even if that programming encounters a bug or operates in a manner unforeseen by its programmers.

Speaking of arbitrary, ever heard of AI? Yeah, I guarantee those who are building these fucking things have.

And I love how you dismiss a "bug" here as if lives didn't just end as a result of said bug. These are robots designed for a single purpose. When they fuck up, it's kind of huge. I'm fearful stepping into a car with seatbelts and air bags under automated control for fear of bugs, and you dismiss bugs in a killing machine as "oh well, Patch Tuesday is soon."

I'm not sure what is scarier, the research itself, or mentalities like yours.

Re:"There is a problem with the law, so ban scient (1)

kruach aum (1934852) | about 5 months ago | (#46978621)

The scientific progress lies in the identification of targets for energy transfer. That they can or will be used to kill people is completely irrelevant, because pretty much every scientific advancement of the last hundred years can be used to kill people, whether that means flying a plane into a skyscraper or dying from chemotherapy and radiation before the cancer kills you.

Re:"There is a problem with the law, so ban scient (1)

thedonger (1317951) | about 5 months ago | (#46978693)

The scientific progress lies in the identification of targets for energy transfer. That they can or will be used to kill people is completely irrelevant, because pretty much every scientific advancement of the last hundred years can be used to kill people, whether that means flying a plane into a skyscraper or dying from chemotherapy and radiation before the cancer kills you.

Don't forget, "or taking Cialis and dying from (complications due to) a 6 hour erection."

Re:"There is a problem with the law, so ban scient (1)

Sockatume (732728) | about 5 months ago | (#46978625)

AFAIK any mandate wouldn't restrict the development of these weapons, just the deployment. It's not like the complete irrelevance of a technology in a battlefield setting ever stopped DARPA before.

Re:"There is a problem with the law, so ban scient (1)

Electricity Likes Me (1098643) | about 5 months ago | (#46978725)

Uh...what?

I'm pretty sure everything DARPA works on has huge battlefield relevance. It's not like cold-fusion powered tanks wouldn't be a huge game-changer.

Re:"There is a problem with the law, so ban scient (2)

nine-times (778537) | about 5 months ago | (#46978885)

Beyond that, I see another problem with the idea of banning development of autonomous weapons: most of the technology involved would probably be developed anyway because it would be widely applicable.

Think about it. If you were going to make a killer robotic soldier, what technology would be hard to develop? It's difficult to make a robot that can easily traverse diverse terrain, but we'll work on that for other reasons. Making an AI that can accurately identify people by facial features, clothing, and speech patterns would be hard. We'd also do that for other reasons. There are a million reasons to develop an intelligent free-roaming robot that can identify people and interact with them.

Once you have a robot that can do those kinds of things, turning the "interaction" into "fire a weapon at that person" is easy. It's not hard for a computer to aim once it has its target. It's not hard for a computer to trigger the weapon itself.
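As a toy illustration of the claim that aiming is the easy part (hypothetical function, collapsed to 2D): once a system has a target's coordinates, pointing at them is a one-line trig computation.

```python
import math

def bearing_to(own_x: float, own_y: float, tgt_x: float, tgt_y: float) -> float:
    """Bearing in degrees from own position to a target position.

    Once target selection has produced coordinates, computing where
    to point is trivial; all the hard problems live upstream.
    """
    return math.degrees(math.atan2(tgt_y - own_y, tgt_x - own_x))

# A target straight up the +y axis lies at 90 degrees in this convention.
assert math.isclose(bearing_to(0, 0, 0, 10), 90.0)
```

Everything difficult in the scenario above happens before this function is ever called, in deciding which coordinates count as a target.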

Re:"There is a problem with the law, so ban scient (1)

FrozenToothbrush (3466403) | about 5 months ago | (#46979307)

Human beings need to understand themselves better; the machines will only be as good as their creators. We have no right and absolutely no need to build killer robots at this time. This further disconnects people from the reality of life and death.

Re:"There is a problem with the law, so ban scient (1)

pr0fessor (1940368) | about 5 months ago | (#46979413)

I would rather have an autonomous lawn mower for less than $1k than cylons.

3 laws deleted (1)

BlazingATrail (3112385) | about 5 months ago | (#46978303)

Since it has to kill enemies, they have already deleted the 3 laws. The robot is firewalled against any outside control and runs on nuclear power, requiring no recharging. What could possibly go wrong?

Re:3 laws deleted (1)

kruach aum (1934852) | about 5 months ago | (#46978353)

The three laws of robotics are something Asimov thought up once to present a reasonable system of non-humanoid governance in, and this is key, a fictional world. We shouldn't hold the world we actually live in to standards developed for something meant to entertain, and thus referring to Asimov's three laws of robotics as if they're some authoritative source of "should-be" is unproductive in any discussion involving how we should interact with robots in the real world.

Re:3 laws deleted (4, Insightful)

RobinH (124750) | about 5 months ago | (#46978361)

Stop with the "3 laws" nonsense. Asimov's "laws" were never intended as actual laws, they were a plot device, and they're certainly not something you "delete" because they were never there in the first place. We already have regulations about machine safety (I work with them every day). The laws govern the control of hazardous energy in a system, with various guarding and interlocks being required to protect humans from injury when they interact with the system, and design constraints determined by how likely certain safety critical component failure is, and redundancy, etc.

Nobody building a killer robot is going to be worrying about any laws, pretend or otherwise. They're worried about how many units they can sell.

Re:3 laws deleted (1)

nine-times (778537) | about 5 months ago | (#46978925)

Yeah, I always find it funny when people cite the "3 laws" as though they were brilliant fail-safe mechanisms that we should of course put into robots in reality. Did nobody actually read Asimov's stories? They're a catalog of examples of how the 3 laws would fail, or at least of how they'd be too ambiguous.

Re:3 laws deleted (2)

MozeeToby (1163751) | about 5 months ago | (#46979291)

Actually, part of the purpose of the 3 laws stories was to show that even if you built robots from the ground up to not harm humans, you can still end up in situations where robots are dangerous to humans. Almost every 3 laws story revolves around trying to determine why the three laws failed. This becomes more and more true as the robots become more sophisticated: primitive robots cause minor hassles, more advanced robots risk death and serious injury, and more advanced yet take over the planet to reduce the total harm to humanity in general (yes, the stupid movie plot is, in fact, based (loosely) on one of Asimov's stories, though in Asimov's story the takeover was completely non-violent).

What is there to debate? (2, Interesting)

Anonymous Coward | about 5 months ago | (#46978319)

Auto-targeting weapons are only a matter of time. If a college student can make a gun that spits out paintballs with high accuracy, then the best and brightest likely have items far superior.

Yes, the UN will debate it, but it will be like the debate on land mines. A lot of hand wringing, but nothing really getting done, and the belligerent parties will still make them.

Right now, it is only a matter of perfecting manufacturing. I wouldn't be surprised to see, in 5-10 years, sentry robots that shoot at anything without some form of friendly transponder become the norm on many a military post, be it Russian, Chinese, Saudi Arabian, or any other place that needs area denial.

Let's be real here... a couple of independently active robots with high-RPM machine guns are a lot more reliable than soldiers/guards, have no moral issues, have no morale issues, and will "just work". If someone takes one out with a rocket, another can easily return fire.

Add sentry UACVs to the mix, and a rocket attack would be responded to in kind.

I wouldn't be surprised to see even civilian warehouses (a data center in a rural area) protected by autonomous firing machines soon. Might makes right, and SCOTUS has shown that money is speech, so any casualties from these would have no criminal/civil consequences ("there was a warning sign".) I would also not be surprised to see this on train tracks and other places, where there isn't a need for it, but the fear of being gunned down by a robot will keep kids from putting pennies on tracks.

Look how tasers are overused. Expect the same thing with these.

Re:What is there to debate? (1)

meta-monkey (321000) | about 5 months ago | (#46978627)

To be honest, civilian sentry robots with non-lethal weapons would be cool. Rubber bullets, bean bags, paintballs, whatever.

Re:What is there to debate? (1)

Sockatume (732728) | about 5 months ago | (#46978659)

The preferred term is "less-lethal".

Re:What is there to debate? (1)

Electricity Likes Me (1098643) | about 5 months ago | (#46978753)

The real goal is to build a robot which can outrun a human and then just holds onto them until the authorities arrive. The magic of robotics is really going to be the ability to let the robot take the first, second and subsequent shots and keep going.

Re:What is there to debate? (1)

gurps_npc (621217) | about 5 months ago | (#46979001)

As I posted earlier, they are not a matter of time, they are a matter of already built. The USA may have no desire to build or use them, but South Korea sits on the border with North Korea, has built them, and has installed Super aEgis II armed robots on the border.

Nothing will happen (0)

Anonymous Coward | about 5 months ago | (#46978323)

This is a pointless debate, the US sits on the security council and will veto any resolution against the use of their new, yet to be developed, toys.

let me predict how this will go: (-1, Troll)

Anonymous Coward | about 5 months ago | (#46978327)

US, Russia, South Korea, Japan, and China... some of the only countries capable of fielding such weapons, will not sign the agreement.
Only the US will get shit for it.
The rest of the world, who is utterly incapable of ever fielding such weapons will sign the agreement.
Kinda like Kyoto.

Here's the deal. "International law" isn't law. It is simply agreements between a few countries. The agreements don't apply to non-signatory countries. Most of the world didn't sign the Hague or Geneva conventions. So stop talking about "banned weapons" or international law. It is all bullshit, unless the US and the UK get into a war. And the PRIMARY reason "rules of war" were created was NOT to protect innocents or reduce suffering, but to allow wars to end without requiring the complete destruction of one side. (Ruses are allowed, but faking surrender is not, because if the institution of surrender is destroyed then wars must be fought to complete decision.)

The constant civilizing of war only makes it more palatable and common. War should be hell, lest it become a sport.

Browbeat the western world into another ridiculous agreement, China and Russia will do whatever they want.

BTW, homing missiles (infrared, active radar, anti-radiation) are autonomous weapons and have been around for decades.

Re:let me predict how this will go: (1)

sTERNKERN (1290626) | about 5 months ago | (#46978671)

Homing missiles are fired by a person. We are talking about machines here which can have only a vague order like "protect this facility" or "advance to this point", and during the execution of these commands they can identify encountered people as hostile based on some criteria and can "decide" to eliminate them. That does not even come close to a homing missile.

Why would anyone want killer robots? (0)

Anonymous Coward | about 5 months ago | (#46978341)

Didn't they ever watch that documentary starring John Conner and Sarah Conner?

If the goal is to prevent the robot apocalypse... (0)

Anonymous Coward | about 5 months ago | (#46978363)

Robot soldiers more civilian friendly than humans? (4, Insightful)

swb (14022) | about 5 months ago | (#46978387)

I don't know how robot soldiers identify targets, but presuming they have some mechanism whereby they only kill armed combatants it's not hard to see some advantages over human soldiers at least with respect to civilian noncombatants.

More accurate fire -- ability to use the minimal firepower to engage a target due to superior capabilities. Fire back only when fired upon -- presumably robots would be able to withstand some small arms fire and thus wouldn't necessarily need to shoot first and wouldn't shoot civilians.

Emotionally detached -- they wouldn't get upset when Unit #266478 is disabled by sniper fire from a village and decide to kill the villagers and burn the village. You don't see robots engaging in a My Lai-type massacre.

They also wouldn't commit atrocities against civilians, wonton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

Re:Robot soldiers more civilian friendly than huma (0)

Anonymous Coward | about 5 months ago | (#46978435)

wonton destruction

Leave my dumplings alone!

Re:Robot soldiers more civilian friendly than huma (1)

cortcomp (2798707) | about 5 months ago | (#46978555)

You don't see robots engaging in a My Lai-type massacre.

They also wouldn't commit atrocities against civilians, wonton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

Of course they wouldn't, why would you release the whole feature set at once?! What kind of long-term value are you giving to shareholders? V1 identifies targets and shoots, the 1.5 upgrade (which is really a bugfix) is free to certain-level customers but a small fee (50% of the entire robot price) to everyone else. V2 will commit SOME atrocities, but there will be atrocity incompatibility with other models. No upgrade, just have to buy new. Robot Infinity (break away from model numbers for marketing, even though the firmware says V3 all over the non-secure telnet command prompt with default password of 0000) will have coordinated atrocity mode (so it can work with drones and other automated craft to just atrocitize EVERYTHING! ALL NEW VERSION!!!) Finally, livestock killing (enemy supply elimination mode) and beatings (enemy non-lethal submission) and rape (enemy supporter kinetic repetition mode) all come after Robot version Infinity^2 fails and they hurry up and rush Windows 7 errr Robot 7 out the door, which finally does everything that previous robot versions have been promised to do for years. Don't even get me started on Robot Server: "Robot Server makes administrating robots easy by making everything point-and-click GUI and no more scripts and command lines! That's why we're better than Robot *nix and competitor x!" Robot Server 2008+: "GUI is dumb, command-shell robot administrating is the way to go! Scripts where one command updates 2000 robots! No more selecting and clicking! No more security support for Robot XP; if those go off killing people from lack of security updates, hey, you should have upgraded, even if your robot did all the raping and pillaging you needed."

Re:Robot soldiers more civilian friendly than huma (0)

Anonymous Coward | about 5 months ago | (#46978679)

They are like the Unsullied?

Re:Robot soldiers more civilian friendly than huma (0)

Anonymous Coward | about 5 months ago | (#46978815)

They also wouldn't commit atrocities against civilians, wanton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

Why not? Conflict throughout history has always used all of the above as weapons of war. If a robot is given the ability and programming to, it'll do all of the above. And an advanced AI could well decide they help towards its goals of winning a war, either through spreading terror among the enemy or destroying their food supply.

Re:Robot soldiers more civilian friendly than huma (1)

bmajik (96670) | about 5 months ago | (#46979325)

Well, for the specific case of rape, it's hard to see what would be politically gained by a society advanced enough to deploy these. I view some atrocities as a by-product of turning real humans into killers in a war setting.

The discussion is about fully autonomous devices. It makes no sense to program an autonomous device to project power as sexual violence instead of just violence.

Now, if we're talking about remote-controlled machines... I fear we replace one kind of dynamic with another. We protect women from sexual violence by predatory men who are in theater, but introduce the problem of invincible machines being remotely controlled by guys who shoot the prostitutes in Grand Theft Auto.

Almost nothing could be worse than letting humans control these machines remotely with live audio/video feeds. If that happens, you WILL see women stripping for the machines like cam girls, under threat, in fear... before being killed anyway, once the operator has caused enough humiliation and anguish.

Re:Robot soldiers more civilian friendly than huma (0)

Anonymous Coward | about 5 months ago | (#46979489)

It's not the ones that shoot the prostitutes you need to worry about; it's the ones that use bats.

Re:Robot soldiers more civilian friendly than huma (2)

Sibko (1036168) | about 5 months ago | (#46979321)

You don't see robots engaging in a My Lai-type massacre.

They also wouldn't commit atrocities against civilians, wanton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

Well... You won't see them independently decide to do something like that. But orders are literally orders to a robot. You tell them to burn a city to the ground, shoot anyone who tries to flee, and they will burn that city to the ground and shoot everyone who flees. Without remorse, without second guessing orders, without a moment of any hesitation.

Which, frankly, worries me a bit more. Because the upper levels of command in just about every model of human hierarchy always seem to have worrying numbers of psychopaths/sociopaths, beyond what you'd expect in a normal pool of the population. On top of that, they're physically removed from the carnage. It's a lot easier to order the leveling of a rebel-occupied village when you will never personally see the slaughter of innocents that results.

That's not to say humans never do these things. Just that, humans are capable of refusing to do these things. Robots aren't.

Sad but true (1)

slashmydots (2189826) | about 5 months ago | (#46978389)

I hate to be the pessimistic voice of reality here, but terrorists and bad governments like N. Korea and Syria would use them if they had them. They would use nuclear weapons, radiation bombs, chemical warfare, etc. if they had them. Some already have. So hit them with robots now. This is the same stupidity as putting up a sign outside a theater that says no guns allowed inside. What's the result? Criminals and crazy people carry guns in and regular people don't have them. My point is, a ban will do nothing to people who won't follow the rules anyway.

Now that is a problem, but soon will be a feature. (1)

rallycellie (1031068) | about 5 months ago | (#46978399)

"......If a robot killed arbitrarily, it would be difficult to hold anyone accountable."
That is the golden goose...

Assume only responsible for wilful acts? (1)

redelm (54142) | about 5 months ago | (#46978421)

The difficulties are only in people's minds -- especially those who seek justification to push projects. If someone deploys a weapon, they are responsible for all foreseeable consequences. Whether that weapon is a slug of dumb lead, smart missile or robot.

Even if the weaponeer did not intend the effects, they are still responsible, perhaps for manslaughter rather than murder. The capabilities and risks are hardly concealed. OTOH, if they were careless or negligent, then their responsibility increases. Nothing new here, just salesmen trying to assuage buyers and wave away their responsibility.

Ottawa Treaty, Part Deux (4, Informative)

xxxJonBoyxxx (565205) | about 5 months ago | (#46978433)

I expect this will be as successful as the UN's 1990-era anti-mine treaty (the Ottawa Treaty - http://en.wikipedia.org/wiki/O... [wikipedia.org] ), with over a hundred signatories, but not Russia, China or the United States. http://en.wikipedia.org/wiki/L... [wikipedia.org]

Re:Ottawa Treaty, Part Deux (1)

silas_moeckel (234313) | about 5 months ago | (#46978677)

Possibly because it's asinine? Mines are cheap and effective weapons. They would have been far better off requiring that mines have some form of self destruct when not used in a designated area.

Re:Ottawa Treaty, Part Deux (1)

Electricity Likes Me (1098643) | about 5 months ago | (#46978801)

The problem is that the self-destruct on the mines was expected to have about a 3% failure rate (these were actually developed). Leaving 3% of your mines in the ground, and potentially active, after the conflict means you're still left with minefields about as large as they were during the war.

Re:Ottawa Treaty, Part Deux (1)

silas_moeckel (234313) | about 5 months ago | (#46979341)

Reducing the number of mines to be cleared by 97% is a huge improvement. There was research into biodegradable explosives as well.

PS the treaty only covers antipersonnel mines, antitank mines are also dangerous post war to the civilian population.

Re:Ottawa Treaty, Part Deux (0)

Anonymous Coward | about 5 months ago | (#46978853)

Which wouldn't help at all with cleanup. You need them to have a self-destruct mechanism by remote control so you can visit the location, clear the area of any people, then broadcast a destruct command.

Technically it'll increase the cost, but it's entirely possible. The only issue is how to secure it, since it's pointless if the enemy has the key, and the manufacturer may well be on the enemy's side in the future.
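One hedged way to secure such a broadcast destruct command is a pre-shared symmetric key plus a time-bound MAC, so a captured transmission can't be replayed later or aimed at a different field. A minimal sketch, assuming a key burned into each mine at deployment; all names and parameters here are hypothetical, not any real demining protocol:

```python
import hmac
import hashlib
import os
import time

# Hypothetical pre-shared key installed in every mine of one field at deployment.
SECRET_KEY = os.urandom(32)

def sign_destruct_command(key: bytes, minefield_id: str, timestamp: int) -> bytes:
    # Bind the command to a specific minefield and time so a captured
    # broadcast can't be replayed elsewhere or later.
    message = f"DESTRUCT:{minefield_id}:{timestamp}".encode()
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_destruct_command(key: bytes, minefield_id: str, timestamp: int,
                            tag: bytes, max_age_s: int = 300) -> bool:
    if abs(time.time() - timestamp) > max_age_s:
        return False  # stale command: reject replays outside the window
    expected = sign_destruct_command(key, minefield_id, timestamp)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

now = int(time.time())
tag = sign_destruct_command(SECRET_KEY, "field-17", now)
print(verify_destruct_command(SECRET_KEY, "field-17", now, tag))  # True
print(verify_destruct_command(SECRET_KEY, "field-18", now, tag))  # False: wrong field
```

A symmetric scheme still leaves the manufacturer-holds-the-key problem the comment raises; public-key signatures, with only the verification key in the mine, would address that at extra cost.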

Re:Ottawa Treaty, Part Deux (1)

CanHasDIY (1672858) | about 5 months ago | (#46978915)

They would have been far better off requiring that mines have some form of self destruct when not used in a designated area.

... and it would be exactly as effective as requiring that mines be made from rainbows and unicorn farts.

Let's face reality here: The sort of people who start wars, plant mines, and want armies of automated killing machines don't really give a shit how many children they cripple over the next couple of decades, because they know it won't be their children getting crippled.

Re:Ottawa Treaty, Part Deux (0)

Anonymous Coward | about 5 months ago | (#46979087)

It would be better to make them battery powered, once the battery dies they can't go boom.

Re:Ottawa Treaty, Part Deux (1)

jittles (1613415) | about 5 months ago | (#46978877)

I expect this will be as successful as the UN's 1990-era anti-mine treaty (the Ottawa Treaty - http://en.wikipedia.org/wiki/O... [wikipedia.org] ), with over a hundred signatories, but not Russia, China or the United States. http://en.wikipedia.org/wiki/L... [wikipedia.org]

Don't worry. I've been playing Minesweeper almost every waking moment since the signing of the Ottawa Treaty. By my calculations, the earth should be mine free in another decade or two.

Re:Ottawa Treaty, Part Deux (1)

DougF (1117261) | about 5 months ago | (#46978893)

Maybe because 50,000 of them separate North from South Korea, are much cheaper than 50,000 soldiers in their place, and you don't have to send body bags and letters home to widows?

It's not tricky (1)

wyr_taliesin (1000725) | about 5 months ago | (#46978473)

Let states develop and use them, and make it legally binding that the responsibility for deploying them and for anything they do is the personal legal responsibility of the head of government, and with no 'sovereign immunity' get-outs.

Fraught with danger... (2)

GrpA (691294) | about 5 months ago | (#46978485)

Taking such action really is a bad idea. An autonomous killing machine could be as complicated as a military drone with Hellfire missiles, or as simple as a car loaded with autonomous weapons designed to engage anything that moves, with a GPS pre-determined route and self-driving capability, sitting like a mobile minefield in an abandoned house long after the occupants have left, waiting to be activated.

I think the appropriate course of action would be to feed international condemnation of such tactics until anyone involved in the use of such weapons, for any infraction, is treated with ruthlessness by the international community. Just like the use of chemical weapons should have been...

Autonomous weapons are far more frightening than WMDs... And nowhere is safe.

Then again, I wrote a book on the creation of a universal standard for determining if an autonomous weapon could be trusted with the decision to kill, so perhaps I am somewhat hypocritical there.

GrpA

Autonomous weapons could be good (1)

guytoronto (956941) | about 5 months ago | (#46978675)

In a ground combat scenario, autonomous weapons could be a good thing. Right now, soldiers are tasked with protecting others as well as themselves, and in most situations the safest resolution is to kill the antagonist. A machine or robot wouldn't suffer from emotional lapses in judgment (anger, hostility). A robot may have better weapons skills, so instead of a kill shot it may only need to wound. A robot would be more willing to put itself in harm's way to protect a living person.

The programming required for such a machine would be incredibly complex, but controllable with defined precision. A human soldier can't be controlled or programmed, and history shows that humans make a lot of bad decisions when it comes to the use of deadly force.

Re:Autonomous weapons could be good (1)

Anonymous Coward | about 5 months ago | (#46978759)

Only problem is that it can be hacked... and given that there are no US chip fabs (any CPU work is done overseas), there can be backdoors even at the hardware die level. So that autogunner with tank treads could easily be turned around if there were a Sino-European engagement.

The issue of computer security will be even more of one with autonomous weapons... and I'm sorry, it doesn't seem that US and European companies are up to snuff when it comes to actual security where lives are at stake. Hopefully I'll be proven wrong, but I do worry.

Almost entirely valueless discussion... (2)

argStyopa (232550) | about 5 months ago | (#46978697)

...as with most technological weapon issues, those with them, or with a reasonable chance of developing them will defend the idea.

Those without will roundly condemn it using a great deal of moral and ethical language, but their base issue is that they cheerfully condemn the use of any weapons that they cannot yet field.

The UN as a clearinghouse organization for multinational efforts does a massive amount of good that would otherwise be difficult to enable.
The UN's general chambers are worthless talking shops where inconsequential states get to criticize significant, powerful states for acting in their own narrow self-interest ... for reasons based entirely on their OWN narrow self-interests. (Not to mention its main actual value: a way for the favored scions of grubby tinpot regimes to be prostitute-frequenting scofflaws in a place far nicer than their own pestilential capitals.)

Don't they already exist? (1)

Kjella (173770) | about 5 months ago | (#46978745)

Now if you were a kamikaze pilot and everybody was sleeping on the job and you went for a final dive against a modern battleship or aircraft carrier, wouldn't you already be blasted to bits by an autonomous defense system? I imagine the same goes for tanks, planes, helicopters and even individual robot soldiers; you'll never wait until you're blasted to bits to say "yup, that was an enemy". Even if they don't go on their own search & destroy missions, I doubt they'll avoid being used as sentries, convoy escorts and for other defensive purposes. Or even "defensive on the offensive" weapons, like if you point an RPG at a tank. Not to mention self-defense backups if the communication to the mothership is jammed; nobody will leave high-tech military equipment stranded and defenseless because of a simple communications jammer. Not going to happen.
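The point about purely defensive autonomy can be illustrated with a toy rule-based engagement gate of the sort close-in weapon systems effectively implement. The thresholds and field names below are invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    range_m: float            # distance to the contact, meters
    closing_speed_ms: float   # positive = inbound, meters/second
    iff_friendly: bool        # answered the identification-friend-or-foe query

def should_engage(c: Contact, engage_range_m: float = 2000.0,
                  min_closing_ms: float = 200.0) -> bool:
    """Purely defensive gating: fire only on fast, inbound,
    non-friendly contacts inside the engagement envelope."""
    if c.iff_friendly:
        return False
    if c.closing_speed_ms < min_closing_ms:
        return False  # too slow to be a missile or a diving aircraft
    return c.range_m <= engage_range_m

print(should_engage(Contact(1500.0, 300.0, False)))  # True: fast inbound threat
print(should_engage(Contact(1500.0, 300.0, True)))   # False: friendly IFF
print(should_engage(Contact(1500.0, 50.0, False)))   # False: too slow to be a threat
```

No human is in the loop between detection and firing here, which is exactly why such systems already blur the line the summary draws between today's weapons and "fully autonomous" ones.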

gas (0)

AndyKron (937105) | about 5 months ago | (#46978775)

Poison gas kills without human intervention too

Features (0)

Ginger Unicorn (952287) | about 5 months ago | (#46978817)

Do they come with Lotus Notes and a machine gun?

If a robot killed arbitrarily, would it be any... (0)

Anonymous Coward | about 5 months ago | (#46978843)

more difficult to hold anyone responsible than it is now, when drone operators arbitrarily murder civilians based on flawed intelligence, via signature strikes (oh, there's a couple of people with guns, let's blow them up!) on the way to a target based on flawed intelligence, or just plain negligence on the part of the operator? How about firing on wounded civilians being evacuated? War crimes? Please. Amerika is so above reproach... Don't make me put you on a terrorist watch list and detain you indefinitely while I waterboard the truth out of you.

South Korea has them SuperAegis II (2)

gurps_npc (621217) | about 5 months ago | (#46978881)

My understanding is that South Korea has robotic guns set up on the border with North Korea. While they can be overridden by a human, when fully activated they fire at anything that attempts to cross the border.

The Super Aegis II has a 12.7 mm machine gun and a grenade launcher, plus laser and infrared sensors that can see 3 km in the day and 2 km at night. But the gun probably can't shoot that far; it just sees that far.

Vicious Cycle (1)

Pollux (102520) | about 5 months ago | (#46978891)

Killer robots, or fully autonomous weapons, do not yet exist but would be the next step after remote-controlled armed drones used by the US military today.

Weapons contractors make their living imagining new weapons, sharing their visions with the public, then advocating that the US military develop those weapons to keep "the enemy" from making them first. Then once the weapon is invented, new weapons need to be created to defend against the weapon that already exists. Wash, rinse, repeat.

And people wonder why so much money is spent on defense.

Re:Vicious Cycle (0)

Anonymous Coward | about 5 months ago | (#46979145)

I know what you're doing. You're trying to stir up class warfare by always attacking the job creators.

Why? (1)

Mike Frett (2811077) | about 5 months ago | (#46978907)

We need all of this why? Wanting Weapons and Wars just shows how non-advanced and uncivilized Humans are and how we haven't even moved out of our caves yet.

If we are to survive well into the future, we need to learn to disregard our primitive ways and start thinking about others instead of only ourselves. The whole idea of separate countries and separate peoples disgusts me deeply. People, we are not separate; we all live on a tiny blue planet in the middle of an unexplored ocean of awesomeness. If we can't rid ourselves of our primitive nature, maybe we are long overdue for extinction.

Re:Why? (1)

clovis (4684) | about 5 months ago | (#46979127)

We need all of this why? Wanting Weapons and Wars just shows how non-advanced and uncivilized Humans are and how we haven't even moved out of our caves yet.

If we are to survive well into the future, we need to learn to disregard our primitive ways and start thinking about others instead of only ourselves. The whole idea of separate countries and separate peoples disgusts me deeply. People, we are not separate; we all live on a tiny blue planet in the middle of an unexplored ocean of awesomeness. If we can't rid ourselves of our primitive nature, maybe we are long overdue for extinction.

Well said.
All we need to do is put everyone who refuses to disregard their primitive ways into re-education camps where they can grow into rational modern humans.
But, in the past this has caused some hostility between the people being rounded up and those who have been tasked with finding them.
But, it has a simple solution. All we need to do is make numbers of autonomous robots to do the gathering.
It will be easy to identify the primitive humans, because they'll be the ones trying to resist re-education. Sadly, it may require that the autonomous robots be armed.

Why the condemnation? (1)

Entropius (188861) | about 5 months ago | (#46978993)

People condemn autonomous killing robots because they might screw up and kill something that shouldn't be killed.

How is this any different from what humans in charge of deciding who to kill do? (exhibit A: the Iraq war)

Soldiers with no human faults? (1)

penguinoid (724646) | about 5 months ago | (#46979065)

I'm outraged that anyone is even considering building soldiers that have no intrinsic sense of self-preservation, adrenaline, aggression, or revenge. Imagine a soldier that would allow itself to be destroyed rather than fire when ordered not to, e.g. if there would be civilian casualties or if the target is not definitively identified. Obviously that sort of thing can't be allowed, because if we don't kill innocent bystanders, how can we spawn new enemies to fight? Sadly, I suspect that robot soldiers won't actually be built with the goal of avoiding human faults.

Not accountable? Bullshit. (1)

UnknownSoldier (67820) | about 5 months ago | (#46979137)

> it would be difficult to hold anyone accountable.

Whoever built the hardware, and programmed the software would be accountable.

In shootings we hold the shooter responsible because it was a human who committed the crime. Since there is no human at the trigger, we can simply go back up the "chain" to find the people who ARE responsible.

This isn't rocket science.

Knee Jerk Yo Yo (1)

Jim Sadler (3430529) | about 5 months ago | (#46979197)

Think about the size of the army that China can deploy. We have zero ability to even dream of a one-on-one combat situation with a traditional Chinese army. Fielding mechanical warriors of various types would be our real hope, other than using nuclear bombs or other weapons of mass destruction. Or the US could put two million soldiers on the line and watch them be swarmed over as a trivially small force. Then there is the problem of cost, and not just for the full military responses. Effective border control is an economic back-breaker. We could easily put automated devices on our border with Mexico which would shout out a freeze order until a custodial officer arrives and, if movement continues, simply execute the target. We could make it next to impossible to violate our border and save millions in patrol costs continually. The same could be done at sea to protect restricted waters from foreign exploitation by whaling or fishing vessels. And then there is the drive-by shooter issue so common in parts of California. Imagine punks firing into a home when suddenly a robotic warrior pops up with a very good weapon blazing at the offenders' car. I know, all you who whine about guns; I repeat, the problem is only that the wrong people are getting shot. Anything that gets the right people shot is a blessing.

The Navy has been using them for years. (1)

technosaurus (1704630) | about 5 months ago | (#46979373)

They are called CIWS, aka R2-D2.  http://en.wikipedia.org/wiki/Phalanx_CIWS

Slippery slope (1)

OpenSourced (323149) | about 5 months ago | (#46979493)

If a robot killed arbitrarily, it would be difficult to hold anyone accountable.

Not like now, when every time that an Afghan peasant is killed in error, heads roll.
