
Weapons Systems That Kill According To Algorithms Are Coming. What To Do?

Soulskill posted about 7 months ago | from the have-them-fight-the-decepticons dept.

Robotics

Lasrick writes "Mark Gubrud has another great piece exploring the slippery slope we seem to be traveling down when it comes to autonomous weapons systems: 'Autonomous weapons are robotic systems that, once activated, can select and engage targets without further intervention by a human operator. Advances in computer technology, artificial intelligence, and robotics may lead to a vast expansion in the development and use of such weapons in the near future. Public opinion runs strongly against killer robots. But many of the same claims that propelled the Cold War are being recycled to justify the pursuit of a nascent robotic arms race. Autonomous weapons could be militarily potent and therefore pose a great threat.'"


Skynet (5, Insightful)

ackthpt (218170) | about 7 months ago | (#45902517)

Yet another predictor.

Bring on the Terminators.

Re:Skynet (5, Funny)

Anonymous Coward | about 7 months ago | (#45902783)

The easiest way to avoid being vaporized is to wear a shirt that reads "/dev/null". No intelligent system will send anything your way.

captcha: toasted (damn, /dev/null has never failed me before)

Re:Skynet (-1)

Anonymous Coward | about 7 months ago | (#45902845)

Poor Obummer. If this had come earlier he could have blamed the algorithms for the extrajudicial drone kills of US citizens.

Where have I heard this before? (2)

fizzer06 (1500649) | about 7 months ago | (#45902523)

Select targets? Really? Wait until the system realizes ALL humans are targets.

Re:Where have I heard this before? (4, Funny)

Lisias (447563) | about 7 months ago | (#45902565)

Select targets? Really?

Wait until the system realizes ALL humans are targets.

Don't worry. Fail-safe measures will be implemented to keep the systems secure. Look at all the fabulous advances made in computer security nowadays and rest assur... Oh, wait!

Re:Where have I heard this before? (0)

Anonymous Coward | about 7 months ago | (#45902623)

We will need those security holes when the robots don't take commands on the normal channels any more.

Re:Where have I heard this before? (0)

robsku (1381635) | about 7 months ago | (#45902673)

A fair point... I want to be able to trust them security holes when needed, though, so let's rule out Microsoft on this one, OK?

Re:Where have I heard this before? (2)

bunratty (545641) | about 7 months ago | (#45902737)

No problem! Just scorch the sky so they can't get solar power any more.

Re:Where have I heard this before? (1)

Opportunist (166417) | about 7 months ago | (#45902759)

But then finally we'll see some kind of response to the problem, because then FINALLY there will be people dying from faulty software.

Re:Where have I heard this before? (2)

ackthpt (218170) | about 7 months ago | (#45902897)

Select targets? Really?

Wait until the system realizes ALL humans are targets.

Don't worry. Fail-safe measures will be implemented to keep the systems secure. Look at all the fabulous advances made in computer security nowadays and rest assur... Oh, wait!

The fail-safe system will be contracted out to the people who profited by writing and then fixing the Affordable Care Act websites.

Be afraid. Be very afraid.

Re:Where have I heard this before? (5, Funny)

Anonymous Coward | about 7 months ago | (#45902613)

But many of the same claims that propelled the Cold War are being recycled to justify the pursuit of a nascent robotic arms race.

You environmental weenies are all the same: you go on and on about how we all need to recycle, but when we do it, you complain about how we're not doing it "right".

We could not make them (4, Insightful)

jjeffries (17675) | about 7 months ago | (#45902539)

They're not "coming" as if from space. We just need to choose for them not to exist and they won't. These things will (or won't) be made by individuals who can make moral decisions.

Don't be a terrible individual; don't make or participate in the making of terrible things.

Re:We could not make them (2, Insightful)

geekoid (135745) | about 7 months ago | (#45902621)

Except that, looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.

I don't know why people think they are bad.

Re:We could not make them (0)

Anonymous Coward | about 7 months ago | (#45902641)

We have no history of robots selecting their targets autonomously.

Re:We could not make them (1)

fustakrakich (1673220) | about 7 months ago | (#45902761)

My 3D printer is making one now. But don't you worry. It will only target people in my foes list.

Re:We could not make them (4, Insightful)

Opportunist (166417) | about 7 months ago | (#45902797)

We have more accurate weapons than ever. Compare the average cruise missile to the average arrow and tell me:

1. Which one is more accurate?
2. Which one causes more deaths?

You will notice that they are NOT mutually exclusive. Quite the opposite.

Re:We could not make them (0)

Anonymous Coward | about 7 months ago | (#45902843)

There are 10 types of people - those whose countries possess drones, and those whose countries do not.
The latter type are the ones who think they are bad.

It's really not a complicated concept.

Re:We could not make them (5, Insightful)

Anonymous Coward | about 7 months ago | (#45902987)

Except that, looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.

I don't know why people think they are bad.

Extra-judicial killings of US citizens.

Re:We could not make them (0)

Anonymous Coward | about 7 months ago | (#45902721)

That really flies in the face of all past human behavior, which usually follows the "do it to them before they can do it to you" model.

Re:We could not make them (4, Interesting)

timeOday (582209) | about 7 months ago | (#45902775)

We just need to choose for them not to exist and they won't.

I disagree. At some point a civilian smartphone, or self-driving car, will contain practically all the technology to be weaponized. (E.g. "avoid people" becomes "pursue people"!) Once you have the sensors, pattern recognition, and mobility, there's no way to control all the possible applications.
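
To see how thin the line is, here's a toy sketch (every name and number below is invented for illustration): the identical sensing and scoring code serves both behaviors, one sign flip apart.

def score_heading(heading, people_bearings, pursue=False):
    # Score a candidate heading by angular distance to the nearest
    # detected person. pursue=False steers away; pursue=True steers toward.
    nearest = min(abs((heading - b + 180) % 360 - 180) for b in people_bearings)
    return -nearest if pursue else nearest

def pick_heading(people_bearings, pursue=False):
    return max(range(360), key=lambda h: score_heading(h, people_bearings, pursue))

detections = [10.0, 95.0]                     # bearings (degrees) from a person detector
print(pick_heading(detections))               # "avoid people"
print(pick_heading(detections, pursue=True))  # same code, "pursue people"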

You won't even know if you're helping make them. (5, Insightful)

ron_ivi (607351) | about 7 months ago | (#45902859)

One guy'll be making a computer vision system to recognize faces "to make it easier to log in to your cellphone".

Another guy'll be making a robot painting system that aims its sprayers at cars "to make a more profitable assembly line".

Yet another'll make a self-driving car "so you won't have to worry about drunk drivers anymore".

Once those pieces are all there (hint: today), it doesn't take much for the last guy to glue the three together, hand it a gun instead of spraypaint, and load it with a database of faces you don't like.
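
A hedged sketch of that glue step (every class below is a made-up stand-in for one of the three benign components; the stubs exist only to make the sketch run):

from dataclasses import dataclass

@dataclass
class Track:
    image: object
    position: tuple

class FaceID:                       # piece 1: the phone-unlock recognizer
    def identify(self, image): return "alice"

class Nav:                          # piece 3: the self-driving object tracker
    def track_objects(self, frame): return [Track(frame, (3, 4))]

class Turret:                       # piece 2: the assembly-line aimer
    def aim(self, pos): print("aiming at", pos)
    def fire(self): print("bang")

def glue_step(face_id, nav, turret, blacklist, frame):
    for obj in nav.track_objects(frame):             # find and track
        if face_id.identify(obj.image) in blacklist: # recognize
            turret.aim(obj.position)                 # gun instead of spraypaint
            turret.fire()

glue_step(FaceID(), Nav(), Turret(), {"alice"}, frame=None)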

It seems a poor comparison. (4, Insightful)

TiggertheMad (556308) | about 7 months ago | (#45903007)

I think that there is a difference, though. It is one thing to create unrelated technology that is dangerous when linked together. It is another thing to create technology that doesn't have an application outside of killing people. By your argument, every invention all the way back to using flint and tinder to create fire is nothing but a weapon, and why should we even have bothered?

My prediction is that this technology will float about the edge of popular awareness, until an unbalanced individual sets up a KILLMAX(tm) brand 'smartgun perimeter defense turret' in an elementary school and murders a bunch of children and escapes because he didn't have to be on the scene. Then national outrage will lead to mass bans on such weapons.

Should we be making such weapons? I don't know. I suppose the argument can be made that they fill the same role as land mines, but with the upside that they're less of a problem to get rid of when the fighting stops. I find the glee we as a species take in building better ways of killing each other really depressing, on the whole.

Re:We could not make them (0)

Anonymous Coward | about 7 months ago | (#45903041)

That's just wishful thinking. If there's a profit to be made or a proverbial child to be thought of, then somebody will make these weapons and somebody will use them. Anybody who makes, sells, owns, or activates them should be tried the same as if they had fired the shot, i.e. bear full and joint responsibility for the machine's actions.

Don't be a target or act like a target... (4, Funny)

bobbied (2522392) | about 7 months ago | (#45902541)

Problem solved!

Re:Don't be a target or act like a target... (0)

Anonymous Coward | about 7 months ago | (#45902731)

Problem solved!

Everyone's a target to someone.

Re:Don't be a target or act like a target... (0)

Anonymous Coward | about 7 months ago | (#45902763)

And how does a target "be" or "act"? I imagine that most of the time a target acts just like you or I do and what makes them a target is not how they are acting but the intent in their brain. These things are not visible to a drone flying at 60k feet.

Sure there are the obvious cases like opening fire on a military installation or similar, but these are already taken care of by low-tech methods like the military installation turning the "target" into swiss cheese or pink goo.

Re: Don't be a target or act like a target... (1)

Anonymous Coward | about 7 months ago | (#45902825)

Just wait until target is defined as everyone who isn't a subservient and conforming slave.

Controlling and liquidating masses that are one day unhappy with the status quo will become very easy.

When an entire population was angry and furious with its government and no other help was possible, an uprising and a revolution were possible. No longer, with these bots.

The French Revolution, the American War of Independence against the Brits, the Arab Spring: all of these would have been suppressed with killer bots. As a result, the French would still suffer under a malevolent monarchy, the USA wouldn't exist, and the Arab dictators would still rule.

I for one welcome... (1)

Anonymous Coward | about 7 months ago | (#45902545)

Our robotic killing overlords

Robocop (1)

GeoffSmith1981 (795607) | about 7 months ago | (#45902547)

Reminds me of the ED-209 from RoboCop.

Sci-Fi to watch... (2, Insightful)

Anonymous Coward | about 7 months ago | (#45902557)

Terminator

ST TNG: Arsenal of Freedom

Etc...

Do This! (2)

Mikkeles (698461) | about 7 months ago | (#45902561)

Hack the system with an algorithm that kills the deployers, of course!

Source code: (1)

Tablizer (95088) | about 7 months ago | (#45902595)

while (humans.count() > 0) {
  kill(humans.any());
}

Re:Source code: (1)

Anonymous Coward | about 7 months ago | (#45902629)

while (1) {
      kill (the_humanoid);
      stop (the_intruder);
}

Re:Source code: (0)

jkauzlar (596349) | about 7 months ago | (#45902637)

while (humans.count() > 0) {
  kill(humans.any());
}

I have a feeling it'll be closer to


while(muslims.count() > 0) {...

Re:Source code: (1)

MobSwatter (2884921) | about 7 months ago | (#45902697)

Nah, if logic prevails, machines will protect humanity from itself. I'm sure the NSA will backdoor it for the purpose of influence, though.

Re:Source code: (1)

Anonymous Coward | about 7 months ago | (#45902785)

I'm missing the downside here.

Re:Source code: (2)

TiggertheMad (556308) | about 7 months ago | (#45903051)

while (humans.count() > 0) {
  kill(humans.any());
}

I have a feeling it'll be closer to

while(muslims.count() > 0) {...

It will be even more depressing than that...you can't identify religious affiliation visually:

while(target.skincolor < 0.5) {....

Re:Source code: (-1)

Anonymous Coward | about 7 months ago | (#45903059)

while (humans.count() > 0) {
  kill(humans.any());
}

I have a feeling it'll be closer to


while(muslims.count() > 0) {...

That's only because Islamic societies are stuck in the Dark Ages - literally.

Else they would do it first. There's a reason why non-Muslim societies are in what's called dar al-Harb [wikiislam.net] - the House of War.

Re:Source code: (5, Funny)

Anonymous Coward | about 7 months ago | (#45902767)

Bender: [while sleeping] Kill all humans, kill all humans, must kill all hu...
Fry: [shakes him] Bender wake up.
Bender: I was having the most wonderful dream. I think you were in it.

Re:Source code: (0)

Anonymous Coward | about 7 months ago | (#45902811)

So it's a bending unit?

Re:Source code: (0)

Anonymous Coward | about 7 months ago | (#45902813)

mov bx, #humanslist    ; BX -> table of human targets
mov si, #humancount    ; SI = index of the last target
seeker:
mov ax, [bx+si]        ; load the next target
call shoot
dec si                 ; DEC sets ZF...
jnz seeker             ; ...so loop until SI hits zero

Re:Source code: (0)

Anonymous Coward | about 7 months ago | (#45902947)

Any reason why you didn't use lodsw?

We'd be safer with Skynet (0)

Anonymous Coward | about 7 months ago | (#45902597)

Those algorithms are decided upon by those same folks that advocate teaching troops that 'those brown ragheads are inhuman monsters that want to rapemurder innocent babies. Especially the brown women and children'. The same folks that think letting the country know when they've drastically abused their authority and are committing treason is the real treason. The same folks that consider preventive measures against disastrous weather a waste of money because more poor people wouldn't die if you do that.

An evil AI would just try to kill us. It takes people to ensure we instead all suffer a fate far worse than death.

What does "Automatically Selecting Targets" Mean? (1)

roeguard (1113267) | about 7 months ago | (#45902599)

While the first thing that comes to mind is a machine that instantly targets and destroys, I wonder if this could be something more methodical. Since "friendly" human lives aren't on the line for the decision maker, these could be used to slow down the process of determining whether or not to use lethal force.

For example, much larger sets of data could be used than just "looks like a bad guy with a gun and I think he might want to shoot me." With facial recognition, individual enemy combatants could be tracked, and autonomous lethal force authorized only after confirming that the target has been actively involved in a prior action.

I'm probably being overly optimistic, but without adrenaline, the threat of immediate bodily harm, etc., the option to slow things down and not just react under fire is a new luxury that human soldiers aren't (reasonably) afforded.
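
As a hedged sketch of what "slowing down" could mean in code (the threshold and record format are invented for the illustration): authorization requires a minimum number of independently confirmed prior actions, never a first-sight reaction.

PRIOR_CONFIRMATIONS_REQUIRED = 3

def authorized(track_id, prior_action_log):
    # Authorize only if this identity was confirmed in enough
    # independent prior engagements -- never on first sight.
    confirmed = {rec["action_id"] for rec in prior_action_log
                 if rec["track_id"] == track_id and rec["confirmed"]}
    return len(confirmed) >= PRIOR_CONFIRMATIONS_REQUIRED

log = [
    {"track_id": 17, "action_id": "a1", "confirmed": True},
    {"track_id": 17, "action_id": "a2", "confirmed": True},
    {"track_id": 17, "action_id": "a3", "confirmed": False},
]
print(authorized(17, log))   # False: only two confirmed priors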

Re:What does "Automatically Selecting Targets" Mea (1)

im_thatoneguy (819432) | about 7 months ago | (#45902675)

I can think of some situations where you don't even have to use facial recognition per se. If you're in a vehicle and the system detects an RPG fired at you, it's pretty easy to distinguish "RPG" from background noise. It should also be relatively easy to detect the source and immediately return fire.

If firing an RPG is a guaranteed way to get hit with several belts of radar/IR guided 50 caliber machine gun fire--you might have a really hard time finding people willing to pull the trigger. Similarly a return-fire system could probably identify and instantly return precise fire at a sniper faster than they could take cover or even theoretically before the first bullet hit its target.
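
A hedged sketch of the return-fire half of that (the sensor geometry and all numbers are invented; a real system would fuse radar/IR, not a single sensor pair): two spaced sensors give a time difference of arrival, the delay gives a bearing, and the gun slews to it.

import math

SPEED_OF_SOUND = 343.0   # m/s
SENSOR_SPACING = 2.0     # m between the two acoustic sensors

def bearing_from_delay(dt):
    # Time-difference-of-arrival between two sensors gives the angle
    # of the source relative to the sensor baseline.
    x = max(-1.0, min(1.0, dt * SPEED_OF_SOUND / SENSOR_SPACING))
    return math.degrees(math.acos(x))

def respond(dt, signature_matches_rpg):
    if signature_matches_rpg:   # classifier verdict, assumed given here
        print("returning fire at %.1f degrees off-axis" % bearing_from_delay(dt))

respond(dt=0.0029, signature_matches_rpg=True)   # roughly 60 degrees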

Re:What does "Automatically Selecting Targets" Mea (0)

Anonymous Coward | about 7 months ago | (#45902725)

No threat of immediate bodily harm? I'm sure those killing robots will be targeted as well. They certainly will operate under fire. And since they are expensive, they also will be programmed to protect themselves.

However, one may hope that if robots get more common on all sides, and thus pose more of a threat than humans do, robots will be programmed to shoot at robots instead of humans.

Re:What does "Automatically Selecting Targets" Mea (0)

Anonymous Coward | about 7 months ago | (#45902865)

You're too optimistic.

You're right about facial recognition, though: think of an advanced landmine that just shoots anything with a face.

Easy (4, Funny)

tool462 (677306) | about 7 months ago | (#45902601)

Wear a t-shirt with a message written in a carefully formatted font so it causes a buffer overflow, giving your t-shirt root privileges.
Mine would have the DeCSS code on it, so the drone starts shooting pirated DVDs at everybody. The RIAA will make short work of the problem at that point.

Re:Easy (1)

Em Adespoton (792954) | about 7 months ago | (#45902857)

Wear a t-shirt with a message written in a carefully formatted font so it causes a buffer overflow, giving your t-shirt root privileges.
Mine would have the DeCSS code on it, so the drone starts shooting pirated DVDs at everybody. The RIAA will make short work of the problem at that point.

The RIAA/MPAA have been using bots to select and engage targets for years....

Re:Easy (-1)

Anonymous Coward | about 7 months ago | (#45902995)

decss is for faggots. endless ip bullshit is for faggots who have no idea what they're talking about.
 
go fuck yourself.

Already in use? (1)

K. S. Kyosuke (729550) | about 7 months ago | (#45902615)

I thought the Aegis weapon system has had something called Auto-Special mode for quite some time? Basically, you sit back and watch the targets get destroyed.

Re:Already in use? (1)

alexander_686 (957440) | about 7 months ago | (#45902677)

Two points. First, while there is no human "in the loop", there is a human "on the loop": they have discretion here. Second, it is basically a defensive system for shooting down incoming cruise missiles; its range of targets is pretty limited. This would be very different from an offensive autonomous system that hunts and kills on its own.
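
The difference is easy to state in code. A minimal sketch (the operator interface is a made-up stand-in): "in the loop" means nothing fires without approval; "on the loop" means it fires unless a veto arrives in time.

import time

def engage_in_the_loop(target, operator_approves):
    # Human IN the loop: no approval, no shot.
    if operator_approves(target):
        print("engaging", target)

def engage_on_the_loop(target, veto_received, window_s=5.0):
    # Human ON the loop: fires by default; the operator only has a
    # veto window, which is how a defensive auto mode can react
    # faster than a person.
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if veto_received():
            print("vetoed")
            return
        time.sleep(0.1)
    print("engaging", target)

engage_on_the_loop("inbound missile", veto_received=lambda: False, window_s=0.3)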

Re:Already in use? (1)

Joe_Dragon (2206452) | about 7 months ago | (#45902821)

That will keep control at the top, where it belongs (until the systems at the top take control and there is no one in the missile silos to stop the launch).

Re:Already in use? (1)

demachina (71715) | about 7 months ago | (#45902891)

The problem with remotely piloted vehicles is that the up and down links are the weak point. If you take out your opponent's comm links with jamming, or by shooting down their relays, you take out their entire drone capability, at least until they can restore the comm links.

If you are going to depend on drones, the only solution is for them to be autonomous. The only other option is for them to be manned, and introducing pilots entails increased cost, lower mission duration, and increased risk of loss of life and capture.

I would think there are probably already autonomous offensive drones flying; they are probably just restricted to targeting predetermined GPS locations. They desperately need the ability to discern the people or vehicles (cars, armor, ships, planes) that are the desired target without having to rely on a comm link or pilot.

It is nearly an inevitable technological evolution now that drones are out of Pandora's box. If the U.S. doesn't do it everyone else will.
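
A hedged sketch of that fallback logic (timings and behaviors invented for the illustration): a heartbeat watchdog that degrades from remote piloting to preprogrammed autonomy when the link goes quiet.

LINK_TIMEOUT_S = 2.0

def control_tick(seconds_since_heartbeat, remote_command):
    # One tick of the drone's control loop.
    if seconds_since_heartbeat < LINK_TIMEOUT_S:
        return remote_command              # pilot still has the link
    return "fly preprogrammed waypoints"   # link jammed/lost: autonomy takes over

print(control_tick(0.4, "orbit left"))   # remote piloting
print(control_tick(7.0, "orbit left"))   # falls back to autonomy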

Weapons Systems That Kill According To Algorithms (4, Interesting)

aardvarkjoe (156801) | about 7 months ago | (#45902617)

What To Do?

"Endeavor to be one of the people writing the algorithms" would probably be a good idea.

Re:Weapons Systems That Kill According To Algorith (0)

Anonymous Coward | about 7 months ago | (#45902751)

And be near it during a flight test? Not me. They will surely off-shore that work, right?

Re:Weapons Systems That Kill According To Algorith (1)

Em Adespoton (792954) | about 7 months ago | (#45902877)

What To Do?

"Endeavor to be one of the people writing the algorithms" would probably be a good idea.

You'd better make sure you're REALLY good -- because those algorithms are going to have to protect you from the masses -- both the ones who think they go too far and the ones who think they don't go far enough.

Microsoft inside (1)

gmuslera (3436) | about 7 months ago | (#45902627)

Trying to make people forget what "Blue Screen of Death" originally meant.

Killer Robots... (2)

medv4380 (1604309) | about 7 months ago | (#45902645)

I want Killer Robots that kill only Killer Robots. Having an army of Killer Robots that kill people is just asking for someone to run the "kill all humans" command while logged in as root in the All Countries Directory, by accident, or on purpose by an anarchist wanting a lot of death.

Re:Killer Robots... (0)

Anonymous Coward | about 7 months ago | (#45902741)

Or by anyone thinking of the profit to be had in reloading or rebuilding contracts.

Re:Killer Robots... (4, Interesting)

a_ghostwheel (699776) | about 7 months ago | (#45902745)

You'll like this [gutenberg.org]

Welcome to the quality assurance team (0)

Anonymous Coward | about 7 months ago | (#45902659)

This is ED209.

Select, but not fire (1)

Culture20 (968837) | about 7 months ago | (#45902665)

I thought humans need to be in the loop at all times, so AIs can select, but humans need to pull the trigger. "Selected target image displayed above. [cancel] [OK]"
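
A minimal sketch of that gate (assuming the AI only ranks candidates, and the trigger path requires explicit operator approval):

def engage(candidates, operator_confirm):
    target = max(candidates, key=lambda c: c["score"])   # AI selects
    print("Selected target image displayed above. [cancel] [OK]")
    if operator_confirm(target):                         # human pulls the trigger
        print("engaging", target["id"])
    else:
        print("cancelled")

engage([{"id": "t1", "score": 0.9}, {"id": "t2", "score": 0.4}],
       operator_confirm=lambda t: False)   # defaults to NOT firing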

Results are known (1)

bob_super (3391281) | about 7 months ago | (#45902669)

Devices which can engage targets without human intervention are fairly common: landmines.
We do know that they kill hundreds of innocents every year.

Put some cameras and algorithms on them and you may kill or maim fewer innocents, but you won't get to zero. You can't get to zero even with a human brain behind the trigger; how do you make a machine decide which teenager is a bad guy?

Actually, let me offer a simple solution to that last question:
Connect the machine to a massive database which contains data about everyone, so that the robotic killer can just check a "life-long naughty" list and confirm the identity of the target based on their previously collected behavior (and carried electronics). You don't even need constitutionally protected data; metadata should be enough.

There. No human needed: you can get perfect killers cruising the world for enemies of the state, autonomously. Feel safe?
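
In code, the whole check fits in a dozen lines (a hedged sketch; every field name is invented):

NAUGHTY_LIST = {"subject-4471"}

def confirm_target(observed_metadata):
    # "Confirm" identity purely from collected metadata -- phone IMEI,
    # gait profile, movement history. No protected content needed.
    subject = (observed_metadata.get("imei_owner")
               or observed_metadata.get("gait_match"))
    return subject in NAUGHTY_LIST

print(confirm_target({"imei_owner": "subject-4471"}))   # True: engage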

No, this is not designed to be modded funny.

Re:Results are known (1)

Krishnoid (984597) | about 7 months ago | (#45903011)

Connect the machine to a massive database which contains data about everyone, so that the robotic killer can just check a "life-long naughty" list and confirm the identity of the target based on their previously collected behavior (and carried electronics). You don't even need constitutionally protected data; metadata should be enough. There. No human needed: you can get perfect killers cruising the world for enemies of the state, autonomously. Feel safe?

Robot Santa! Is it Christmas again already?

Re:Results are known (1)

Herkum01 (592704) | about 7 months ago | (#45903027)

It is funny because you have basically referred to Robot Santa Claus [wikia.com]

I'm fine with it (0)

Anonymous Coward | about 7 months ago | (#45902671)

as long as the general public has them too of course.

How do human soldiers kill? (3, Insightful)

mi (197448) | about 7 months ago | (#45902687)

Weapons Systems That Kill According To Algorithms Are Coming

I don't get this... Are human soldiers killing based on something other than algorithms? Or is it just that their implementations are coded in vague human languages, and that makes them feel somehow warm and fuzzy? Well, the Pentagon's Ada may be considered similar, but only in jest...

I'd say that whether such systems are bad or good is still up to the algorithms, not the hardware (nor the pinkware) that executes them.

Re:How do human soldiers kill? (1)

Anonymous Coward | about 7 months ago | (#45902923)

I think the point, which all armies know, is that your question is answered with "reluctantly"; a machine acting in accordance with an algorithm won't hesitate.

Re:How do human soldiers kill? (0)

Anonymous Coward | about 7 months ago | (#45902997)

I'm not sure the people in My Lai would agree with you, or more recently, some of the people in Iraq [wikipedia.org]

POT (Personal Open Terminal) = good intentions (0)

Anonymous Coward | about 7 months ago | (#45902699)

way ahead of the curve with POT (Personal Open Terminal) our personas are as open as whoever logs on to look? along with daily personal confessions of non-corepirate thoughts will keep us focused for the war against us by us & for us

this is bad (0)

Anonymous Coward | about 7 months ago | (#45902709)

We need humans controlling the trigger because we know that humans cannot be conditioned to kill people without question or remorse.

Re:this is bad (1)

bobbied (2522392) | about 7 months ago | (#45902823)

So Dennis Rader, is that you? No, Charles Manson then?

Re:this is bad (0)

Anonymous Coward | about 7 months ago | (#45902881)

I'm Adolf Hitler and Theodor Herzl's love child.

We already have mines (4, Interesting)

jamiefaye (44093) | about 7 months ago | (#45902715)

... both land and naval. They have become more sophisticated in that they can be triggered by target characteristics, and in the naval case, maneuver.

Re:We already have mines (2)

LWATCDR (28044) | about 7 months ago | (#45902839)

Yep. You also have anti-ship missiles that you can fire along a vector and that will pick their own target, anti-radar missiles that will hang from a chute waiting for the radar to come on... and so on.

These already exist (0)

Anonymous Coward | about 7 months ago | (#45902727)

They're called viruses.

What to do? (1)

cromega (2044870) | about 7 months ago | (#45902753)

Try not to get in the line of fire.

let's play global thermonuclear war (1)

Joe_Dragon (2206452) | about 7 months ago | (#45902755)

what side do you want?

1. United States
2. Russia
3. United Kingdom
4. France
5. China
6. India
7. Pakistan
8. North Korea
9. Israel

Greetings Professor Falken. (2)

LocalH (28506) | about 7 months ago | (#45902757)

Shit just got real.

Fictional treatment in _David's Sling_ (2)

steveha (103154) | about 7 months ago | (#45902773)

David's Sling, a novel by Marc Stiegler, is about the first "information age" weapons systems. These are autonomous robotic weapons that use algorithms to decide which targets to hit, and the algorithms are designed to take out enemy communications and decision-making. The weapons would try to identify important comm relays and take them out, and would analyze comm traffic to decide who is giving orders and take them out.

The book was written before the fall of the Soviet Union, and the big finale of the book involves a massive Soviet invasion of Europe and the automated weapons save the day.

Unlike some portrayals of technology, this book covers project planning, testing, and plausible software development. It contains tense scenes of QA testing, where the team makes sure their hardware designs are adequate and that their software mostly works. (They can remote-update the software but of course not the hardware.)

Mostly they left the weapons autonomous, but there was a memorable scene where a robot was having trouble deciding whether to kill someone, and the humans overrode the robot and had it leave the guy alone. (The guy was injured, lying there but moving a little bit, and the robot was not sure whether the guy was already dead or should be killed again. Hmm, now that I think about it, this seems rather implausible, but it was a nifty scene in the book.)

http://www.goodreads.com/book/show/3064877-david-s-sling [goodreads.com]

P.S. I bought the book when it first came out, and it carried an ad for a forthcoming hypertext edition. I think that edition was never actually made, but I wish it had been.

Easy (3)

istartedi (132515) | about 7 months ago | (#45902789)

Hack in. Make military-industrialists fit the target profile. Problem solved.

Weapons Systems That Kill According To Algorithms (1)

The Grim Reefer (1162755) | about 7 months ago | (#45902791)

...Are Coming. What To Do?

For the love of all that is good and decent in this world, find and protect John Connor.

In the words of Lord Kril (1)

korbulon (2792438) | about 7 months ago | (#45902803)

"We die."

I feel sorry for... (1)

ArcadeNut (85398) | about 7 months ago | (#45902805)

The BETA testers of this system....

Like the Death Penalty (2)

bziman (223162) | about 7 months ago | (#45902817)

It's good in principle, but I oppose it because implementations are never foolproof, and when the result is death, there's no way to change your mind later.

Already exists: Aegis (1)

mveloso (325617) | about 7 months ago | (#45902827)

From what I understand, Aegis already does this - and it did it a long time ago. Where has subby been, in the basement?

It's quite obvious, really (0)

Anonymous Coward | about 7 months ago | (#45902847)

What To Do?

I think you mentioned Algorithms. In that case the answer is obvious: hack the system!

Haven't we... (2)

camperdave (969942) | about 7 months ago | (#45902849)

Haven't we had these for a long time already? I remember reading a couple of years ago about some DIY hobby guy putting together an Aliens-style sentry gun out of an old camera and a paintball gun. And if a DIY hacker can do it, the military has it. Also, don't Predator drones already have autonomous kill capability?
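
The hobbyist version really is that simple. A hedged sketch (OpenCV frame differencing; aim_at is a made-up stand-in for the pan/tilt servos):

import cv2
import numpy as np

def aim_at(x, y):                # stand-in for the paintball gun's servos
    print("aim", x, y)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                  # what moved since last frame?
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) > 500:                               # enough motion pixels
        aim_at(int(xs.mean()), int(ys.mean()))      # centroid of the motion
    prev = gray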

At Least One Outcome Seems Possible (1)

sehlat (180760) | about 7 months ago | (#45902879)

"They've got me aimed at a computer center. Why don't I just fly a little farther and hit a maternity ward?"

Knuth Vol 5 (1)

Anonymous Coward | about 7 months ago | (#45902883)

Semi Fatal Algorithms

if both sides have these robots (2)

maliqua (1316471) | about 7 months ago | (#45902889)

and they can just fight among themselves, it could be televised live for everyone, and war would suddenly become wholesome entertainment

False Positives (3, Interesting)

Nyder (754090) | about 7 months ago | (#45902953)

I'm sure the DMCA has shown you what automated systems can do.

Turnabout is fair play (2)

TheloniousToady (3343045) | about 7 months ago | (#45902963)

We developers have been killing software bugs for decades. Why can't software bugs start killing us?

They're here (1)

clam666 (1178429) | about 7 months ago | (#45902989)

It's not so much killer terminators in the classic sense. What's being done is a trifecta of air/sea/land operations: autonomous drones across the three game surfaces, with the long-term benefit of eliminating the massive expense of physically present (even remotely operating) wetware. Being able to classify, analyze, and respond accordingly allows continuous intelligence and strike operations to be maintained 24/7 in any theater we need to be in.

You want to be able to move your troops through an area by sending a signal that pauses active guard while you traverse it, based on a pause code updated constantly by satellite, so there's no more "Thunder!" "Flash!" style countersigning; you just want to click and go, and enable it again when you've cleared the area. You want to be able to throw a drone up in the air to target enemies when you're pinned down. You want a small sniper patrolling an area constantly while you're stuck in a forward position.

Classification of enemies isn't difficult when you define it as anyone who shouldn't be there. It's the benefit of a minefield without the mines that blow up children 10 years later. Classification is much better when you are distinguishing vehicles vs. people vs. children vs. animals, and that's not hard to do; it is already being done. Can casualties occur? Civilian ones? Sure. The goal is to eliminate as much civilian casualty and infrastructure destruction as possible. That's not good war. Good war is eliminating the ability of the bad guys to make war against you. It's a lot easier to deny more and more territory to the bad guys when drones are mixed with special forces who can move in and out of any territory without being ripped to shreds.

Who wants to deal with all the political backlash of dead soldiers or civilians, when you can remotely guide assets for specific missions and switch to autonomous target elimination, intelligence gathering, or force protection on a whim? A war with 5,000 of our soldiers against an entire nation's army, or insurgents in street-to-street fighting, won because we had intelligent technology and suffered only a dozen casualties, is better for us and for them. It costs a lot less to tell the citizenry "don't be in this location" while it's cleared, and to box off a known civilian area so it isn't touched. It costs a lot less to granulate the destruction down to the actual baddies, who are tracked by constantly streaming intelligence assets that work all day and night while spitting out a report in Alabama.

Military engagements involving the first world are mostly won or lost politically, not militarily. Eliminating soldier deaths and civilian deaths gives you more money, time, and ability to win a conflict politically, rather than spending those resources handling backlash. In an increasingly networked battlefield, these technological abilities are a godsend for keeping "good guys" alive and able to perform effectively. Having a much better idea of whether an area is clear, and being able to sleep at night rather than burning out troops with psychological stressors, is a nice thing.
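
The pause-code part, at least, is off-the-shelf crypto. A hedged sketch (key handling, timing window, and the radio layer are all invented for the illustration) of a rolling, TOTP-style hold-fire code:

import hmac, hashlib, struct, time

SHARED_KEY = b"distributed-by-satellite"   # hypothetical key material
STEP_S = 30                                # the code rolls every 30 seconds

def pause_code(t=None):
    counter = int((time.time() if t is None else t) // STEP_S)
    mac = hmac.new(SHARED_KEY, struct.pack(">Q", counter), hashlib.sha256)
    return mac.hexdigest()[:8]

def hold_fire(received_code):
    # Accept the current or previous step to tolerate clock skew.
    now = time.time()
    return received_code in (pause_code(now), pause_code(now - STEP_S))

print(hold_fire(pause_code()))   # True: friendly traversal, the guard pauses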

It will fine (1)

Spiked_Three (626260) | about 7 months ago | (#45903001)

We ONLY need these weapons to defend ourselves from foreign attack and invasion .....

Or when protecting our 'strategic interests' become very important. For instance in order to protect Israel, a nation we can not live without.

Oh, and also in case any one pisses us off and does anything we do not like.

How I learned to stop worrying... (1)

DigitalAce9 (1876372) | about 7 months ago | (#45903021)

...and love the bomb. We cannot have a killer robot gap! http://www.youtube.com/watch?v=ybSzoLCCX-Y [youtube.com] Seriously... I wonder if this is how these discussions go in the Pentagon.

Google's No1 project (0)

Anonymous Coward | about 7 months ago | (#45903047)

You will not see self-driving cars on ORDINARY roads with ordinary traffic and pedestrians in your lifetime (even if, as you read this, you are still a child). So why is Google propagandising such nonsense so hard?

Meanwhile, where the sheeple are told NOT to look by the mainstream media, Google is investing in every MILITARY robotics company that shows any useful prospects. Google is dead-set on creating the hardware and software designs to be used by autonomous US military killing machines in the coming decades. Google hardware and software designs currently comprise EVERY major data collection facility run by the NSA, GCHQ and other intelligence allies. Google was originally set up specifically to create solutions for full surveillance projects -- especially the scalable storage, indexing, mining and searching of ALL the data the NSA and others collect. Google's recent obsession with automatic speech-to-text and text-to-text language translation was specifically aimed at foreign-language data sources gathered by the NSA.

The founders of Google are frequently guests of honour at the most extremist zionist events in Israel, and have never hidden their desire to see absolute US hegemony over the planet. And they see the US ability to exterminate or 'pacify' target nations with robot war machines as the only way for this goal to be achieved.

You see, it doesn't matter if your 'self-driving' tank 'accidentally' rolls over a bus full of 'sub-Humans', because the mainstream media in the USA will simply describe the victims as "MILITARY AGED MALES" and their "HUMAN SHIELDS", and the owners of Slashdot will promote stories pushing this evil point of view. What does matter is that not even one US soldier dies for every hundred thousand Humans exterminated in the target nation. Google believes that senior US politicians believe EVERY war can be 'sold' to the US sheeple providing there are no likely US deaths. Patriotism, after all, is just another word for racism.

Google's robot tanks are not designed to fight a REAL or fair war against opponents equal in any military sense. Google's robot tanks are designed to 'take' villages, towns and cities, or to exterminate so much life in those settlements that ordinary Human societal functions become impossible. The tanks will roll down streets defined by 'street view' intelligence gathering, or high quality US aerial intelligence at worst. The tanks will identify Human targets using the same algorithms Google uses to identify people in images across its various services. Forget 'FACE RECOGNITION'. At 'best', Google's algorithms will seek to identify "MILITARY AGED MALES " (male Humans between 8 and 80), and guide their auto-cannons to injure them as badly as possible, without an outright immediate kill.

US military training manuals STRESS that when destroying or taking large scale civilian settlements, you must cause as much long lasting suffering as possible, to 'sap' the morale of the targets and exhaust their ability to deal with the injured and dying.

Like Genghis Khan, Google thinks that enabling such unthinkable atrocities will act as the ultimate deterrent to all those who would consider resisting the will of the USA. Google's military intentions are now so open that even the thickest or most naive sheeple can no longer deny them. In this sense we see an analogue to the NSA revelations. A few years back, the owners of Google could assume most of you here would call anyone who accurately described the actions of the NSA a "tin foil hat wearing conspiracy idiot". Now even the most Obama-trusting of you readers has an accurate sense of just how far the NSA has gone. Even so, 95%+ of you are still stupid enough to fall for the 'self-driving car' guff -- which neatly distracts you over and over from the reality of Google's real focus.

Arguments that propelled the cold war... (0)

Anonymous Coward | about 7 months ago | (#45903053)

This statement shows a lack of fundamental understanding of the Cold War. The Cold War was strictly about geopolitics: about global dominance through the spread of ideals, but also Russia's inherent need to dominate all of its immediate neighbors to ensure its security, and America's need to dominate the world's oceans and global trade to ensure its own. These two things didn't mesh; hence the Cold War.

Further, it shows a lack of understanding about the development of autonomous systems, which again is exactly due to geopolitics. America is the world's only superpower. It stays that way by dominating trade, having an overwhelming military, and never allowing any regional power to grow into a superpower. Part of that requires maintaining an active and capable military. Simultaneously, America faces domestically a populace that is generally isolationist and does not want to be involved in foreign issues; they do not want boots on the ground unless there is some real justification for it, and only for a very limited time, and they want no civilian casualties, something that is nearly impossible to achieve. Autonomous systems fill the niche well: America can enforce its power abroad, its populace becomes apathetic as there are few to no boots on the ground, and an autonomous system is far more likely to reduce the chance of collateral damage in a strike. This article addresses none of these factors, and thus misses the point entirely.

Like bear traps?! (1)

baker_tony (621742) | about 7 months ago | (#45903079)

"Weapons Systems That Kill According To Algorithms Are Coming."
Like bear traps?!
