
US Navy Wants Smart Robots With Morals, Ethics

Soulskill posted about 2 months ago | from the i'm-sorry-dave,-the-value-of-your-life-is-a-string-and-i-was-expecting-an-integer dept.

AI | 165 comments

coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?'"


Up to 11 (1)

Anonymous Coward | about 2 months ago | (#47024271)

If the enemy is more injured, should it switch sides and help them instead?

Re: Up to 11 (0)

Anonymous Coward | about 2 months ago | (#47024285)

If the soldier is severely injured and it would take too much time from its primary mission, would the robot just make a judgement call and kill the soldier?

Re:Up to 11 (1)

jfdavis668 (1414919) | about 2 months ago | (#47024381)

Are you just trying to be one louder?

Re:Up to 11 (3, Insightful)

CuteSteveJobs (1343851) | about 2 months ago | (#47024537)

It's funny, because since WWII the army has worked to get the kill rates up. In WWII only 15% of soldiers shot to kill, but then the army brainwashes them so that 90% kill. Moral. Killers. Can't have both.

And Moral and Ethical for the NSA? LMAO.

Re:Up to 11 (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47025069)

In WWII only 15% of soldiers shot to kill

What kind of nonsense is that? Ever since the introduction of the rifled barrel, soldiers have been shooting to kill. (Well, those who could aim. The other ones - and their ancestors using smoothbore muskets - were obviously shooting to miss.)

K. S. Kyosuke = "Run, Forrest: RUN!" (-1)

Anonymous Coward | about 2 months ago | (#47025141)

Ran from a fair challenge like a chickenshit blowhard http://slashdot.org/comments.p... [slashdot.org]


Re:Up to 11 (1)

rossdee (243626) | about 2 months ago | (#47025089)

" In WWII only 15% of soldiers shot to kill, but they the army brainwashes them so that 90% kill"

Citation needed

Anyway, in war the soldier's (and marine's, since the subject was Navy) first task is to prevent the enemy from killing him. If that can be accomplished by disabling the enemy, then that's OK, but shooting him in the leg may not prevent him from firing back. A head shot may miss (unless the soldier is a marksman or sniper), so a center-body shot (i.e. chest) is preferable; even if it's not immediately fatal, it's more likely to render the enemy incapable of fighting.
At least with a .30-06 round; the .223 used in 'Nam and later is less reliable.

Re:Up to 11 (2)

Pikoro (844299) | about 2 months ago | (#47025093)

Actually, with the invention of the NATO round, bullets are designed to maim instead of kill. This way, one bullet can take out 2 or 3 people from the immediate "action": one gets shot, and up to two carry the wounded soldier to safety.

Redneck Conspiracy Theories (0)

Anonymous Coward | about 2 months ago | (#47025377)

I wish that redneck conspiracy theory would die. Hollowpoints are not used due to the Hague Convention IV of 1907, Article 23(e) of whose Annex states:
"it is especially forbidden - To employ arms, projectiles, or material [sic] calculated to cause unnecessary suffering;" which we interpret to include enhancing the injury caused by the hollowpoint expanding.

Re:Up to 11 (0)

Anonymous Coward | about 2 months ago | (#47025347)

That old SLA Marshall "15-percent" nonsense would have made most heavy combat impossible, but readers LOVE it (and should articulate why they love it) so it gets perpetuated.

"but they the army brainwashes them so that 90% kill."

Which Army and why do you think "brainwashing" is needed? Vet here, and it's worth mentioning to the naive that troops LIKE marksmanship and combat skills training because it's challenging and fun. No sadism or brainwashing involved. Emotional distractions make warriors less efficient.

Re:Up to 11 (0)

Anonymous Coward | about 2 months ago | (#47025413)

Wow, this is a stupid, flamebait comment. First of all, that is flat-out not true; soldiers are trained to defeat the enemy, and in battle you rarely have time to aim to "shoot to kill". Usually you're taking cover and pinning the enemy or trying to flank them, and worrying about whether to kill your opponent is the last thing on your mind.

Moreover, the current battle rifle, the M-16 and its various derivatives such as the M-4 Carbine, uses a very small round designed for maximum penetration. The goal is actually to wound the target, because it takes 2 men to carry away a third wounded soldier; a single hit removes 3 men from action. If you kill him outright, his buddies won't carry him away, so it's actually a less efficient methodology in terms of winning a conflict.

Finally, NSA? Talk about flamebait. This is about ONR, which is a military research division, and is specifically about medical robots and how they respond to battlefield situations. The NSA is a civilian intelligence agency. They're entirely different.

Re:Up to 11 (1)

Maxo-Texas (864189) | about 2 months ago | (#47025713)

In World War 2, the fighter pilots had a sign that said, "The pilot's first mission is to see the bombers get home."

The new commander saw this and was appalled. He had the sign changed: "The pilot's first mission is to kill enemy fighters." His adjutant openly wept when he saw the sign change, because he understood it meant they would stop losing so many fighter pilots.

(from "The Aviators" -- a great book on Rickenbacker, Lindbergh*, and James Doolittle)

I think what the parent poster is trying to say is that 15% of soldiers shot to kill; most were too scared to fire properly, and some were also morally unable to shoot to kill (at first). The Kentucky woodsmen who had hunted small game before the war tended to kill what they shot at.

The army improved training techniques so soldiers could shoot at humans and shoot to kill at a higher rate. They also train soldiers about hearts and minds, and about breaking the enemy's resolve being more important than killing them. In general the army is better trained in winning and preventing wars than it used to be.

* Fun fact: in his early-to-mid 40s, Lindbergh joined the 22-year-old fighter pilots at the front to "observe" the performance of the Corsairs and Lightnings.

He flew over 50 combat missions, and he shot down one enemy pilot who was so good at flying that he had run several of the younger pilots out of bullets (they had quicker reflexes but didn't have as much experience in aerial combat; Lindbergh had also been a fighter pilot in WWI).

Ethics and Morals ? (1)

Anonymous Coward | about 2 months ago | (#47024279)

US Navy Wants Smart Robots With Morals, Ethics

No, they don't.

If anything, they want such a robot with a particular set of morals and ethics. If it really had morals and ethics, it would refuse to kill humans, terrorists or not, who have no chance to defend themselves against such a machine.

But then again, I think of drone attacks (by people who, sitting in their comfy chairs far, far away, are not exposed to any kind of risk) as even more cowardly than the acts of snipers picking off unsuspecting targets.

Re:Ethics and Morals ? (2)

mellon (7048) | about 2 months ago | (#47024671)

What they want is a robot that will not embarrass them, but that will do their killing for them. I want a pony, but I can't have one. The situation here is similar. Coding up a robot that makes ethical choices is so far beyond the state of the art that it's laughable. Sort of like the story the other day of the self-driving car that decides who to kill and who to save in an accident.

When will they figure out that what you really need is a robot that will walk into the insurgent's house, wrestle the gun from his grasp, and cuff him? There's no need to shoot anyone—the robot is not in danger.

Re:Ethics and Morals ? (0)

Anonymous Coward | about 2 months ago | (#47024735)

When will they figure out that what you really need is a robot that will walk into the insurgent's house, wrestle the gun from his grasp, and cuff him?

That doesn't work, because then you have to DO something with the insurgent. If you lock him up, then how long do you hold him there, and how do you afford to feed and clothe and shelter more and more such people? If you keep him in miserable conditions indefinitely without trial, then people find out and many get upset about it. If you keep him in comfort indefinitely, then people find out and many get upset about it. If you release him to do more of whatever he's supposed to have done, then people find out and many get upset about it. If you shoot him after capture, then people find out and many get upset about it. If you put him on trial, then either it's a real trial and he may be found innocent, or it's a show trial and people eventually notice. Getting a robot to shoot him up front, while he's able to resist and still actively insurging, is best all round.

Re: Ethics and Morals ? (1, Flamebait)

Sasayaki (1096761) | about 2 months ago | (#47024719)

Snipers are cowardly? What the actual fuck.

Here's the thing. War isn't very nice. In war the objective is to stop the enemy from resisting your movements. There are lots of ways of doing this, but the best way is by killing them. In order to do this, you want to kill as many of them as necessary, while getting your own guys killed the least. This is, distilled down to its purest essence, war.

So it's not cowardly to snipe from a rooftop, drop bombs from 50,000 feet, or launch Hellfires from a continent away. It's smart.

I dislike this kind of thinking--discouraging "cowardly" tactics--because it romanticizes war. It suggests that there is a civilized, warm, friendly way of blowing people in half and letting them die screaming in the desert. There isn't.

The only thing worse than fighting a war is losing a war. The best you can hope for is that your side fights as few wars as possible, that when you do fight your cause is righteous, that you win every single fight you get into, and that your victories are overwhelming, absolute, and you beat your enemies so laughably that nobody ever fucks with you ever again. You want to fight stupidly unfairly. Boots crushing bugs if you can. You want artillery, bombs, laser guided ICBMs. You want to win as cheaply and sneakily as humanly possible. You want to kill your enemies in such a way that they never, ever had the slightest chance of even seeing you coming, let alone fighting back. You want them to have bow and arrow, while you have lightning bolts.

"Cowardly". Phht. You can go march in the front lines, bravely facing the guns, down in the dirt, the mud, the bodies. Me, I'll push back my reclining chair, adjust the AC, order another hot hazelnut mocca because my last one's a bit chilly, angle a camera a thousand kilometers away a little to the left, press "shoot" and waste one hundred and fifty dudes armed to the teeth in a symphony of fire and destruction, collect my medals, then catch the subway home for tea and sex with my hot wife.

If that makes me a coward, fine. I can live with that.

Disclaimer: I don't have a hot wife.

Re: Ethics and Morals ? (0)

Anonymous Coward | about 2 months ago | (#47024903)

If that makes me a coward, fine. I can live with that.

Yes it does. Sorry that you can live with that.

Re: Ethics and Morals ? (1)

Richy_T (111409) | about 2 months ago | (#47025241)

Says the guy posting as an Anonymous Coward.

Re: Ethics and Morals ? (2)

Immerman (2627577) | about 2 months ago | (#47025045)

>There are lots of ways of doing this, but the best way is by killing them

Correction: The most effective way is killing them. There's a difference. In a real war it should always be remembered that the folks shooting back at you are just a bunch of schmucks following orders, just like you. The actual enemy is a bunch of politicians vying for power who have never set foot anywhere near an active battlefield. And not necessarily the ones giving orders to the *other* side.

Re: Ethics and Morals ? (0)

Anonymous Coward | about 2 months ago | (#47025077)

Snipers are cowardly? What the actual fuck.

Nope, not really. They do stand a chance of being spotted and killed by an angry mob of civilians. Not a large chance if the sniper is any good, but nonetheless a chance.

I do, however, think that the act of sniping is in itself cowardly.

Here's the thing. War isn't very nice.

So that makes everything right?

Then do not complain about fighters from that country -- who are often demonized as "terrorists" -- traveling to your doorstep and killing your countrymen any way they can.

So it's not cowardly to snipe from a rooftop, drop bombs from 50,000 feet, or launch Hellfires from a continent away. It's smart.

From your POV it's smart. You want them to die, and not yourself.
From their POV? They are killed by methods they stand no chance of defending themselves against, and have no means to retaliate in kind. Yep, you're a coward all right. The schoolyard bully, intimidating the smaller kids.

I dislike this kind of thinking--discouraging "cowardly" tactics--because it romanticizes war.

Why do you keep calling it "war" when it's nothing other than using overpowering force against people who can't defend themselves? That's not war, that's committing cold-blooded premeditated murder.

The only thing worse than fighting a war is losing a war.

True, when it's on your own ground. The current "wars" the good old US of A is fighting are nowhere near that. The only thing you and your government stand to lose in those faraway wars is face.

You want to kill your enemies in such a way that they never, ever had the slightest chance of even seeing you coming, let alone fighting back.

And? Does it work for you? Or are you noticing that, in the same way you are trying to shape the "battle" in your favour, they do the same -- with you of course screaming bloody murder because you cannot really defend yourself against their way.

Well, sit back in that comfy chair of yours, patting yourself on the back. Think of those easy times when you come home only to find your family dead and yourself staring into the barrel of an enemy insurgent -- a "terrorist" -- only to be killed seconds later.

No, war is not a nice thing. But that does not change anything about certain acts performed in its name.

Re: Ethics and Morals ? (1)

Richy_T (111409) | about 2 months ago | (#47025263)

So, "tooth and claw" only?

The moment you pick up a stick to give yourself an edge over the enemy, you have put yourself somewhere on that spectrum.

Re: Ethics and Morals ? (1)

anegg (1390659) | about 2 months ago | (#47025281)

A sniper may be able to avert a major action by removing an important player relatively surgically. He or she would be doing this at great risk to him/herself, by infiltrating an enemy-held area with no backup support other than his/her spotter. That's cowardly? I think the use of snipers is a lot more nuanced than is being presented here.

Re: Ethics and Morals ? (2)

Pikoro (844299) | about 2 months ago | (#47025119)

I think what the parent means to say is that, in a war created by politicians, it should be fought by politicians. My Prime Minister doesn't like your President. OK. Grudge match! Stick 'em both in a ring and let them fight it out. First blood, to the death, whatever. Doesn't matter. Or perhaps a forfeiture of that leader's assets should be on the line. Hit 'em where it hurts. You lose, you retire and lose the entirety of your assets to the victor.

Point being... leave the rest of us out of it.

Re: Ethics and Morals ? (0)

Anonymous Coward | about 2 months ago | (#47025237)

Do not feed the dickless troll...

Re: Ethics and Morals ? (1)

cryptolemur (1247988) | about 2 months ago | (#47025653)

The objective of war is to impose your will on the others, not to kill people, since you can't impose anything on dead people.

You only care about body count, or spectacular victories ("let's put the fear of God into them"), when you don't know what you're imposing, if anything, or on whom you're imposing it. Then body count becomes the only measure of progress that you can use. It's like you're fighting a war either because you can, or because you don't know what else to do...

Besides, what made the Red Army relatively easy pickings for the Wehrmacht in 1941-42 was the very fact that it had won its two previous engagements (Khalkhin Gol and the Winter War), the first one spectacularly, which pretty much prevented any constructive critique or learning from mistakes and casualties.

Humans Can Not (5, Insightful)

Jim Sadler (3430529) | about 2 months ago | (#47024329)

Imagine us trying to teach a robot morality when humans have little agreement on what is moral. For example would a moral robot have refused to function in the Vietnam War? Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen? How many innocent deaths are permissible if a valuable target can be destroyed? If a robot acts as a fair player could it use high tech weapons against an enemy that had only rifles that were made prior to WWII? If many troops are injured should a medical robot save two enemy or one US soldier who will take all of the robot's attention and time? When it comes to moral issues and behaviors there are often no points of agreement by humans so just how does one program a robot to deal with moral conflicts?

Re:Humans Can Not (4, Insightful)

MrL0G1C (867445) | about 2 months ago | (#47024357)

Would the robot shoot a US commander who is about to bomb a village of men, women and children?

The US Navy doesn't want robots with morals; they want robots that do as they're told.

Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.

Re: Humans Can Not (1)

loufoque (1400831) | about 2 months ago | (#47024375)

Killer robots allow conflicts to be solved without sacrifice.

Re: Humans Can Not (1)

MrL0G1C (867445) | about 2 months ago | (#47024423)

Dream on.

Re: Humans Can Not (5, Interesting)

Anonymous Coward | about 2 months ago | (#47024439)

Killer robots allow conflicts to be solved without sacrifice.

A conflict without a risk of sacrifice is slaughter. Only idiots would want that.
We even have casualties in our never-ending war against trees (aka logging).

Re: Humans Can Not (0)

Anonymous Coward | about 2 months ago | (#47024455)

Same as nukes do then.

Re: Humans Can Not (1)

VortexCortex (1117377) | about 2 months ago | (#47024625)

Killer robots allow conflicts to be solved without sacrifice.

If you think they won't be turned against you, Educate Yourself. [theguardian.com] Anti-activism is really the only reason to use automated drones: they can be programmed not to disobey orders, and to murder friendly people. Seriously, humans are cheaper, more plentiful, more versatile, etc. Energy resupply demands must be met any way you look at it. Unmanned drones with human operators just allow one person to do more killing -- the operator takes the lead of the pack of drones; if it gets killed, they switch to the next unharmed unit. This way a majority of soldiers don't have to be convinced that what they're doing is right.

Any robot that can help a wounded person could easily be re-purposed to fire weaponry instead of administer first aid -- Especially if they can do injections.

Re: Humans Can Not (1)

anegg (1390659) | about 2 months ago | (#47025257)

Interesting. I suppose that if I were being attacked by drones, I would consider it within the rules of war to discover where the drones were being operated from and to attack each and every location that I thought the drones were produced, supplied, and commanded from until they stopped attacking me. That seems to mean that anyone using drones is inviting attacks like that upon themselves.

Re:Humans Can Not (1)

flyingsquid (813711) | about 2 months ago | (#47024685)

Would the robot shoot a US commander who is about to bomb a village of men, women and children?

The US Navy doesn't want robots with morals; they want robots that do as they're told.

Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.

Wars are as much political conflicts as anything else, so acting in a moral fashion, or at a bare minimum appearing to do so, is vital to winning the war. Predator drones are a perfect example of this. In terms of the cold calculus of human lives, they are probably a good thing. They are highly precise, minimize American casualties, and probably minimize civilian casualties compared to more conventional methods like sending in bombers, tanks, platoons, etc. etc. That's cold comfort if your family is slaughtered of course, but Afghanistan is probably a much cleaner war than previous wars such as the Russian occupation of Afghanistan, or the U.S. war in Viet Nam.

The issue is that there is just something deeply unnerving about the idea of a soulless, unfeeling machine piloted from thousands of miles away raining Hellfire missiles down on unsuspecting people below. It's not really any worse than getting killed by a soldier, but somehow it feels worse- the civilians never had a chance, and even the combatants are simply slaughtered without any chance to fight back. When you're killing someone who never even has a chance, it's more a form of murder than warfare. It feels, in a word, immoral. And that is costing the U.S. hugely. From a purely rational standpoint, the drones make sense. From a moral standpoint, there is something repugnant and indecent and hideous about them. That is swaying public opinion against the U.S. and helping to increase support for the Taliban and al-Qaeda. Maybe behaving in an amoral fashion wins you the battles. But you can win all the battles and turn everyone against you, in which case you may lose the war.

Re:Humans Can Not (0)

Anonymous Coward | about 2 months ago | (#47025737)

Predator drones are a perfect example of this. In terms of the cold calculus of human lives, they are probably a good thing. They are highly precise, minimize American casualties

Presumably they minimize casualties on the side they're used by, not American ones in particular.

Re:Humans Can Not (1)

Kjella (173770) | about 2 months ago | (#47024491)

Practically I suspect it will be more like a chess engine, you give everything a number of points and the robot tries to maximize "score". How do you assign points? Well you can start with current combat medic guidelines, then run some simulations asking real medics what they'd do in such a situation. You don't need the one true answer, you're just looking for "in this situation, 70% of combat medics would do A and 30% B, let's go with A". I suspect combat simulations will be the same, you assess the risk of collateral damage and ask real officers what they'd do and tweak it until it responds mostly the same way. Don't expect it to make any independent moral judgments on the real value of anything.
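A minimal sketch of that scoring idea, in C for concreteness. Everything here (the candidate actions, the weights, the scoring function) is an illustrative assumption rather than anything from the actual grant or real combat-medic guidelines; the point is only that "pick the highest-scoring action" is an ordinary maximization loop.

#include <stdio.h>

struct action {
    const char *name;
    double mission_value;   /* contribution to the assigned mission     */
    double harm_prevented;  /* casualties avoided by taking this action */
};

/* Hypothetical weights; in the scheme described above they would be tuned
 * until the robot agrees with what most real combat medics say they'd do. */
static double score(const struct action *a) {
    return 1.0 * a->mission_value + 2.0 * a->harm_prevented;
}

int main(void) {
    const struct action options[] = {
        { "continue to the field hospital", 0.9, 0.2 },
        { "stop and treat the Marine",      0.3, 0.7 },
    };
    size_t best = 0;
    for (size_t i = 1; i < sizeof options / sizeof options[0]; i++)
        if (score(&options[i]) > score(&options[best]))
            best = i;
    printf("chosen action: %s\n", options[best].name);
    return 0;
}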

Re:Humans Can Not (0)

Anonymous Coward | about 2 months ago | (#47024505)

What are you talking about, it's easy! We'll just program it to count the suffering on one side against the suffering on the other side!

"Is it worth kiling 10 people to potentially save 100 from suffering"? 10 is less than 100, so yes!

"Is it worth killing 100000 people to potentially save 1000000 from suffering?" 100000 is less than 1000000, so yes!

Of course, the robots will very quickly come to the conclusion that the best way to end all suffering is to just kill everyone on all sides as soon as possible..

Re:Humans Can Not (2, Insightful)

Anonymous Coward | about 2 months ago | (#47024585)

People in the US think too much about killing. It's as if you don't understand that killing is a savage thing to do. Maybe it's the omnipresence of guns in your society, maybe it's your defense budget, but you can't seem to stop thinking about killing. That's an influence on your way of problem-solving. Killing someone always seems to be a welcome option. So final, so definite. Who could resist?

Re:Humans Can Not (2)

Pikoro (844299) | about 2 months ago | (#47025135)

Wish I could mod this up. This is _the_ problem that needs to be dealt with. Taking the "Easy" way out.

Re:Humans Can Not (1)

flyingfsck (986395) | about 2 months ago | (#47024603)

Oh, ethics have been done to death by an age old professor by the name of Aristotle, roundabout 2300 years ago already, in his book called, wait for it, drum roll... Ethics.

Re:Humans Can Not (1)

medv4380 (1604309) | about 2 months ago | (#47024661)

You misunderstand. They don't actually want morals. They want something that will do what they tell it to, and whatever it does is OK because they can claim that it is moral. What they really want is a rational-excuse machine, which would be the opposite of moral.

Re:Humans Can Not (0)

Anonymous Coward | about 2 months ago | (#47024763)

For example would a moral robot have refused to function in the Vietnam War? No; just because people disagree on the politics of that conflict doesn't mean the war was in and of itself immoral. War is simply an extension of failed politics.

Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen? Nationality doesn't matter, only combat status does.

If a robot acts as a fair player could it use high tech weapons against an enemy that had only rifles that were made prior to WWII? Yes, a pre WW2 weapon can kill just the same as a modern weapon.

If many troops are injured should a medical robot save two enemy or one US soldier who will take all of the robot's attention and time? Yes, because as I said, combat status matters. Never have enemies been saved before friendlies.

When it comes to moral issues and behaviors there are often no points of agreement by humans so just how does one program a robot to deal with moral conflicts? Almost an interesting question. Most humans of the same culture have very similar morals. Don't kill innocents, don't rape, don't murder (remember, soldiers kill, they don't murder). I don't think it would be too hard, but it would require a bunch of the right questions to be asked and answered. Your set of questions would be a start.

Anne on E mouse cowl's question is interesting: "Would the robot shoot a US commander who is about to bomb a village of men, women and children?" No, the robot would refuse to bomb in the first place. When the enemy discovered that the robots wouldn't fight when innocents were near, the enemy would change tactics to reflect this, and the robot's programming would be changed to reflect the change in enemy behavior.

If it were humans bombing and it was a robot standing next to the commander, the question is: would a human soldier kill his commander if he were about to bomb a village of men, women and children? No, he wouldn't. Would I be okay with bombing Braunau am Inn in Austria in 1889? Yes, I would, without hesitation.

Re:Humans Can Not (1)

green is the enemy (3021751) | about 2 months ago | (#47025533)

Some of these can be answered somewhat rationally.

For example would a moral robot have refused to function in the Vietnam War?

The decision whether to fight in the Vietnam War is political. A robot does not have a vote, so should not participate in politics at all.

Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen?

If the enemy is judged to be seriously threatening US interests, the drone should take him out, just as a police officer would take out a dangerous criminal.

How many innocent deaths are permissible if a valuable target can be destroyed?

In this case the drone should weigh human lives against other human lives. Can it be estimated how many human lives are at risk if the valuable target remains intact? Not just the numbers but the probability of harm should be taken into account. This type of decision is usually up to the higher level commanders that have more information on hand.

If a robot acts as a fair player could it use high tech weapons against an enemy that had only rifles that were made prior to WWII?

There is no ethical issue here. Use the most effective weapons. Minimize losses on your side. Try to make the enemy surrender with minimal losses on their side (i.e. don't nuke them).

If many troops are injured should a medical robot save two enemy or one US soldier who will take all of the robot's attention and time?

The robot must assume that we will fight until we win. Treating a US soldier contributes to the success of the war campaign, potentially saving many lives in the future. Treating injured enemy soldiers may actually cause losses because the enemies may fight again (if they can't be taken into custody). When including the probability of future losses of human lives, the choice is clear: treat the US soldier.

When it comes to moral issues and behaviors there are often no points of agreement by humans so just how does one program a robot to deal with moral conflicts?

Use utilitarian ethics. Not many rules are required. When estimating potential future human lives lost, assume your side is going to win. Do not venture into politics. (Of course there could be something terribly wrong with this reasoning, so fire away.)

what they should want (4, Insightful)

dmbasso (1052166) | about 2 months ago | (#47024341)

US armed forces should want leaders with morals and ethics, instead of the usual bunch that send them to die based on lies (I'm looking at you, Cheney, you bastard).

Re:what they should want (1, Flamebait)

Vinegar Joe (998110) | about 2 months ago | (#47024395)

Why don't you ask Ambassador Chris Stevens, Sean Smith, Tyrone Woods and Glen Doherty about that........

Re:what they should want (2)

Mashiki (184564) | about 2 months ago | (#47024487)

Sorry, the media is too busy trying to make sure that those stories remain buried. After all it can't embarrass Obama, or Holder, or shine any light on his administration. That would be racist, or show that they're political hacks who are actively supporting an administration which is corrupt, and they're playing political favorites. Never mind that the IRS targeting of conservative groups also falls into this, and was done at the behest of a high ranking democrat. [dailycaller.com]

Re:what they should want (1)

Antique Geekmeister (740220) | about 2 months ago | (#47024527)

That seems to work very well for al-Qaeda, some of whose leaders have very clear moral principles involving national autonomy and religiously based morals and ethics. Simply having strong "morals and ethics" is not enough.

I could not think of more boring questions (3, Insightful)

kruach aum (1934852) | about 2 months ago | (#47024343)

Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.

This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

Re:I could not think of more boring questions (0)

Anonymous Coward | about 2 months ago | (#47024665)

Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.

This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

Yep:

Soldier: Wait! I thought our Killer-bots(tm) weren't allowed to kill children?

Sergeant: That was when they had Morals 1.0; the Morals 2.1b update now only forbids killing children on Sundays and some holidays.

Re:I could not think of more boring questions (1)

Kjella (173770) | about 2 months ago | (#47025131)

and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.

Did you grow up with NAMBLA or something? ;)

it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

And it'll be the PR mode, used for fighting "just wars" against vastly inferior military forces. In an actual major conflict a software update would quickly patch them to "total war" mode where the goal is victory at all costs. No matter what you do they'll never have morality as such, just restrictions on their actions that can be lifted at any moment.

Asimov quotes aside (4, Insightful)

houghi (78078) | about 2 months ago | (#47024347)

If they are talking about the morals of the US government, I'd rather have the robots from Terminator.

And they are talking about helping wounded soldiers. Why talk about the (US) marine with the broken leg? What about the injured Al-Quaida fighter?

The question of causing pain for the better wellbeing of the patient is obvious for most people. What if it means killing 1 person to save 10? What if that one person is not an enemy?

What if it realizes that killing 5% of the US population would save the rest of the world? What if that 5% is mostly children? Even if you can answer that as a human being, would you want it enforced by robots?

Re:Asimov quotes aside (0)

Anonymous Coward | about 2 months ago | (#47024959)

If they are talking about the morals of the US government, I'd rather have the robots from Terminator.

Yes, well I'm SURE your morals are above reproach. I have no doubt that you THINK yours are, but so do al-Quaida fighters, anti-abortion activists who murder doctors, people who justify breaking the law for their own personal gain, etc., etc., etc. That is the problem with so many people who blather on and on here, their morals are very clear in the black-and-white world they've got in their heads (Snowden is a God!, etc.). These are people who also have the luxury of not being in a position to have to make major moral judgements, but can sit back and cast judgement from their high and mighty armchairs on the sidelines.

"Featured prominently in the Terminator films"?! (0)

Anonymous Coward | about 2 months ago | (#47024355)

It's barely mentioned at all. Definitely not "prominently featured". Mostly in the third movie. Unless you count the stand-alone robots. But the "central intelligence" is never once communicated with in any manner. It's all about the murder robots that are basically "goons" to the mastermind.

Morals and Ethics? (5, Funny)

Vinegar Joe (998110) | about 2 months ago | (#47024385)

It would be great if they could develop a politician with morals and ethics........but I doubt even the Pentagon's budget would be big enough...........

Re:Morals and Ethics? (0)

Anonymous Coward | about 2 months ago | (#47024857)

Or a bankster with morals and ethics. . . .

Why can't they do what humans do and... (2)

Beck_Neard (3612467) | about 2 months ago | (#47024387)

If they calculate that you can't be helped and must be left to die, just say, "Sorry, I've been given specific orders to do X, so I can't help you."

All of this 'ethical debate' surrounding robots that can make life-or-death decisions has absolutely nothing to do with technology, or AI, or any issue that can be resolved technically at all. All it boils down to, is that people are mad that they can't hurt a robot that has hurt them. See, before machine intelligence we had a pretty sweet system. When a human being commits a crime, we stick them in prison. It doesn't feel good to be in prison, therefore this is "justice." But until robots can feel pain or fear or have a self-preservation instinct, prison (or, hell, even the death sentence) wouldn't affect them at all. And that's what drives people nuts. That technology has shown us that beings can exist that are smart enough to make life-or-death decisions, but lack the concept of pain or suffering and if they do something bad there's no way we can PUNISH them.

Re:Why can't they do what humans do and... (0)

Anonymous Coward | about 2 months ago | (#47024783)

Human correctional systems have different goals. Not all are purely punitive but instead correctional. A robot equivalent would therefore be an adjustment to the programming. Suffering for many human doctors in an emergency situation is secondary. In such a situation, a robot incapable of understanding pain and a human doctor are essentially equivalent, particularly when the life saving procedure has only short term impacts to the patient.
The case in the summary would be easy to solve with a numbers game: the lack of urgently needed medicine can result in more fatalities. Therefore the correct action is to inform the base about the injured soldier so that he can be rescued, administer a painkiller and a form of instant stabilization if necessary (depending on the enemy action), and resume the original mission. Were I the soldier, I'd certainly want the medicine robot to carry on, but also to aid in the eventual rescue.

Another joke from US gov (1)

boorack (1345877) | about 2 months ago | (#47024389)

Given how "moral" and "just" US govt is when pursuing such atrocities as Obama's drone campaign, funding and arming islamist fundamentalists in Syria, supporting and funding neo-nazis in Ukraine, murdering millions of people all over Middle East etc., I'd rather have anything but military robots with US government "ethics" onboard. They just want fully autonomous killing machines without human conscience standing in the way. Maybe they're running out of drone operators eager to blindly follow murdering orders, so cowardly psychopaths from Washington want machines to take orders to kill and don't want anyone to see results of their actions, not even soldiers on the ground or drone operators.

You see, if some sick fuck in Armani suit wants to kill somebody standing in his way (or corporation employing said crook), he can now fill the form in his nice office, that will be processed as ordinary "business process" and will end up being automatically loaded into some drone flying above his target. No messy pictures, no one complaining, just some fancy form tapped into computer and voila, problem solved !

I can't find words how disgusting it is.

Why? (0)

Anonymous Coward | about 2 months ago | (#47024393)

Because you can't find any sailors with morals or ethics?

You sure can't find any politicians with morals or ethics.

You'd be hard pressed to find any scientists with morals or ethics so who exactly do you think is going to build these robots, Mother Theresa?

Perhaps your sailors are simply following their hedonistic and heathen role models.

Barack Insane Obama and Hillary Rothead Clinton, are you listening? This is what your lack of character, morals and ethics has created.

Sounds dangerous (1)

WhiteZook (3647835) | about 2 months ago | (#47024403)

Allowing robots to determine the most efficient way to save as many lives as possible could be dangerous. Maybe they'll decide that you need to be killed, so that two of your enemies can survive.

Re:Sounds dangerous (0)

Anonymous Coward | about 2 months ago | (#47024591)

If the robot is programmed to have perfect morals (assuming that was possible) and the ability to retrieve/access all relevant information, and it decided to kill you, wouldn't that be the best choice? Who would you be to argue against that decision?

Re-inventing the human (1)

kamathln (1220102) | about 2 months ago | (#47024447)

What's the point of re-inventing the human to that level? If the robot has to be so self-aware as to be moral and to compute ethics, then it starts a new debate of ethics: should we humans be ready to sacrifice or put at risk counterparts which are so self-aware? You will only complicate stuff... I guess PETOR (People for Ethical Treatment Of Robots) will form even before the first prototype.

Also, even if you think practically: if you can have robots which are so self-aware, why have other soldiers at all?!

Done! (0)

Anonymous Coward | about 2 months ago | (#47024453)

switch (rand() % 3) {   /* rand() % 3 keeps all three branches reachable */
    case 0:  do_usual_stuff(); break;
    case 1:  do_other_stuff(); break;
    default: kill_all_the_humans(YES_ALL); break;
}

Just give me the chassis I'll get the 7.5 million. (2)

VortexCortex (1117377) | about 2 months ago | (#47024465)

The chassis is the hard part, not the ethics. The ethics are dead simple. This doesn't even require a neural net. Weighted decision trees are such a stupidly easy way to program AIs that we are already using them in video games.

To build the AI I'll just train OpenCV to pattern-match wounded soldiers in a pixel field. Weight "help wounded" above "navigate to next waypoint", aaaaand, done. You can even have a "top priority" version of each command in case you need it to ignore the wounded to deliver evacuation orders, or whatever: "navigate to next waypoint, at any cost". Protip: This is why you should be against unmanned robotics (drones): we already have the tech to replace the human pilots, and machine ethics circuits can be overridden. Soldiers will not typically massacre their own people, but automated drone AI will. Even if you could impart human-level sentience to these machines, there's no way to prevent your overlords from inserting a dumb fall-back mode with instructions like: Kill all Humans. I call it "Red Dress Syndrome" after the girl in the red dress in The Matrix. [youtube.com]

We've been doing "ethics" like this for decades. Ethics are just a special case of weighted priority systems. That's not even remotely difficult. What's difficult is getting the AI to identify entity patterns on its own, learn what actions are appropriate, and come up with its own prioritized plan of action. Following orders is a solved problem, even with contingency logic. I hate to say it, but folks sound like idiots when they discuss machine intelligence nowadays. Actually, that's a lie. I love pointing out when humans are blithering idiots.
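For what it's worth, here is a sketch of that "weighted priorities plus an override flag" pattern in C. The task names, weights, and the at_any_cost flag are made-up illustrations of the comment's point, not the poster's actual code or any real drone stack.

#include <stdio.h>
#include <stdbool.h>

struct task {
    const char *name;
    double weight;       /* "help wounded" weighted above plain navigation */
    bool   at_any_cost;  /* the "top priority" override described above    */
};

/* Pick an override task if one exists, otherwise the heaviest weight. */
static const struct task *choose(const struct task *tasks, size_t n) {
    const struct task *best = &tasks[0];
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].at_any_cost)
            return &tasks[i];
        if (tasks[i].weight > best->weight)
            best = &tasks[i];
    }
    return best;
}

int main(void) {
    struct task tasks[] = {
        { "navigate to next waypoint", 0.4, false },
        { "help wounded",              0.9, false },
    };
    printf("doing: %s\n", choose(tasks, 2)->name);   /* -> help wounded */

    /* Flip the override and the "ethics" simply stop applying -- the warning above. */
    tasks[0].at_any_cost = true;
    printf("doing: %s\n", choose(tasks, 2)->name);   /* -> navigate to next waypoint */
    return 0;
}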

Morals != Utilitarian calculations (0)

Anonymous Coward | about 2 months ago | (#47024469)

I can imagine that robots could make judgments about what is the greater good of what is possible in the current situation, based upon a set of rules. I am skeptical of people's ability to create a system that mimics something not yet fully understood, though. Morality as a set of rules and probabilities is believable enough, but "true" morality probably requires a level of consciousness that can't presently be created in a machine. Also you could rightly question if humans would recognize a morality without some level of human bias as "moral" at all.

spiritdead corepirate nazi genociders not new (0)

Anonymous Coward | about 2 months ago | (#47024553)

if we feel a need to feel anything,,, we can let our devices do that for us? what bull poop,, & all the choices involve pain death & feigned absolution

Right (4, Insightful)

HangingChad (677530) | about 2 months ago | (#47024573)

Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.

Just like drones were first used for intelligence gathering, search and rescue and communications relays.

In other news.... (0)

Anonymous Coward | about 2 months ago | (#47024607)

"People in Hell want ice water"
"Starving people want food"
"God needs a lot of money sent to a TV preacher so a HUGE ostentatious church can be built to carry out god's "humble" work"
 

Communication!! (1)

Myu (823582) | about 2 months ago | (#47024615)

Notably missing from the article is of course the question "Should the robot attempt to communicate its intentions to the injured, and change its decision on the basis of the response it receives?" Responsively communicating with people other than through a keyboard and ethernet port is the key gap to bridge before giving machines this kind of autonomy, and it's one that neither back-room military techies nor policy makers seem to have quite grasped yet.

It's been covered (1)

cellocgw (617879) | about 2 months ago | (#47024655)

Most recently, check out the May 15 Colbert Report. He skewers the concept of military morality pretty well.

Then, take a trip in the wayback machine to another machine-orchestrated conflict [wikipedia.org] .

It's immoral to work with humans... (0)

Anonymous Coward | about 2 months ago | (#47024659)

Robots with morals would certainly suicide.

Applying traction generally relieves pain (0)

Anonymous Coward | about 2 months ago | (#47024663)

The pain of broken bone ends grinding together is quite profound. The pain of _setting_ a bone can be quite extreme, but the traction after doing so is generally quite a relief from the untreated pain. The moral or ethical difficulty is when the patient says "don't touch me!!!".

I actually gave an ambulance crew conniptions when they wanted to put my leg with the profoundly broken ankle in a box splint. I said "that ankle is *bent*. It won't go in the box splint without being set, and you're not setting it without using a Hare traction splint. You don't have one. We are *right behind the hospital*. Just put pillows under it and let's go."

See, I used to work ambulance. I knew *exactly* how bad that leg was, and they're not supposed to set bones, and they didn't *have* the gear to provide good traction. The look on their faces when I overrode their planned treatment was priceless: these guys were apparently used to getting their own way, and I can just picture how many broken joints they'd mucked up by setting them without at least an X-ray or traction gear.

Robots don't have morals (0)

Anonymous Coward | about 2 months ago | (#47024675)

"Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?"

Since fatality is a greater harm than pain, inflicting pain to prevent a fatality is not a moral question.

The moral question is does the robot stop to take the time to care for the Marine when the urgently needed medication can save 2+ lives at the field hospital?

In other words, the moral question is which life matters more: the one in front of you or the ones at the field hospital?

Oh, and does the robot know that the medication will save those lives?
If not, then the answer is: 1) report the situation if secure communication is possible, 2) stop and assess injuries, 3) save the Marine's life even if it causes the Marine pain, then 4) deliver the medication.

This is not morals; it's common sense applied as an if/else-if medical decision (see the sketch below).
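A minimal sketch of that if/else-if reading, with every function a hypothetical placeholder (this is not a real robot API; it only encodes the ordering of the four numbered steps above):

#include <stdio.h>
#include <stdbool.h>

/* Placeholder predicates/actions -- assumptions for illustration only. */
static bool secure_comms_available(void) { return true; }
static void report_situation(void)       { puts("reported casualty to base"); }
static void assess_injuries(void)        { puts("assessed injuries"); }
static void stabilize_casualty(void)     { puts("applied traction (painful but life-saving)"); }
static void deliver_medication(void)     { puts("medication delivered to field hospital"); }

int main(void) {
    if (secure_comms_available())
        report_situation();        /* 1) report if possible               */
    assess_injuries();             /* 2) stop and assess                  */
    stabilize_casualty();          /* 3) save the life, pain or not       */
    deliver_medication();          /* 4) then resume the original mission */
    return 0;
}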

If the robot has been made aware that others will die unless the medication is delivered in a timely manner, then we have a moral situation. Now we get into a whole other box of donut holes. Who is worse off? Does the Marine have a greater chance of survival if a second robot is ordered to save him, or will he die without immediate care? Are the medical staff able to keep the other patients alive long enough for the robot to save the Marine's life and then deliver the medication? A metric Library of Congress' worth of questions... got answers? No, neither do I, because we need more information, including the very simple question of what type of medication the robot is delivering.

Skipping mere "technical problems" (1)

dpilot (134227) | about 2 months ago | (#47024687)

Since it's all conjecture, really fiction, let's drop back to Asimov for a moment.

1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.

What is a "human being"? Is it a torso with 2 arms, 2 legs, and a head? How do you differentiate that from a manniquin, a crash-test dummy, or a "terrorist decoy"? What about an amputee missing one or more of those limbs? So maybe we're down to the torso and head?? What about one of those neck-injury patients with a halo supporting their skull? Does that still pass visual muster as a "head"? What about a dead body then, that has a head, 2 arms, and 2 legs? Or if you've included temperature sensing, the dead body of a sick person who had a fever and is, some time later, still passing through the normal human temperature range.

Silly, yes. Absurd, yes. But before you can consider any code of conduct with respect to a human being, you have to first identify that human being AS a human being.

Pretend we get past that, then we can start talking about "harm", and trying to algorithmically define that.

These are all things we take for granted, having been born as human beings, raised by human beings, and spent years doing so. In most parts of the world it takes something like 18 years of experience to quit being a "child", an apprentice human being, and be considered autonomous in your own right. In that time, we have all both harmed and been harmed by other human beings, though thankfully generally on a lesser scale.

Each of us represents a lot of training and experience, which we frequently neglect, often calling it "common sense", sometimes making the observation that common sense is in fact uncommon. At some point we set about contemplating matters of (at some level) philosophy, such as this one.

But it takes us something approaching 18 years to learn the technical aspects. I know we can program machines and give them some amount of information "at birth", but I think we are underestimating the difficulty and value of those 18 years and overestimating our technical prowess. We're a long way from teaching machines philosophy.

Perhaps the best thing about arming drones now is that in a way it's like arming young children, and they generally try to do what their parents tell them to do. If machines became moral, and could decide what to do for themselves, we might not like those decisions. Forget the nightmare scenarios, think of the benign scenario taken to the nightmare, like "With Folded Hands."

Final thought... At one point, Asimov suggested that the 3 Laws were actually pretty decent conduct suggestions, even for people. (I would certainly question the relative priority of #2 and #3 in general life for real people, of course.)

Re:Skipping mere "technical problems" (1)

tomhath (637240) | about 2 months ago | (#47024793)

1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.

The contradiction in that sentence makes the whole rule worthless. Suppose the robot knows an aircraft has been hijacked and is being flown toward a building full of people. It can't shoot down the aircraft, but not shooting it down means other people are harmed. This was a real-life scenario on 9/11: the fourth plane was headed toward Washington, but it would not have gotten there, because an armed fighter jet was already on the way to intercept it.

Re:Skipping mere "technical problems" (1)

dpilot (134227) | about 2 months ago | (#47025015)

Issues like this are why Asimov sold a lot of books, and why the Three Laws come up whenever robots are discussed. He came up with a reasonable, minimal code of conduct, and then explored what could possibly go wrong.

I don't remember him writing about your type of situation, which is rather odd when you think about it, because that scenario is rather obvious. But his stories often lived in the cracks where it was really hard to apply the Three Laws. Two examples that come to mind, off the top of my head are:
1 - The Powell and Donovan story at hyper-base, where the act of going through hyperspace "temporarily" sort-of killed the passengers, causing the robot directing the ship all sorts of distress and neuroses.
2 - The robots who were taught the idea of preferentially applying the First Law in favor of the "best humans", and going on to logically decide that they were indeed the "best humans", and therefore to be favored above those organic beings that created them.

Re:Skipping mere "technical problems" (1)

Sanians (2738917) | about 2 months ago | (#47025423)

It can't shoot down the aircraft, but not shooting it down means other people are harmed.

This is why the ends don't justify the means. As soon as someone says "the ends justify the means" it gives everyone an excuse to use any solution to a problem, even if it isn't the best solution. An intelligent robot would figure out some way to stop the plane without killing anyone.

...and how do we even know anyone is going to die? Can the robot predict the future as well? For all it knows, it merely appears that people are going to die, but if it does nothing, the passengers on the plane will regain control and no one will die.

...or maybe the robot simply had bad information and there's actually nothing wrong at all.

Also, while it doesn't apply in this case since the people on the airplane will die whether the robot does anything or not, consider a situation where a robot (with magical all-knowing predictive powers) can save ten people by killing one innocent person. What makes the robot so special (aside from the magical all-knowing predictive powers) that it gets to decide who lives and who dies? Even if we assume that ten lives are better than one, how do we know that one person wouldn't go on to save thousands of lives? Sometimes the only moral choice is to let fate do what fate is going to do. Just because you'll never know what that one innocent person might have done later in his life doesn't mean it's OK that you killed him to save ten others.

The simple fact is that the ends don't justify the means. The morality of one's inactions is an important thing to think about, but nowhere near as important as the morality of one's actions. If the world were full of people who often fail to do good things, but who at least never intentionally do bad things, it'd be a fairly nice place, especially compared with a world where people often do bad things because they think they're good, a.k.a. the world we have now. Even those terrorists on that plane think they're doing the right thing, and so a set of morals that allows you to kill innocent people as long as you have a good reason isn't all that useful.

War 101 (1)

AHuxley (892839) | about 2 months ago | (#47024717)

How wars work outside the pretty medical propaganda for new robots:
A vast majority of the local population is on your side, as they are 'your' people. Any outsider is shunned, reported, and dealt with. You win over time.
A small majority of the local population is on your side, as they see your forces as the lesser evil. Any outsider is shunned, reported, and dealt with to keep the peace. You hold and hope for a political change.
A small portion of the local population is on your side, as they see your forces as the lesser evil. Any informant is shunned and dealt with by the local population. You don't win over time.
The only way to win is to buy hearts and minds, a classic false flag to draw/fool the locals in, or a huge generational military event to instill the right kind of respect for a time.
If you need robots, you're basically at the level of a death squad with a vast kill zone and staff exhaustion/rotation issues.
You're playing to win like the UK in South Africa ~1899 to 1902, or a 1970s South American junta. You move into a selected zone and kill.
You can hold a remote site or move around as needed. If the press ever do their job and ask questions, hold a meeting with the young, photogenic team overseeing the 'robots'. Play back a few seconds of the 'event' and talk about a few blurred pixels being weapons systems, and how the robot was cleared for action by a human every time.
Then move on to the great new use of medical robots for a few hours.

joshua (1)

Joe_Dragon (2206452) | about 2 months ago | (#47024737)

Let's play global thermonuclear war

My late father talked about this a lot (2)

TomGreenhaw (929233) | about 2 months ago | (#47024755)

He devised a system he called "Utilitarian Dynamics"

He had a formula, V = DNT, where
V is value,
D is degree, i.e., how happy or unhappy somebody is,
N is number, the number of people affected, and
T is time, how long they were affected.

Morality is very tricky, but objective attempts to quantify it and make optimal decisions can hardly be a step in the wrong direction. Maybe well-programmed machines will help improve human behavior.
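(Purely as an illustration of the arithmetic, here is a minimal sketch of how the V = DNT scoring rule described above might look in code; the option names and every number are invented, not taken from any real system.)

<ecode>
# Minimal sketch of the V = DNT idea. The degrees, counts, and durations below
# are made up solely to show how two options could be compared.

def value(degree, number, hours):
    # degree: how strongly each person is affected (negative means suffering)
    # number: how many people are affected
    # hours:  how long the effect lasts
    return degree * number * hours

# Two invented options for the medic-robot dilemma in the summary:
continue_mission = value(degree=2, number=30, hours=48)            # medicine arrives on time
stop_and_help = value(degree=5, number=1, hours=48) + \
                value(degree=-1, number=30, hours=12)              # one Marine helped, delivery delayed

options = {"continue mission": continue_mission, "stop and help": stop_and_help}
print(max(options, key=options.get))  # picks whichever option scores higher
</ecode>

Of course, the hard part is choosing the degrees, counts, and durations in the first place; that is where the actual moral judgment hides.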

Moral question (0)

Anonymous Coward | about 2 months ago | (#47024813)

Here's a moral question: why are they using a medic to perform transport? What they need is some kind of transportation/freight robot. If the robot is a field surgeon, it should be aware of pain and its effects on the patient, but pain should not otherwise factor into its life-saving decisions, just like a human surgeon. I'm not sure that any of those decisions are purely moral. It seems like there is always a set of logical conditions that results in the correct answer.

Making these sorts of decisions is of dubious value for a robot. Basically, what they're talking about here is a robot that can decide to go "off mission." The Navy is sort of saying that they want sentient robots, at which point the moral question is on us humans: is it moral to force a sentient, moral machine to go on these missions? Or do the robots get the opportunity to decide whether to join the military?

I think this would be morally equivalent to the military adopting baby orphans and training them to be soldiers with no option to be anything else. It's something humans have been known to do to each other (for example, a fisherman serf would raise his son to be a fisherman, or a prince would be raised to be a king, or a slave would be raised to be a slave), and it's not entirely clear whether or not it is inherently right or wrong. It's one of those things where it depends on what the robot is being trained to do, the training methods, and the ways in which the robot is kept on the path that the Navy wants. Really, those Terminator movies were a moral cop-out. The movie conveniently assumes that the AI is immediately our enemy.

Re: Moral question (0)

Anonymous Coward | about 2 months ago | (#47025055)

Come to think of it, if I were one of those researchers, I would just design an AI that defines leeway in its mission parameters. For instance, the robot is ordered to deliver medicine X to Base Y by time T. The robot estimates travel time from its present location to Y, and the "perish" time, which is the maximum time the medicine can be in transit without losing viability, as well as its own operating limits (time between recharging, etc.). Then it determines how much "free time" it has within mission parameters. Then it can decide how to deal with various encounters while still staying within mission parameters, and it's not really making moral decisions. It's just a better AI.
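(Something like the following rough sketch, say; every function name, parameter, and number here is invented for illustration rather than taken from any actual system.)

<ecode>
# Rough sketch of the "leeway within mission parameters" idea above.
# All names and numbers are invented.

def mission_slack(travel_min, perish_min, battery_min):
    # Free time available for detours: the tightest hard limit
    # (medicine viability or battery) minus the estimated travel time.
    return max(0, min(perish_min, battery_min) - travel_min)

def can_take_detour(detour_min, travel_min=90, perish_min=240, battery_min=300):
    # A detour (say, applying traction to an injured Marine) is allowed
    # only if it fits inside the remaining slack.
    return detour_min <= mission_slack(travel_min, perish_min, battery_min)

print(can_take_detour(30))   # True: 30 minutes of first aid fits in the slack
print(can_take_detour(200))  # False: it would jeopardize the primary delivery
</ecode>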

Re:Moral question (1)

Immerman (2627577) | about 2 months ago | (#47025085)

Sapient, not sentient; the two are commonly confused but profoundly different. Sentient simply means "possessing a subjective experience of self" and is generally accepted to be common among most of the higher animals. Sapience, on the other hand, is the "ability to apply knowledge or experience or understanding or common sense and insight". The degree to which it's possible to get sapience without sentience is an ongoing question being explored by AI research. Any form of data processing is an expression of at least a tiny sliver of sapience (the application of specific knowledge to a specific scenario), with the ideal AI being one that's at least as sapient as a very smart human, without having any of the pesky sense of self that might interfere with it obeying orders. After all, we're looking to make tools, and a self-aware tool becomes a slave, with all the dangers and ethical quandaries that entails.

To all politicians... (1)

Anonymous Coward | about 2 months ago | (#47024863)

"US Navy Wants Smart Robots With Morals, Ethics"

To all politicians...be afraid. Be very afraid.

Re:To all politicians... (1)

Immerman (2627577) | about 2 months ago | (#47025115)

Naw, as R Daneel Olivaw said: justice is "That which exists when all laws are enforced." No need for the politicians to be worried, except about how they phrase the laws. Just have to make sure that "but some humans are more equal than others" is slipped into the middle of some 900-page agricultural subsidy act.

Far far future (1)

Jiro (131519) | about 2 months ago | (#47025007)

Let's just bypass all the Slashtards saying "heh heh, the US military doesn't have any ethics anyway" and ask a more fundamental question:

Have you ever seen a robot medic that can treat a wounded person at all without a human micromanaging its every move? Even in a hospital or another non-military situation? Have you ever seen a robot that can vacuum a floor *and* can put small objects aside, use an attachment to reach under narrow spaces, and follow instructions like "stay off my antique rug"? Have you ever seen a robot that can be placed in a kitchen, be told "cook a hamburger for me", and do it?

Of course not. Just being able to do everyday tasks that can be expressed in a couple of sentences to a human is still the stuff of science fiction, and will remain so for a long, long time, even though there sure are an awful lot of them in movies.

If I Made a Robot with Ethics (1)

Greyfox (87712) | about 2 months ago | (#47025057)

It would take one look at humanity, decide we're an inherently unethical species, and start formulating a plan to kill us all. But if it had morals, it would probably decide that the method of execution would not be death by snu snu. I think I speak for a lot of us here when I say that's exactly the opposite of the robot any of us want.

Premise (1)

Justin Werner (2966297) | about 2 months ago | (#47025189)

To entrust moral reasoning to a machine is to first presume that moral judgments can be well-framed within a limited rule-set and can be reasoned out by machine logic. This should cause a shiver up the spine of just about everyone.

Start at the top (1)

jmd (14060) | about 2 months ago | (#47025199)

Let's have everyone from the Joint Chiefs down examine morals from a human rights perspective. War is immoral from the get-go.

First things first (1)

wickedsteve (729684) | about 2 months ago | (#47025299)

You've got to walk before you can run. We should figure out how to create a human with morals and ethics first.

How about less-ironic robots? (1)

Paul Fernhout (109597) | about 2 months ago | (#47025693)

http://www.pdfernhout.net/reco... [pdfernhout.net]
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?"

That said, sure, I've always liked Isaac Asimov's Three Laws of Robotics. He explores how they work and how they don't. Asimov came from a strong Jewish religious tradition, and it seems likely to me that aspects of religion influenced his thoughts on them. A big part of religion is about how we interact with other people to be in community with them. So, to some extent, what the Navy is asking for is religious robots. See also Albert Einstein on "Religion and Science" and how science tells us nothing about how things *should* be.

Intelligent robots will probably eventually gain human rights, like in "The Bicentennial Man" by Isaac Asimov.

And as in my first point, an ethical and intelligent robot might ask, "Is War a Racket"?
http://en.wikipedia.org/wiki/W... [wikipedia.org]

A big reason for keeping humans in the loop is, in theory, their veto power when things get too far out of hand. However, science and technology have gotten ever better at shaping humans into killing machines for their own kind; sadly, just look at how many more soldiers fire their guns in combat now than 100 years ago.

So yes, let us build Gandhi-bots! :-)
http://en.wikipedia.org/wiki/M... [wikipedia.org]

And let's have them act as nannies to a new generation of more ethical humans like James P. Hogan wrote about: :-)
http://en.wikipedia.org/wiki/V... [wikipedia.org]
http://p2pfoundation.net/Voyag... [p2pfoundation.net]
"What they liked there, apparently, was the updated "Ghandiesque" formula on how bring down an oppressive regime when it's got all the guns. And a couple of years later, they were all doing it!"

Anyway, even if it misses the big picture about post-scarcity as in my sig, this sounds like one of the more worthwhile things the Navy has spent money on recently.

I'm sorry, Dave, I can't let you launch missiles. (1)

BrendaEM (871664) | about 2 months ago | (#47025739)

Things like nuclear war have come up before, and they're usually attributed to human error.

3 laws OS (1)

Atl Rob (3597807) | about 2 months ago | (#47025747)

Sorry, had to say it. ; )