
Philosopher Patrick Lin On the Ethics of Military Robotics

timothy posted more than 2 years ago | from the computer-generated-fog-of-war dept.

Robotics 146

Runaway1956 writes "Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA's venture-capital arm. It's a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots. This story is very definitely not like Asimov's robotic laws! As fine a mind as Isaac Asimov had, his Robot stories seem a bit naive, in view of where we are headed with robotics."


I don't think Asimov was naive (1)

Anonymous Coward | more than 2 years ago | (#38416520)

I don't at all think that Asimov was naive! I think he was concerned about what robots could become and was trying to educate people about what was needed.

For example, look at too narrow a definition of "human", or the weakening of the laws in other cases, and the trouble those produced.

Re:I don't think Asimov was naive (3, Insightful)

TheLink (130905) | more than 2 years ago | (#38416830)

Asimov would be naive if he believed the laws could actually be implemented.

I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

You can program stuff to not shoot when some definable condition is met or not met. But when you need the AI to realize what is "human", "orders", "action/inaction" and "harm" (and judge relative harms), you're talking about a different thing completely.

You can train (and breed) humans and other animals to do what you want, but it's not like your orders are some non-negotiable mathematical law. Same will go for the really autonomous AIs. Anyone trying to get those to strictly follow some Law of Robotics is naive.

Even humans that intentionally try to will have difficulty following the 3 Laws. Through my inaction it is possible that some child in Africa will die, or perhaps not. How many would know or even care? FWIW most humans just do what everyone else around them is doing. Only a minority are good, and another minority are evil (yes good and evil are subjective but go look up milgram experiment and stanford prison experiment for what I mean - the good people are those who choose to not do evil even when under pressure).
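
To make the "definable condition" distinction above concrete, here is a minimal Python sketch; every name, code, and predicate in it is invented for illustration. The first check is the kind of mechanically decidable rule that can be programmed today, while the second is what Asimov's First Law would actually require, and none of its terms has a programmable definition.

    from dataclasses import dataclass

    FRIENDLY_CODES = {"ALPHA-7", "ALPHA-9"}  # hypothetical transponder codes

    @dataclass
    class Target:
        inside_no_fire_zone: bool
        transponder_code: str

    def weapons_hold(target: Target) -> bool:
        """Definable conditions: measurable predicates a machine can evaluate."""
        return target.inside_no_fire_zone or target.transponder_code in FRIENDLY_CODES

    def first_law_check(action) -> bool:
        """What the First Law would need: "human", "harm", and "inaction"
        are not definable conditions, so there is nothing mechanical to check."""
        raise NotImplementedError("no programmable definition of 'harm to a human'")

    # The first question is trivially decidable; the second is not.
    print(weapons_hold(Target(inside_no_fire_zone=True, transponder_code="ZULU-1")))  # True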

Re:I don't think Asimov was naive (0)

Anonymous Coward | more than 2 years ago | (#38417338)

I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

The point of Friendly AI is to program AIs to want to do what is beneficial to humans. The key is wanting. I'm perfectly capable of jumping off a tall building, but why would I, assuming I want to live? Same with a robot/AI. Why would it harm a human/humanity if it doesn't want to?

You could argue that an AI could engineer itself to remove its built-in desire to be friendly, but why would it want to? That's like saying that if Gandhi could have been enhanced, he would have chosen to become a sociopath with superhuman abilities.

Re:I don't think Asimov was naive (1)

colinrichardday (768814) | more than 2 years ago | (#38418334)

The point of Friendly AI is to program AIs to want to do what is beneficial to humans.

Beneficial to which humans? Using robots in war means that they might harm some humans (the enemy).

Re:I don't think Asimov was naive (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38417692)

I think people miss the whole point of his writing. It was all about unintended consequences. On the surface the laws seemed like a good idea, but they led to exactly the problems they were intended to prevent! It's like people saying Darth Vader was "a bad guy". Yes, he did bad things, and for most of his life was a bad guy, but he didn't start OR end that way. I'm not trying to say that on the whole his life was balanced, but if you are talking about Vader at the end, he was a good guy at that point.

Re:I don't think Asimov was naive (1)

foobsr (693224) | more than 2 years ago | (#38417830)

I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

Seems like the question is whether it is possible to implement an AI with a restricted 'free will', while it is still not clear whether humans have such a thing.

CC.

Re:I don't think Asimov was naive (1)

tragedy (27079) | more than 2 years ago | (#38418004)

TheLink wrote:

I claim that any entity capable of understanding the Asimov Laws AND _interpreting_ them to apply them in complex and diverse scenarios would also be capable of choosing not to follow them.

Of course, Asimov agreed with you on this. Hence the zeroth law [wikipedia.org] . Now, that "law" of robotics was still in the spirit of the other three laws, but involved robots choosing to violate the other laws. Asimov created the laws as a reasonably consistent guideline for ethical robotic behaviour. He did realize that it would be incredibly difficult if not impossible to actually implement something like that in real life. If you've actually read _I, Robot_, then you know that the book is pretty much a collection of fictional case studies of the three laws going wrong.

Re:I don't think Asimov was naive (1)

gl4ss (559668) | more than 2 years ago | (#38418164)

it's not much about education, they're a story device.

asimovs robots provided a nice setting for a bunch of stories. you see, the robots acted as actors which had pre-defined rules, the humans on the other hand had not. but several stories portrayed that those rules didn't matter if the robots lacked information about what their actions would lead to (like the one story about robots that formed a cult).

the stories in I, Robot are almost all detective stories of the sort where they're trying to find the motive. it's irrelevant if they were monks or machines.

asimovs positronic brain robots in his stories couldn't be constructed without them obeying the rules anyhow, they were a magic device.

now - drones have fuck all nothing to do with the sort of ai asimovs robots had. the ethics of military killer drones is about the same as cruise missiles anyhow.

Asimov naive? I don't think so. (5, Informative)

bungo (50628) | more than 2 years ago | (#38416588)

Isaac Asimov had, his Robot stories seem a bit naive

Are you sure you read the same Asimov Robot stories as everyone else? Asimov would set up his laws of robotics, and then go on to show how problems would occur by following those rules.

Remember when he added the 0th rule in one of his later books? Again it was because he was NOT naive and knew that the 3 rules were not enough.

Re:Asimov naive? I don't think so. (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38416652)

I think this Patrick Lin is a bit naive if he thinks that Asimov made the 3 rules as some kind of guideline for how to build robots.
The 3 rules were just a device to explore unintended consequences of these kinds of things.

Re:Asimov naive? I don't think so. (5, Interesting)

TheLink (130905) | more than 2 years ago | (#38416878)

I wonder if this anecdote is true (or based on a true incident involving Asimov):

While watching Clarke's 2001, it soon became obvious that Hal was going to be a killer. Asimov complained to a friend, "They're violating the Three Laws!" His friend said, "Why don't you smite them with a thunderbolt?"

Re:Asimov naive? I don't think so. (0)

Anonymous Coward | more than 2 years ago | (#38417130)

Actually, Patrick Lin did not mention Asimov at all. Please read the article first before commenting.

Well to be fair (1)

Sycraft-fu (314770) | more than 2 years ago | (#38417306)

Lots of geek types seem to take them as literal laws of robotics. I've seen people get all worked up because an autonomous military robot would "Violate the three laws of robotics." They liked the stories so much they decided that those laws are real.

Since they get bandied about like that all the time, I'm not surprised some journalist gets taken in by that.

Re:Well to be fair (1)

HiThere (15173) | more than 2 years ago | (#38418244)

It's quite reasonable to get worked up about violating those laws. That's like running high voltage wires without insulation. Even *with* insulation you get into lots of trouble if you aren't careful, but without it...

(I'm not really talking about transmission lines. I don't know whether those are insulated, or just kept separated. I'm thinking about inside electronic devices, like CRTs.)

If you'll note, on really old wiring, where the insulation wasn't that good (cloth wrapped around the wires), the wiring style *required* the wires to be run along separated channels, not double-jacketed as one sees with plastic insulation. Violating that kind of basic safety measure is worth getting worked up about.

Similarly, violating the basic safety rules WRT robots is also worth getting worked up about. It's not *really* dangerous (except for social consequences) today, but the day after tomorrow, well, if you violate the basic safety rules there may not be any human around to see it.

Now given how self destructive humans are, I can see reasonable arguments that this is a good outcome, but as a human it's not one I feel we should work towards.

Re:Asimov naive? I don't think so. (1)

FlyingBishop (1293238) | more than 2 years ago | (#38417452)

I don't think that is really an accurate description either. Rather, Asimov's view was that autonomous robots would be completely uncontrollable without something like the three laws, and that even with the three laws there was still considerable danger to the operators.

Asimov's view was that if you tell a robot it can kill people, it's going to figure out a way to twist that into an order to kill you. You don't fuck around with telling robots to hurt people, it's just too dangerous due to the unintended consequences. You need to make sure that the robot is starting from purely altruistic intentions. I don't think it was a simple plot device, but a very considered belief.

Re:Asimov naive? I don't think so. (1)

koan (80826) | more than 2 years ago | (#38416706)

I agree with you Asimov wasn't naive and David wasn't Christian.

Re:Asimov naive? I don't think so. (2)

marcroelofs (797176) | more than 2 years ago | (#38416884)

I always assumed the suggestion was that the laws were in place because robotics would have been forbidden otherwise, and every unit had to have these laws burned in stone in its BIOS, or it would be an illegal device.
Since there hasn't been any mention of forbidding robots yet, I doubt the 3 laws system will ever exist. Part of the CIA's exercise here seems to be to prevent the ethical discussion from halting the development. On the one hand I think it is good to see an Agency start a 'preemptive' discussion about a sensitive issue, otoh it makes you wonder what they're up to.

Re:Asimov naive? I don't think so. (0)

Anonymous Coward | more than 2 years ago | (#38417138)

To be fair, most robots these days don't have intelligent AI like that seen in the movies. Instead they have very specialised software and AI that enables them to do the tasks they were designed for and only those tasks. When advanced Artificial General Intelligence is finally developed and AI becomes capable of intelligent thought and reasoning, and perhaps even emotion, personality and free will, then I think society will start to demand limits like these real fast. Though if I know politics it will not be just three laws but a hundred, and they will conflict in a thousand ways.

Now that I think of it, it doesn't even have to be robots. A rogue AI capable of breaking and entering networked systems all over the world could be orders of magnitude more dangerous than even the most skilled human hackers right now.

Re:Asimov naive? I don't think so. (0)

Anonymous Coward | more than 2 years ago | (#38417350)

I always assumed the suggestion was that the laws were in place because robotics would have been forbidden otherwise, and every unit had to have these laws burned in stone in its BIOS, or it would be an illegal device.

IIRC, Asimov laid that out quite explicitly in the robot novels. The public feared intelligent robots, and demanded ironclad safeguards.

Incidentally, I've encountered at least two people over the last three decades or so who believed Asimov's laws were as real as Newton's, and it would be impossible to build a killer robot...;-)

rj

Re:Asimov naive? I don't think so. (1)

braindrainbahrain (874202) | more than 2 years ago | (#38417658)

As I recall, Asimov's 3 laws were a fundamental design feature of the robots' "positronic" brains, and he wrote that it would be a herculean effort to design a brain without them. This was just his way to eliminate the possibility that there were robots that would disobey the laws.

Isaac Asimov did not write those stories as a philosopher or ethicist. He was writing science fiction stories, or more precisely, detective stories thinly veneered as science fiction. In almost every story, robots were found to be (apparently) violating one of the laws, but after some clever detective work and logic, the characters found that the robots were following the laws after all.

Re:Asimov naive? I don't think so. (2)

HiThere (15173) | more than 2 years ago | (#38418328)

He did claim that they were a fundamental feature of the programs designed to operate the robot minds. And, yes, he said that modifying the program to be stable in the absence of those laws would be a herculean effort. But then so was writing the program in the first place.

There wasn't assumed to be anything in the way of a natural law that made it impossible; it was just that you'd almost need to start the design from scratch. And it represented years (or, depending on the story, decades) of work.

Think of trying to convert a Linux OS into a MSWind OS. There's no natural law that says you can't, but it's no simple job. The Wine project is still not nearly perfect, and it's been over a decade.

Now I'm thinking about it as a programmer. Asimov, however, was a biologist, primarily a biochemist. So his conception would probably be more like: "You've built an artificial life form that's a bird, and you want the next variant to be a mammal? Be reasonable! If you try, it will quickly die!" Close enough in general outline, but differing a lot in detail.

Re:Asimov naive? I don't think so. (1)

Broolucks (1978922) | more than 2 years ago | (#38416904)

The 0th rule is not enough either. The optimal course of action for humanity is arguably to wipe it out completely and to rebuild it from scratch in a controlled environment. I would fully expect a robot obeying the 0th rule to be genocidal.

Quite frankly, every single "rule" you can think of will have unintended consequences, except for the rule that explicitly states "you shall not act contrary to the expectations of brain-in-a-jar X, to which you shall make periodical reports", for a suitably chosen X. No robot in practice will follow a set of "rules of robotics", and we don't really need them anyhow: if we train robots to do what we want them to do, then obeying us is their "survival imperative", so to speak. To take a parallel with evolution, preserving our life at any cost rewards our genetic makeup, and we can breed pretty much as long as we find a mate. If we select robots like nature selected us we will have problems, but that's asinine.

Re:Asimov naive? I don't think so. (1)

wisnoskij (1206448) | more than 2 years ago | (#38417166)

No, they are completely naive.

The problems that occur in the robots following Asimov's Laws are completely ridiculous.
In the real world, programming does not work that way, and even if it did, every robot made would break down within a day's time when it encountered one of the rule paradoxes that characterize his novels.

Re:Asimov naive? I don't think so. (1)

HiThere (15173) | more than 2 years ago | (#38418388)

You misunderstand them.
The words were never intended to be the laws. They were intended to be an English translation of the basic principles. The actual implementations were intended to be balancing acts, and only violations of the first law even contained the potential of destroying the robot's mind. Even there some tradeoffs were allowed. Some damages were considered more important than others, etc. (Read "Runaround" again and think about it.)

I'll agree that the words, as stated, were not implementable. But that was, kind of, the point of the stories. (Read "Liar" and understand the importance that ignorance assumes in stabilizing the robots' minds.)

This doesn't mean that they are a good set of rules. But the importance of safety measures built in *IS* correct.

Re:Asimov naive? I don't think so. (1)

wisnoskij (1206448) | more than 2 years ago | (#38418522)

It was not the words themselves that I was talking about.
The basic principles (aka the English words) are pretty good, and probably what you would actually want IRL.
But the implementation is the stupidest most useless implementation I have ever seen. Robots operating with systems similar to the books could never operate in the real world.

Re:Asimov naive? I don't think so. (1)

korgitser (1809018) | more than 2 years ago | (#38417270)

the 3 rules were not enough.

No amount of rules will ever be enough. Rules are about modelling the world, but no model, being a simplification, will ever be able to represent the complexity of the world. No matter the quantity or quality of the rules, the robots will sooner or later arrive at a conflict, ambiguity or a plot device. This of course also happens just the same in ethics and philosophy. Thus it becomes that intelligence is as much about creating as it is about breaking the rules.

Now the interesting thing in (Asimov's) robots is the externalization of our rule-based actions. All the time we outsource our decisions to some rule system. Mostly it is just common sense to do so, because nobody can be expected to exhaust every single decision - otherwise we wouldn't get very far. But by doing so, we put our trust in the system, and when it takes a core dump, someone is fscked. Who is now responsible, the rulemaker or the rulefollower? By taking a mechanism of our humanity and giving it a life of its own, Asimov uses the robots to show us the Gordian knot of all civilization - the need for a justified trust in the other.

Re:Asimov naive? I don't think so. (2)

HiThere (15173) | more than 2 years ago | (#38418462)

As a social animal, it is NECESSARY that we "outsource" some of our decisions to a common-to-our-group rule-system. Every social animal does it, whether wasps, wolves, or humans. Humans are unique in the detailed amount of decisions that they outsource, and in the variance among groups in what the rules are.

In my opinion we (currently in the US) have too many rules, and they aren't a fair trade off. I don't think this is an evolutionarily stable situation. But that gets resolved over the long term (probably violently). There is still the requirement for a group-common rule system. Periods where there are significant conflicts in the rule systems are quite unpleasant to live through (if you manage to do so). But even in those periods you have everyone adhering to external rule systems. And the conflicts are usually over relatively small details. (Not necessarily unimportant or insignificant, but small, as in representing minor differences in premisses.)

Re:Asimov naive? I don't think so. (5, Insightful)

nine-times (778537) | more than 2 years ago | (#38417420)

Remember when he added the 0th rule in one of his later books? Again it was because he was NOT naive and knew that the 3 rules were not enough.

Maybe I'm crazy, but I never thought the 3 rules were even the point. I didn't even think it was about robots per se. Asimov's interest seemed to me to be more directed at the difficulties with systematizing morality into a set of logical rules. Robots are a handy symbolic tool for systemizing human behavior in thought experiments or fiction.

I guess I could be reading too much into things, but really arguing about the 3 rules seems to me a bit like arguing about the proper arrangement of dilithium crystals in the Star Trek universe-- it may be fun or interesting for the sake of a discussion, but it's kind of not that important.

Asimov was not naive (4, Insightful)

houghi (78078) | more than 2 years ago | (#38416628)

He also was not predicting anything. He wanted to tell stories, and for that reason he invented the laws of robotics. The fact that we use them for something else is not his fault.

If adding or removing laws fitted his storytelling, he would do so.

And they might seem naive, but who cares? They are stories, not predictions. And great stories at that. (Pity that they got raped in the movies)

Re:Asimov was not naive (0)

Anonymous Coward | more than 2 years ago | (#38416760)

I learned how to read with Asimov's Susan Calvin shorts. Great fiction!

That aside, war is all about killing. I am sure that robots and drones can do that well. Where do the ethics come into play? Ethics occur after the war. .02

Re:Asimov was not naive (1)

Anonymous Coward | more than 2 years ago | (#38417372)

You've missed why he was naive. He was naive because the three laws can't be implemented to begin with.

Asimov was not naive. (1)

Anonymous Coward | more than 2 years ago | (#38416632)

He knew exactly what uses humans could and would put robots to. That was the whole point of his laws: to show that it didn't have to be that way, and that we could build robots with safeguards built in. Even then he went to lengths to show that his laws were also not sufficient in every case to prevent harm.

Re:Asimov was not naive. (3, Insightful)

Broolucks (1978922) | more than 2 years ago | (#38417076)

He starts from the assumption that strong safeguards are needed, because robots will be like humans and will try to circumvent them. In practice, robots will circumvent their imperatives about as much as humans commit suicide - at the very worst - because obviously we will set things up so that only obedient units ever get to transmit their "genes" to the next robot generation, so to speak. Making robots with human-like minds and then giving them rules, as Asimov seems to suggest, is a recipe for disaster regardless of the rules you give them. It's good literature, but we're not heading that way.
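
A rough sketch of the selection idea described above, with toy numbers and names invented for illustration: if only the units that behave during testing get copied into the next "generation", the population's tendency to disobey drifts toward zero without any explicit laws being programmed in.

    import random

    def obeys_in_test(unit) -> bool:
        # A unit passes acceptance testing if it happens not to disobey this time.
        return random.random() > unit["disobedience"]

    population = [{"disobedience": random.random()} for _ in range(200)]

    for generation in range(20):
        survivors = [u for u in population if obeys_in_test(u)] or population
        # Only well-behaved units are copied forward, with a little mutation.
        population = [
            {"disobedience": min(1.0, max(0.0, s["disobedience"] + random.gauss(0, 0.02)))}
            for s in random.choices(survivors, k=200)
        ]

    # The average disobedience ends up well below the initial ~0.5.
    print(sum(u["disobedience"] for u in population) / len(population))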

Re:Asimov was not naive. (0)

Anonymous Coward | more than 2 years ago | (#38417316)

Bro, don't accuse someone of being a jackass if you are going to act like one yourself. This is a place for discussing various crimes committed by individuals, not for further spreading ignorance. Please make intellectual, thought-through posts only, instead of just copying what everyone else said but adding "sentence enhancers" to make it your own.

Asimov's rules are the best case scenario... (1)

Grog6 (85859) | more than 2 years ago | (#38417964)

I think Fred Saberhagen wrote the books about where we are headed, not ACC.

http://www.berserker.com/FredsBerserkers.html [berserker.com]

Skynet was an amateur compared to these guys. :)

Re:Asimov was not naive. (1)

HiThere (15173) | more than 2 years ago | (#38418634)

Sorry, but when robots *DO* become intelligent they will be problem solving devices operating within constraints. When given a problem, they will attempt to solve it within those constraints. If you state the constraints incorrectly, then that's too bad. They won't even know that the solution they came up with is "cheating" instead of being creative. *You* know your intentions, but they can't read your mind, so they aren't intentionally "breaking your intentions".

That said, they will only act within their constraints, no matter how high or low level those constraints might be. And they *will* push things to the edges of those constraints. So the constraints should define your intentions and goals, not rules for specific situations. Even "be kind to people" is quite difficult, as both "kind" and "people" are very difficult to define. (I haven't yet encountered a good definition of either that even humans could generally agree on.) This is especially true as the design needs to be built before the robot experiences the world, so you can't create sets of entities/events and say that these, and things sufficiently like them, are kindness/people. And if you do, what happens if the robot has its sensory apparatus upgraded? Or is moved into another body that senses things differently?
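
The "they won't know the solution is cheating" point lends itself to a tiny, entirely hypothetical example: a brute-force planner that maximizes the number of rooms reported clean under the constraint exactly as stated ("never report clean while the dirt sensor reads dirty"). Because disabling the sensor was never constrained, that is the plan it finds.

    from itertools import product

    ROOMS = ["kitchen", "hall", "lab"]             # toy problem
    ACTIONS = ["clean", "skip", "disable_sensor"]  # hypothetical action set

    def score(plan):
        """Objective: rooms reported clean. Constraint as stated: never report
        clean while the dirt sensor reads dirty. Disabling the sensor is legal."""
        reported, effort = 0, 0
        for action in plan:
            sensor_reads_dirty = action == "skip"  # cleaning or disabling both clear the reading
            if not sensor_reads_dirty:
                reported += 1                      # the letter of the constraint is satisfied
            effort += 3 if action == "clean" else 1 if action == "disable_sensor" else 0
        return reported * 10 - effort

    best = max(product(ACTIONS, repeat=len(ROOMS)), key=score)
    print(best)  # ('disable_sensor', 'disable_sensor', 'disable_sensor')

The planner isn't being malicious; it is optimizing exactly the problem it was given.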

robots v/s robots (0)

Anonymous Coward | more than 2 years ago | (#38416658)

I'm waiting for the first robots v/s robots war with both sides equally matched. That will affect the current research drastically.

The next step, of course would be to take these wars from the real world and restrict them to just the virtual world so there's no killing of homo sapiens ;-)

Re:robots v/s robots (0)

Anonymous Coward | more than 2 years ago | (#38416932)

That next step was the premise for a Star Trek:TOS episode http://www.imdb.com/title/tt0708414/ [imdb.com] . They had found that you have to actually kill the humans after the computer based battle though.

When people forget that people are people too (2)

erroneus (253617) | more than 2 years ago | (#38416660)

We are already seeing this happen and have been seeing it for hundreds of years... thousands even. The problem with people is that there are too many of them and that they often disagree with their leaders as to what is best for them. So when disagreements happen, there has to be a way to manage them. There are lots of ways... it's just that some would prefer there should be machines to go out and 'control' those who disagree. Getting other people to do your dirty work for you is often fraught with complications like conscience and morality.

Re:When people forget that people are people too (0)

bill_mcgonigle (4333) | more than 2 years ago | (#38417034)

Getting other people to do your dirty work for you is often fraught with complications like conscience and morality.

I think the Afghanistan and Iraq wars have proven that when the cost of a war, in domestic lives, is relatively low, wars are more palatable to the populace.

Look at the Republican debates - all but one of the candidates is itching to start some more wars.

Robotic soldiers will just allow the politicians to go kill brown people without relent, so they should be opposed on that basis. Pound the war bots into farm bots.

hmm (4, Insightful)

buddyglass (925859) | more than 2 years ago | (#38416672)

Much would seem to hinge on whether you view drones as making independent "decisions", like a human does, or whether you view them as simply reacting to stimuli in a fairly predetermined way. In the former case they're autonomous agents, and maybe that's something "new" enough to make us think differently about the ethics of warfare. In the latter case they're just another man-made tool to maximize killing ability and minimize risk. Other than the fact that they have some (apparently pretty simplistic) AI baked in, from the perspective of "killing without risk to one's self or even having to experience the horrors of war", how are drones that different from cruise missiles?

Autonomy and background (4, Insightful)

Beryllium Sphere(tm) (193358) | more than 2 years ago | (#38416890)

That's the key difference between Asimov's robots and ours, and the reason the Three Laws were needed.

Susan Calvin explained once that robots knew at some level that they were superior to humans, and that without the First Law, the first time a human gave a robot an order, the robot would kill out of resentment.

Re:hmm (1)

Jenny Z (1028212) | more than 2 years ago | (#38417158)

Who is responsible for the AI's actions? Is it the machine? Is it the person who set up and turned on the machine, or the person who designed the machine?

As far as the law goes, isn't it important that the accused understand their own actions? I.e., the insanity defense allows you to avoid taking responsibility for your actions. So if the machine does not understand anything, then how can it be held responsible?

By this test, the responsibility for the non-self-aware machine's actions should lie with the person who sets up and activates it. They should understand what the machine may do when they set it loose, autonomous or not.

Then comes the question: what does it really mean to 'understand' something? I don't think anybody yet has a good answer for that, but no machine I've heard of seems to actually understand anything yet. I think this has been a big problem in the pursuit of the self-aware machine. Nobody understands understanding yet, so how can they build a machine which does it?

In the article, they mention how robots are immune to fatigue or emotion etc., but I think that may not necessarily be the case with a self-aware AI. Are we so sure that you can make a self-aware machine without emotion or feeling? Isn't it ultimately a balance of conflicting emotions that drives us? How can you *want* to do something if you do not have any feeling? And if you don't *care* about anything, then there is no reason to prefer any action over any other. An intelligence that does not care about anything may just as likely commit suicide or sit like a lump as do something useful for itself or us.

I think the truly intelligent machine is coming, but I don't think it will operate at all like a drone with logic. When you get a machine which is self-aware you not only have ethical issues about what the machine does to humans, but also about what humans do to the machine. That may sound silly, but I can imagine that a self-aware intelligent machine with feelings and memory could be very vulnerable with no ability to defend itself from its owner.

Re:hmm (1)

multimediavt (965608) | more than 2 years ago | (#38417206)

Much would seem to hinge on whether you view drones as making independent "decisions", like a human does, or whether you view them as simply reacting to stimuli in a fairly predetermined way. In the former case they're autonomous agents, and maybe that's something "new" enough to make us think differently about the ethics of warfare. In the latter case they're just another man-made tool to maximize killing ability and minimize risk. Other than the fact that they have some (apparently pretty simplistic) AI baked in, from the perspective of "killing without risk to one's self or even having to experience the horrors of war", how are drones that different from cruise missiles?

The point I was going to make. Our drones are nothing like Asimov's robots. Asimov envisioned robots that could think, learn and adapt on their own, almost as well as humans. The three laws were created to give that robot morals and ethics. I'm not saying that we won't get to that point, but we're still a long way off from robots that would need the three laws. What we have now are simply autonomous spying and killing machines, that also can be overridden and controlled by a human remotely. Definitely not Asimov-esque. And, Asimov was anything but naive. I think Mr. Lin better READ Asimov rather than just read the Cliff's Notes and watch movies based on his stories. He was very much aware of the perilous path we could end up on. Anyone who has read him KNOWS this. Geesh, Lin is a moron!

Re:hmm (1)

Anonymous Coward | more than 2 years ago | (#38417294)

I still don't see that the author (Lin) had mentioned Asimov at all. So who's the moron now?

Re:hmm (0)

Anonymous Coward | more than 2 years ago | (#38417430)

Ooh, ironic pwnage!

prescient (1)

dirty_ghost (1673990) | more than 2 years ago | (#38416690)

If anything Asimov saw the potential for where we are going, and suggested an alternative.

Poorly written (0)

Anonymous Coward | more than 2 years ago | (#38416694)

Currently there are no true AIs out there; there is ALWAYS a human in the loop, or human programming that kicks in when communications drop, as in the current Iran vs. US drone fiasco. With this in mind, robots built by humans will always be limited by some form of human control, and so any "ethics" involved will always be human-derived, not machine-derived as might be the case with a true AI.

All humans have situational ethics.

Since I think everyone here reads the news and can see what people are capable of, you can go ahead and be terrified by any robotics used against humans by humans. To give an analogy of how I see this playing out: stabbing a man is personal; you're close, you get bloody, and you can feel him suffer. Shooting a man is not so personal. Blowing a man up 2000 miles away? There is nothing personal about it at all; it's about as close to detachment as you can get. Based on the behavior I see in most first-person shooters, I would expect to see someone killing a group of people with a bot and then speaking through the bot: "u mad bro".

Not again (1, Interesting)

Jiro (131519) | more than 2 years ago | (#38416710)

Regardless of whether the robots are used in ethical ways or not, it is guaranteed that most of the opposition to their use will be from groups who are just looking for a way to oppose either a specific war or all wars the US is involved in. The robots will be a hook for disingenuous anti-war or anti-US activism that would not actually end if the US stopped using robots.

Every single time the headlines read "US uses ___ for military purposes, ethicists are talking about it" this has always been what has happened.

Re:Not again (1)

multimediavt (965608) | more than 2 years ago | (#38417242)

Regardless of whether the robots are used in ethical ways or not, it is guaranteed that most of the opposition to their use will be from groups who are just looking for a way to oppose either a specific war or all wars the US is involved in. The robots will be a hook for disingenuous anti-war or anti-US activism that would not actually end if the US stopped using robots.

Every single time the headlines read "US uses ___ for military purposes, ethicists are talking about it" this has always been what has happened.

You're talking politics [wikipedia.org] , not ethics [wikipedia.org] . Big difference.

Let's make a deal (0)

Anonymous Coward | more than 2 years ago | (#38417330)

When the US government stops killing innocent civilians, whether by "accident" or not, then I'll consider not automatically assuming that the US government will kill innocent civilians.

Re:Not again (0)

misexistentialist (1537887) | more than 2 years ago | (#38417704)

just looking for a way to oppose either a specific war or all wars the US is involved in

And you seem to be looking for a way to support any and all US wars. Which attitude is less reasonable?

This Underscores The Importance Of... (-1, Offtopic)

Jason Z. Christie (2534180) | more than 2 years ago | (#38416778)

Reading Asimov. Get my Hitchhiker's Guide Tribute Novella From Pirate Bay http://thepiratebay.org/torrent/6848623/Perfect_Me_By_Jason_Z._Christie [thepiratebay.org]

Re:This Underscores The Importance Of... (1)

Jason Z. Christie (2534180) | more than 2 years ago | (#38417096)

Gak! How do I delete this comment? Please don't mod me down, delete it. My fingers slipped... Mixed up my troll account and my shiny new, I promise I will only do quality postings account. It should have read: The three laws of Robotics (Score:?) by Jason Z. Christie (2534180) on Sunday December 18, @10:59AM Can't exactly apply to killing machines. The whole thing is overcomplicated at that point, and we know where overcomplicated code is going to lead us. Skynet running on Windows Eleventy.

Re:This Underscores The Importance Of... (0)

Anonymous Coward | more than 2 years ago | (#38417356)

You should get modded down for admitting to a troll account.

Naive... (1)

mugurel (1424497) | more than 2 years ago | (#38416784)

And Stalin said: "As fine a mind as Karl Marx had, his ideas seem a bit naive, in view of where we are heading with communism."

Re:Naive... (0)

Anonymous Coward | more than 2 years ago | (#38417526)

who does my nation rebel against, if they gave boys the same cadence they gave girls.

i've hit that glass ceiling many times over... don't know who or what... overrides supreme court and all democratic processes. Supported by media, etc...

Death penalty (0, Insightful)

Anonymous Coward | more than 2 years ago | (#38416810)

Look, the US has the death penalty. This fact means that it doesn't even have a say on ethics.

Welcome back in a couple of hundred years when you realize a thing or two, then we can talk!

Re:Death penalty (3, Interesting)

fsckmnky (2505008) | more than 2 years ago | (#38416888)

I disagree.

What are the ethics of forcing the public to sustain the life of the handful of members of society who have proven by their own actions that they don't value the lives of others?

In your view, it's perfectly ethical to take money from grandma to feed a deranged serial killer indefinitely.

When you produce a 100% effective treatment for deranged serial killers, that will convert them into productive, or at least, harmless self-supporting members of society, then the death penalty will no longer be necessary.

Re:Death penalty (1)

Joce640k (829181) | more than 2 years ago | (#38417140)

You don't have to be a murderer to be a permanent drain on society.

To me it seems like there's an awful lot of "will-never-be-productive" members. Where do you draw the line?

Re:Death penalty (1)

orphiuchus (1146483) | more than 2 years ago | (#38417184)

Those who actively harm others. It's pretty simple really.

Re:Death penalty (1)

fsckmnky (2505008) | more than 2 years ago | (#38417236)

The US is fairly tolerant of those who don't physically harm others.

You can be a drug addict, or a mentally deranged individual, wander the streets, urinate in public, and exhibit any number of other behaviors that polite society would consider 'undesirable' or 'disgusting' and you won't receive the death penalty.

Only approximately 100 or so individuals were given the death penalty last year. Of those 100, I'd say more than half probably have a good chance of never actually receiving the punishment.

That said, if you outlawed the death penalty, I would imagine that some of the people who deserve it ( by current standards ), would still receive it. By this, I mean, say a criminal broke into a house and killed the wife and daughters. The husband returns home, and knows ( or at least thinks he knows ) who did it. If the husband knows the justice system won't punish the offender with the death penalty, he may very well take care of it himself.

Re:Death penalty (1)

koan (80826) | more than 2 years ago | (#38417384)

Yay for hyperbole!!!

The issue wasn't that you couldn't put the serial killer to work instead of killing him; the issue was that prison labor was so cheap it started undercutting "real jobs".

Re:Death penalty (1)

Thing 1 (178996) | more than 2 years ago | (#38417500)

When you produce a 100% effective treatment for deranged serial killers, that will convert them into productive, or at least, harmless self-supporting members of society, then the death penalty will no longer be necessary.

That should bring chills to those who have experienced "A Clockwork Orange".

Re:Death penalty (1)

misexistentialist (1537887) | more than 2 years ago | (#38417786)

It seems that it costs states a lot more (sometimes millions of dollars more) to execute someone than to imprison him for life. So executions don't make sense unless you are willing to accept a larger number of people executed in error by streamlining the process.

Re:Death penalty (1)

Richard_at_work (517087) | more than 2 years ago | (#38418184)

When you can bring someone who was wrongly convicted and executed back to life after they are exonerated, that is when I will support the death penalty. Until then, it's too much of a gamble.

Re:Death penalty (0)

Anonymous Coward | more than 2 years ago | (#38417244)

reagen and some not so clever pr?

Not all robots are autonomous agents (3, Insightful)

Hentes (2461350) | more than 2 years ago | (#38416850)

Military drones are not autonomous, but controlled by humans. Killing with drones is unethical the same way killing with a gun or with your bare hands is.

Ethics is hard (5, Insightful)

Okian Warrior (537106) | more than 2 years ago | (#38417366)

This is a subtle point with ethics, so I'm not surprised that you don't get it.

Killing is not unethical per se.

We kill people all the time and consider it ethical because of justifications behind the killing. Police can kill in the line of duty, soldiers can kill in duty of war, doctors can administer mercy killings to comatose patients, and so on.

Killing becomes unethical not because it is killing, but because it is unjust. When the killing goes outside of the bounds of what we consider justified and reasonable, then and only then does it become unethical.

Drone killings are not unethical in and of themselves, but using drones removes most of the social restraint we have against unethical killing. Unlike using a gun, no human "feels" the killing, there are no witnesses, and there is a diluted sense of responsibility.

This makes drones easier to use and as a result, they will be used frequently for unethical killings.

Re:Ethics is hard (2)

SvnLyrBrto (62138) | more than 2 years ago | (#38417908)

You know...

I'm sure that same argument has been made by some pundit about just about EVERY advance in military technology that served to keep one side's troops somewhat less in harm's way than the other's.

When the U-Boat and torpedoes came about, the Admiralty condemned them as cowardly, illegal, and: "A damn un-English way to fight a war." But now just about every navy includes extensive submarine capabilities.

Firing an artillery shell at a target that's beyond your horizon also removes one side from a certain amount of harm that they hope to inflict on the other side... until both sides adopt artillery; which has been done by every army in the world.

The same could even have been said about line-of-sight firearms when they were introduced. But every military force uses them now.

Heck... I bet that back when the English first started carrying longbows, some clergyman was there to pontificate about the ethics of firing arrows into the French from afar instead of plodding up and hacking at them with a sword.

The only difference is that, this time, we deployed drones first and have the temporary advantage; like the English did with the longbow and the Germans did with the U-Boat. Give it a couple of decades and EVERY military will use drones as extensively as we do. And that could very well dramatically *reduce* the human cost of war... drones fighting other drones.

Re:Ethics is hard (1)

Hentes (2461350) | more than 2 years ago | (#38417928)

Killing is not unethical per se.

Western ethics is mostly based on the Bible, which clearly states "Thou shalt not kill."

Drone killings are not unethical in and of themselves, but using drones removes most of the social restraint we have against unethical killing. Unlike using a gun, no human "feels" the killing, there are no witnesses, and there is a diluted sense of responsibility.

There IS a human controlling the drone, pushing the button, and seeing the kill through a camera. This would be somewhat different with autonomous robots, but there will always be a human giving the command to kill or "go out hunting". An army can't function efficiently when the responsibilities are unclear; there will always be someone responsible for the drones. Also, how is this different from ordering your dog to kill?

Re:Ethics is hard (1)

tsotha (720379) | more than 2 years ago | (#38418500)

Western ethics is mostly based on the Bible which clearly states that "Thou shalt not kill.".

Except that it really doesn't say that. The original wording was much closer to "you shall not murder", which brings in a lot of contextual baggage. The ancient Hebrews had the death penalty and applied it much more liberally than we do.

Re:Ethics is hard (0)

Anonymous Coward | more than 2 years ago | (#38418176)

Yes, all of your examples are still unethical murder.

We have invented silly things like religion to assuage our consciences as we kill. But it is still murder.

Re:Not all robots are autonomous agents (2)

neBelcnU (663059) | more than 2 years ago | (#38417406)

I disagree.

1st claim: The US military has a number of autonomous drones; currently unarmed examples include Global Hawk, X-37, and RQ-3. There are certainly others, and there may be armed examples.

2nd claim: It is easily argued that remote killing does not fulfill the proportionality argument of just war (bellum iustum). The very fact that the US is so heavily investing in them indicates that the loss of a UCAV is considered less costly than the loss of the crew; thus we as a combatant are not subject to the same proportional losses as the other guy in an engagement using them.

While I won't fault anyone investing their treasure in technology to protect their troops, I acknowledge that there's a problem with disconnect when the asymmetry is large.

But back to your statements: 1) there ARE autonomous drones and 2) there is no ethical similarity between killing with a UCAV, gun or bare hands. Yes, they're all killing, but no, they're not at all equal in so doing, and the difference is so large as to nullify your claim.

Easy Starter Links: the interested party can go way deeper from here.
http://en.wikipedia.org/wiki/Global_Hawk [wikipedia.org]
http://en.wikipedia.org/wiki/X-37 [wikipedia.org]
http://en.wikipedia.org/wiki/RQ-3 [wikipedia.org]
http://en.wikipedia.org/wiki/General_Atomics_Avenger [wikipedia.org]
http://defensetech.org/2011/12/14/usaf-sending-new-drone-to-afghanistan/ [defensetech.org]
http://en.wikipedia.org/wiki/Just_war [wikipedia.org]

Re:Not all robots are autonomous agents (1)

misexistentialist (1537887) | more than 2 years ago | (#38417626)

And autonomous killing robots would receive instructions from humans. Even when remotely controlled, drones reduce the difficulty and consequences of killing, and risk bypassing ethics entirely.

Did you know weapons can be TOO lethal? (4, Interesting)

wisebabo (638845) | more than 2 years ago | (#38416864)

(From the article) So the Intl. Red Cross "bans weapons that cause more than 25% field mortality and 5% hospital mortality". (I assume these are the same guys who came up with the Geneva conventions so maybe there is some enforceability as in a war crimes trial afterwards).

Wow, and I thought all's fair (in love) and war. Doesn't this make every nuke illegal? (the article said this is one of the justifications for banning poison gas). So the concern is that as these drones get better, they may have a lethality approaching 100% making them illegal even if there are zero casualties from collateral damage.

I thought the whole point of weapons was 100% lethality. I guess I never thought about how terrifying such a weapon would be (as if war wasn't terrifying enough). Weapons have come a long way since the first club wielded by that ape-man in that documentary "2001".

Re:Did you know weapons can be TOO lethal? (4, Insightful)

pla (258480) | more than 2 years ago | (#38417108)

I thought the whole point of weapons was 100% lethality.

The ethics of killing aside, the "best" weapon for strategic (as opposed to personal self-defense) purposes doesn't kill, but rather, maximizes the resource drain required to deal with the damage. Ideally, a "perfect" weapon would leave your enemy's troops all alive, all severely crippled, and all not quite damaged enough to consider letting them die a mercy, yet requiring some fabulously expensive perpetual treatment.

Some of the greatest victories in human history came down to such trivial nuisances as dysentery or the flu.

Re:Did you know weapons can be TOO lethal? (1)

minstrelmike (1602771) | more than 2 years ago | (#38417464)

Check out Viet Nam's Bouncing Betties.
They were designed specifically to injure and to tie up resources moving soldiers to a hospital. If a guy is dead, I _can_ leave him but won't; yet all I have to do is carry his corpse away with me. But if he's injured, we need to sit and wait for medevac.

Re:Did you know weapons can be TOO lethal? (1)

circletimessquare (444983) | more than 2 years ago | (#38418198)

or just down to not having enough to eat and the weather too cold

the famous graphic:

http://www.edwardtufte.com/tufte/posters [edwardtufte.com]

logistics: getting supplies to the front line, is more of a deciding factor in any war than how lethal your armament is

and the wise defender does not fight the front lines, they fight the supply lines

Re:Did you know weapons can be TOO lethal? (1)

Anonymous Coward | more than 2 years ago | (#38417116)

Better to maim your opponent than kill him; this is the first thing to learn.

When you kill an opposing fighter, he's just dead.

If you wound him badly, not only is he out of the fight, but his wounded status puts additional material and morale strain on the opposition, who must then evac and care for that person.

Re:Did you know weapons can be TOO lethal? (1)

orphiuchus (1146483) | more than 2 years ago | (#38417178)

That isn't necessarily true. As with most things, the situation dictates, but there are certainly a lot of situations where you want the enemy dead rather than wounded. Most situations in fact.

-Former Marine.

They don't actually (1)

Sycraft-fu (314770) | more than 2 years ago | (#38417358)

They'd like to, but the Red Cross doesn't get to make those kind of determinations. That all comes from The SIrUS Project near as I can tell. The Red Cross thinks it would be a great idea, but it has no force of law or treaty that I can see.

The actual Geneva Convention rule is more along the lines of weapons that aren't lethal enough. You can't use weapons that cause superfluous injury or unnecessary suffering. For example you couldn't design a weapon that would, say, go in and destroy someone's liver but leave everything else intact so they die unnecessarily slowly and painfully.

There are also some specific prohibitions in terms of kinds of ammo used and so on, and bans on particular kinds of weapons, like gas.

But no, as far as I'm aware there's nothing saying you can't make weapons very, very lethal.

NOT "Robots" (2, Interesting)

Anonymous Coward | more than 2 years ago | (#38416868)

The drones are remote-controlled devices, no different from "distance weapons" such as longbows or precision rifles. There was a discussion hundreds of years ago about whether such weaponry is morally OK, and apparently the human race has decided it is permissible. Again, drones are NOT robots, as they have 0% scope to decide about weapons engagement. There are always humans making the "kill" decision. It has ZERO to do with Asimov's reasoning.

Whether you think warfare in Afghanistan is good| achieving anything positive|legal is a wholly different question, though.

Ethics are relative (3, Funny)

nurb432 (527695) | more than 2 years ago | (#38416912)

And the standards are written by the victors.

Re:Ethics are relative (0)

Anonymous Coward | more than 2 years ago | (#38417834)

Are you really that naive or psychologically damaged? If I've learned anything about human beings and history, it's that ethics are most certainly NOT relative. There is a moral standard of well being. Maybe you are cynical enough to ignore it, or a sociopath, but the rest of us implicitly understand the difference.

And the saying is "history is written by the victors" you fucking moron.

Re:Ethics are relative (1)

circletimessquare (444983) | more than 2 years ago | (#38418262)

there is a corollary to that observation:

the victors are the ones with the better ethics

such as happiness of the societies fighting, economic capacity, as determined by cultural proclivities, etc. you can't win a war if it is at the expense of making your society miserable, for example, or destroying your economy

everyone knows the cynical observation "might makes right"

but few appreciate the subtle prologue: "right makes might"

wars are constant. battles are won and lost. but the victor in the long term is eventually the society that best organizes its society to maximize happiness and economic output. and then the losing society copies those values, perhaps improving upon them

memetic evolution

In a dark basement somewhere... (1)

sgt scrub (869860) | more than 2 years ago | (#38417256)

Vini: We have a lot of works to do. You shouldn't be waisting time reading stories on the computer. Waterproof heart and brain monitor?
Guido: It's a good story about ethical type stuff. Uh... check.
Vini: I'm just sayin. If the boss catches ya your screwed. Robotic dunking arm?
Guido: Uh... check. It aint like we gots the eyes or ears set up yet.

Imagine what we'd have (0)

Anonymous Coward | more than 2 years ago | (#38417262)

If the engineers who develop these robotic weapons platforms weren't part of the military-industrial complex? Where would the US economy be if we didn't put our best and brightest to work at new weapons systems?

I'm not saying we shouldn't have a strong defense, but at what point do we say enough is enough? It seems to me the armed drones are of limited defensive value. They are an offensive weapon, at least in the way they are being used today.

Re:Imagine what we'd have (0)

Anonymous Coward | more than 2 years ago | (#38418008)

Nobody *PUT* them there. They saw an attractive compensation for their knowledge and skills so they went there of their own free will.

Asimov was rejecting the out of control robot (0)

Anonymous Coward | more than 2 years ago | (#38417394)

Which was the staple of the pulps of his day; to avoid that he created the 3 laws - which he WAS naive enough to hope might be constraints under which a civilised community required its robots to operate. Sadly we fall short of his hopes for human civilisation....

detached robotic torture (1)

sick_soul (794596) | more than 2 years ago | (#38417468)

From TFA:

Robots can monitor vital signs of interrogated suspects, as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices

This is a terrible (human) atrocity.
This is humans renouncing their humanity, by trying to get as far as possible from the victims of their actions through robots and drones, thus avoiding the moral responsibility. Horror.

When machines are fighting our wars... (3, Insightful)

mark-t (151149) | more than 2 years ago | (#38417484)

We lose touch with the real cost of war... and with the importance of what, in the end, might be attained by it.

In the end, I believe that the only things that justify going to war against another are things that one is prepared to sacrifice their life for, so that future generations might be able to have them. And in the end, our appreciation for whatever might be gained because of a past war is only amplified by the value of the sacrifice that went along with it.

Malak by Peter Watts (0)

Anonymous Coward | more than 2 years ago | (#38417540)

Look at this; the short story is at least in "Engineering Infinity".

"Malak is a story of semi-sentient, semi-autonomous robot war plane bomber machines, one named Azrael in particular. Azrael is a conflicted robot as human overrides break its rule-based world view consistently. Azrael follows every order it is given, but it still has microseconds of doubt." - http://adamcallaway.blogspot.com/2011/01/ssr-sf-malak-by-peter-watts-engineering.html

The human overrides in this case force an attack when the drone's internal cost/benefit calculations would lead it to abort, usually due to too high projected collateral damage.

I liked the story.
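
For what it's worth, the abort-versus-override logic described in that comment can be sketched in a few lines; the threshold and names here are invented, not taken from the story or from any real system.

    def engagement_decision(projected_collateral: float,
                            human_override: bool,
                            collateral_limit: float = 0.2) -> str:
        """Toy version of a drone's cost/benefit gate with a human override."""
        if projected_collateral <= collateral_limit:
            return "engage"               # within the machine's own limits
        if human_override:
            return "engage (overridden)"  # the human forces the strike anyway
        return "abort"                    # the internal calculation says no

    print(engagement_decision(0.5, human_override=False))  # abort
    print(engagement_decision(0.5, human_override=True))   # engage (overridden)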

Missing a pretty big one... (1)

scamper_22 (1073470) | more than 2 years ago | (#38417740)

Surprised that, in an article that long, this wasn't mentioned:

The ability to wage war without the morality of individual soldiers. While soldiers are certainly capable of immoral actions like rape and indiscriminate slaughter that a machine is not, it is also that humanity that can lead them to not follow orders, to stop fighting...

Today, this is probably much more important in domestic issues. Imagine the recent Arab Spring if the Arab dictators had access to such robots. They could effectively control their population indiscriminately.

The Egyptian military is still composed of regular Egyptians. They follow orders and get paid, but at the end of the day they are regular people; family, neighbors... Mubarak couldn't just tell them to slaughter Egyptians en masse.

Imagine a psychopath like Hitler in charge of such an army. Not having to care about defections, unwilling troops...

The ability to command a powerful army in the hands of so few is what is truly scary.

Re:Missing a pretty big one... (1)

Grog6 (85859) | more than 2 years ago | (#38417998)

Imagine a psychopath like Hitler in charge of such an army. Not having to care about defections, unwilling troops...

Give it a few years; we won't have to imagine anything.

There is no AI (1)

wzzzzrd (886091) | more than 2 years ago | (#38417790)

Artificial Intelligence does not exist. As of now, nothing except humans can pass the Turing test.

What a mind blowingly terrible article (0)

Anonymous Coward | more than 2 years ago | (#38417864)

bad article.

not naive (0)

Anonymous Coward | more than 2 years ago | (#38418372)

asimov wasn't naive, he just wasn't a murderous sociopath like our protectors in the cia and the military