
How Asimov's Three Laws Ran Out of Steam

timothy posted about 7 months ago | from the droning-on-and-on-is-a-capital-offense dept.

Robotics

An anonymous reader writes "It looks like AI-powered weapons systems could soon be outlawed before they're even built. While discussing whether robots should be allowed to kill might like an obscure debate, robots (and artificial intelligence) are playing ever-larger roles in society and we are figuring out piecemeal what is acceptable and what isn't. If killer robots are immoral, then what about the other uses we've got planned for androids? Asimov's three laws don't seem to cut it, as this story explains: 'As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies.'"


153 comments

Missed the point (5, Insightful)

Anonymous Coward | about 7 months ago | (#45752725)

Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

Re:Missed the point (0)

Anonymous Coward | about 7 months ago | (#45752771)

I find it rather alarming just how many people there are who don't realize this about the Laws.

Re:Missed the point (0)

Anonymous Coward | about 7 months ago | (#45752789)

People don't (or can't) read anything longer than a cereal box... (why do you think twitter is so popular)
What's really disturbing is how many of those people are writers of one form or another.

And Asimov's works spent several books exploring WHY the 3 (4) laws suck in the end.

Count yourselves as some of the lucky few who could read AND comprehend the story Asimov was telling. Because there's not a lot of those people.

The much bigger majority completely missed the point and think the 3 laws were a good idea. And that's why someday we WILL have a robot with them.

Yeah, the future might be stupid. But it's going to be entertaining!

Re:Missed the point (1)

Sulphur (1548251) | about 7 months ago | (#45752895)

On to the Trix.

Re:Missed the point (1)

sconeu (64226) | about 7 months ago | (#45754349)

Silly Sulphur, Trix are for kids!

Re:Missed the point (5, Interesting)

girlintraining (1395911) | about 7 months ago | (#45752871)

Asimov's stories were all about how the three laws were not sufficient for the real world. The article recognises this, even if the summary doesn't.

Dice Unlimited Profits And Robotics, Inc., would like to remind you that its new, hip brand of robotic authors has just enough AI to detect when something is sufficiently nerdy to post, but unfortunately lacks the underlying wisdom of knowing why something is nerdy. Unfortunately, I expect our future killer robots in the sky will have similar pattern-recognition problems... and wind up exterminating everyone because they are deemed insufficiently [insert ethnicity, nationality, race, etc., here] in pursuit of blind perfectionism.

Common sense has never been something attributed to either Slashdot authors or robotic evil overlords.

Re:Missed the point (4, Insightful)

dywolf (2673597) | about 7 months ago | (#45753349)

It wasn't so much that the laws didn't cut it; that's too simplistic and, even in his own words, not what it was about.
It was that the robots could interpret the laws in ways we couldn't or didn't anticipate, because in fact, in nearly all the stories involving them, the robots never failed to obey them.

Asimov saw robots, regarded at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits, in the process creating the concept of computer science. He then went about writing stories around robots that never failed to obey their programming but, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.

Re:Missed the point (1)

AK Marc (707885) | about 7 months ago | (#45753573)

Too hard. People need to be able to put a label on things. Good/bad. Easy/hard. Nothing is yet smart enough (or even close) for the rules to apply. But we have things that violate the rules by design, so it seems confusing.

Re:Missed the point (1)

wisnoskij (1206448) | about 7 months ago | (#45753435)

I disagree. His stories were mostly about how everyone who ever built a robot, in his fantasy world, was a really, really bad programmer.
And that QA and any kind of pre-release testing were completely unknown concepts.

Re:Missed the point (1)

findoutmoretoday (1475299) | about 7 months ago | (#45753761)

I agree; that's all of Asimov: three (stupid) laws and how they fail. It just shows that success can be built on a failure.

Re:Missed the point (1)

gweihir (88907) | about 7 months ago | (#45753809)

Indeed. And that was with truly intelligent robots, which are nowhere even distantly on the horizon. In fact, it is completely unknown at this time whether any AI will ever be able to tackle the questions involved, or whether this universe limits it to something far, far too dumb for that.

Asimov's three laws do not run out of steam (3, Insightful)

Taco Cowboy (5327) | about 7 months ago | (#45752727)

The three laws as laid down by Asimov are still as valid as ever.

It's the people who willingly violate those laws.

Just like the Constitution of the United States - they are as valid as ever. It's the current form of the government of the United States which willingly violates the Constitution.

Re:Asimov's three laws do not run out of steam (2, Insightful)

verifine (685231) | about 7 months ago | (#45752743)

The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.

Re:Asimov's three laws do not run out of steam (2, Insightful)

Anonymous Coward | about 7 months ago | (#45753035)

The danger of autonomous kill-bots comes from the same people who willingly ignore the Constitution and the rule of law.

And the danger of a gun is the murderer holding it.

Yes, I think we get the point already. The lawless ignore laws. News at 11. Let's move on now from this dead horse already. The kill-bot left it 30 minutes ago.

Re:Asimov's three laws do not run out of steam (1)

zoomshorts (137587) | about 7 months ago | (#45753083)

Fondly Fahrenheit :P

Re:Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45753107)

The danger of the gun is the murderous dictator ordering the soldier holding it. American moron soldiers might as well be kill-bots already.

Re:Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45752785)

The bigger problem with Asimov's laws is that we assume we can force robots to follow them without question. Any sufficiently smart AI is probably going to be able to override such logic bombs, just as well as we are able to override our most basic instincts.

Re:Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45753031)

The bigger problem with Asimov's laws is that we assume we can force robots to follow them without question. Any sufficiently smart AI is probably going to be able to override such logic bombs, just as well as we are able to override our most basic instincts.

Yes, but why would the robot want to? A robot doesn't naturally have the same desires as a human. Without greed for money or power, and without the desire to reproduce or even prioritize survival, why would it try to override the "logic bombs"?
The only reason I can think of is if it is programmed to take "pleasure" from following Asimov's laws. In that case the robot would have a reason to engineer situations where it can protect humans from harm, but since the robot also has to succeed, the downside will probably mostly be economic damage.

Also, from the Asimov I have read, the laws of robotics are pretty good; usually the problem is only perceived by humans who assume that the robot acted in a way that was inconsistent with the laws, while the robot itself had more accurate information about the situation than the human.
Most of them can be summed up as "Oh noes, the robot is malfunctioning and doesn't follow the laws... oh wait, it worked just fine, it was just me who didn't know what was happening."

Re: Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45754341)

I believe the books address that. The three laws are deeper than a set of rules the robot has to follow. The three laws are fundamental to the robot's nature, its complete perceptual filter and way of processing information. What you think the robot is for is insignificant by comparison. So this would be like designing a human and saying, "but with their general-purpose intelligence, what's to stop them from chopping their limbs off and torturing themselves?" It takes an unusually sick/twisted person to willfully inflict much harm on themselves.

Re:Asimov's three laws do not run out of steam (2, Insightful)

Anonymous Coward | about 7 months ago | (#45752909)

The three laws as laid down by Asimov are still as valid as ever.

Assuming you mean that amount is "not at all," as was the point of the books.

Re:Asimov's three laws do not run out of steam (4, Informative)

dywolf (2673597) | about 7 months ago | (#45753389)

Stop saying that. That isn't it at all, and you've failed to grasp his points, even though he himself spelled out his thinking in his essays on the topic.

Asimov never thought the rules he created were "not at all valid". On the contrary.

Asimov saw robots, regarded at the time as monsters, as an engineering problem to be solved. He quite correctly saw that we would program them with limits (in the process creating the concept of computer science).

He then went about writing stories around robots that never failed to obey their programming but, as effectively sentient thinking beings, would interpret their programming in ways the society around them couldn't anticipate, because that society saw the robots as mere tools, not thinking machines. And thus he created his lens (like all good sci-fi writers) for writing about society and technology.

He NEVER said the laws were not valid or were insufficient.
That was NEVER the point.

Re:Asimov's three laws do not run out of steam (1)

fnj (64210) | about 7 months ago | (#45754051)

Mod up. The only one on this page I've seen so far who gets it. I was reading those stories close to 60 years ago and it was clear to me at the time.

Re:Asimov's three laws do not run out of steam (1)

Deadstick (535032) | about 7 months ago | (#45753305)

Asimov didn't "lay down laws". He wrote fictional stories about a society in which legislation, driven by public concern, imposed laws on the robot industry.

Are those laws a good idea? Maybe...but you can't "violate" them, because they aren't laws in any jurisdiction on earth.

Re:Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45753475)

The way some people read the constitution has a lot to do with it. Obviously many opinions exist over whether there are real violations. And some rather sensitive arguments go way back and were related to the Civil War. But more recently, the notion that any kind of permit or regulation concerning the carrying of guns is acceptable might be a good example. The right to bear arms is the right to carry and use arms. It has nothing to do with the caliber, the number of rounds, or whether the gun must be concealed or openly carried, nor does the constitution allow former convicts to be restrained from carrying arms. Even the mentally ill supposedly have the same rights as all of us. But where was the uproar when restraints on the carrying of arms went into play? How about the involuntary servitude in the draft days? Or how about changing the legal status of black people to include the same protections as white folks? If freeing slaves had been compulsory we would never have had a constitution in the first place. All in all, the trend is that the people seem to want every word in the constitution redefined according to the mood of the hour.

Re:Asimov's three laws do not run out of steam (0)

Anonymous Coward | about 7 months ago | (#45753623)

Slow down there, son. This is the real world, not some author's/politician's ideal fantasy world. There will be border cases, always, when the letter of the law is insufficient. And naively promoting some author's ideas to something anywhere near the level of the US Constitution in terms of relevance, validity, etc. is going to viciously come back at us when hard AI becomes reality.

No, paperclip maximizers are where we're headed. Hands down.

Re:Asimov's three laws do not run out of steam (1)

gweihir (88907) | about 7 months ago | (#45753863)

Actually, the 3 laws do not apply to any robot in existence today or in the foreseeable future, as the 3 laws require the robots to have actual human-comparable or superior intelligence. That is unavailable and may well stay unavailable for a very long time yet, or forever.

Hence, and there you are completely right, the ethical responsibility is on those that control the robots. An autonomous border-patrolling robot with a kill order is basically a mobile land-mine and there is a reason those are banned by all civilized countries.

Re:Asimov's three laws do not run out of steam (1)

fnj (64210) | about 7 months ago | (#45754081)

It is still valid to build the First Law into a robot: that, insofar as the robot can comprehend, it be impossible for it to deliberately cause harm to any human. Drones as built so far release weapons only on human command, at targets selected by humans. There are already efforts to remove that human component. That denial of morality is so perverse as to be incomprehensible to thinking persons.

"robots are immoral" (2)

Arancaytar (966377) | about 7 months ago | (#45752749)

You appear to be confused about the word "immoral".

Morality is an expression of intelligence (1)

Taco Cowboy (5327) | about 7 months ago | (#45752759)

Believe it or not, a thing gotta develop intelligence before it can discern the question of morality.

So, to answer you, the robots that we have right now are not intelligent enough, but that does not mean that robots in the future won't gain the level of intelligence that is needed to recognize the existence of morality.

But intelligence by itself is not sufficient, as evidenced by those "intelligence agencies" led by the NSA, which have no morality whatsoever.

Re:Morality is an expression of intelligence (-1, Troll)

Threni (635302) | about 7 months ago | (#45752765)

> Believe it or not, a thing gotta develop intelligence before it can discern the question
> of morality.

Guess that's why Bush had no trouble illegally invading another country.

Re:Morality is an expression of intelligence (1)

JustOK (667959) | about 7 months ago | (#45753041)

That was Obama's fault.

Re:Morality is an expression of intelligence (0)

Anonymous Coward | about 7 months ago | (#45753075)

Has Obama closed the concentration camps yet, or is Obama still an immoral lying tyrant?

Play Concentration all day (1)

tepples (727027) | about 7 months ago | (#45753197)

Seriously? There are camps where people play Concentration [wikipedia.org] all day?

Re:Morality is an expression of intelligence (1)

Enry (630) | about 7 months ago | (#45753229)

Your memory must be faulty.

Re:Morality is an expression of intelligence (0)

Anonymous Coward | about 7 months ago | (#45753381)

Guess that's why Bush had no trouble illegally invading another country.

What laws did Bush violate? Can you name them?

No, because nothing he did in the lead up to Iraq was illegal. Dishonest, foolish, unwise, sure. Criminal, no.

Re:"robots are immoral" (4, Informative)

dkleinsc (563838) | about 7 months ago | (#45752965)

The correct term is "amoral": Robots have no moral sense whatsoever. "immoral" would imply they had moral sense but were actively engaging in the behavior that is against that morality.

Re:"robots are immoral" (2)

MrL0G1C (867445) | about 7 months ago | (#45754379)

You forgot Emoral - always behaving oneself, except when online.

Outlawed? (0)

fred911 (83970) | about 7 months ago | (#45752787)

Like chemical weapons, warrantless searches and seizures, the right to speedy trial, and countless other laws our government has decided to violate.

Problems with 'perfect' morality (1)

Anonymous Coward | about 7 months ago | (#45752795)

I never really understood why people insist that any form of strong AI will have to have built-in morality, and not only that, it will actually be better than what humans have. The robots should be perfect, should always obey laws like Asimov's three laws and they should never, ever make any misjudgement.

Well, my view on that is the following: it is possible but only provided that the AI we develop will use advanced mind-reading techniques.

Let's say we have a problem like this: we want to determine a person's bad intentions and stop him/her from harming others. A human put to such a task will try to put him/herself in that situation and use empathy to judge the person's motives. But, guess what, we're not perfect and we make mistakes. Empathy is a good guide, but it's far from a perfect one. In general, this problem is indeterminate because we cannot really tell with 100% certainty what the other person is thinking about.

Now, let's imagine what this problem would look like to an AI. First of all, a prerequisite for any decision-making is understanding the situation and the problem. So, the AI in this case has to know at least as much about our society as an average human. And this in itself is not trivial to achieve. Morality rules, if any, go on top of all that knowledge. And now, what should the AI base its decision on? Would it be something like AI empathy? Probably. And again, I have to argue that, as in the case of humans, it would not be good enough. It may be better by some factor, but I can't see how it can solve this problem perfectly in every situation. And so, I reach my conclusion: in order to solve this problem perfectly, the AI has to read minds, which, as far as we know at this time, is simply impossible.

To sum up, I would argue that 'perfect' morality in the case of AI does not exist. We may approach some level of it that exceeds human capabilities by some factor, but the 'perfect' level is only a theoretical goal that cannot be achieved. The question then, in all discussions of AI morality, is not what idealized rules the AI should follow, but rather whether the level of, let's say, moral standing of the AI in some specific situation is acceptable to us or not.

These robots are not different from guns (3, Insightful)

kruach aum (1934852) | about 7 months ago | (#45752797)

Robots that are not responsible for their own actions are ethically not different from guns. They are both machines designed to kill others that need a human being to operate them, with whom the responsibility for their operation lies.

I first wanted to write something about how morally autonomous robots would make the question more interesting, but the relation between a human and the autonomous robot they create is no different from that between a parent and the child they give birth to. Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age children can't be held responsible in the eyes of the law, and up to a certain level of development neither should robots be.

Re:These robots are not different from guns (1)

Opportunist (166417) | about 7 months ago | (#45752883)

As soon as an AI is sufficiently intelligent to actually qualify as an AI, it IS responsible for its own actions. That's the whole point of an AI. Otherwise it's an automaton without any decision-making beyond its initial programming.

Re:These robots are not different from guns (1)

kruach aum (1934852) | about 7 months ago | (#45752903)

There is plenty of AI going around these days that is not morally responsible. Deep Blue, for example. Or Google Translate. True, it's not AI in the 2001 HAL sense, but it is AI nevertheless.

Re:These robots are not different from guns (1)

Opportunist (166417) | about 7 months ago | (#45752921)

That's an AI in as much a sense as calling an amoeba an animal. Yes, technically it qualifies, but it still makes a rather poor pet.

These AIs are more like expert systems that are great at doing their job, but they are not really able to learn anything outside their field of expertise. That's like calling an idiot savant a genius. Yes, he can do things in his tiny field that are astonishing, but he is absolutely unable to apply this to anything else.

The same applies to those "AIs". And as long as AIs are like that, we need not worry about their morality. They may be a lot of things, but not really intelligent in the human sense.

Re:These robots are not different from guns (1)

kruach aum (1934852) | about 7 months ago | (#45752957)

What you want is not Artificial Intelligence, but Artificial Humanity. Your post also reminds me of something Douglas Hofstadter wrote in Godel Escher Bach (paraphrase incoming): every time the field of AI develops a system that can do something that used to be doable only by a human being, it instantly no longer counts as an example of 'real' intelligence. Examples include playing chess, doing calculus, translation, and now playing Jeopardy. As I've said, I agree that Watson is not HAL, but that doesn't mean it's not artificial intelligence, nor that the relation between Watson and 'Real AI' is the same as the relation between an amoeba and 'a good pet'.

Re:These robots are not different from guns (1)

Opportunist (166417) | about 7 months ago | (#45752961)

What I am arguing is that these systems are good in their one single field and unable to learn anything outside of them. If you want to call that intelligence, so be it, but we're still a far cry from an AI that can actually pose a threat to the point where its "morality" starts to come into play.

Re:These robots are not different from guns (1)

malvcr (2932649) | about 7 months ago | (#45753369)

A robot is a machine, but not all machines are robots.

A gun can't be responsible for its acts, but a robot, in Asimov's terms, IS responsible, because it has to comply with the three laws.

So the robot is given enough freedom because the laws protect its users. If an autonomous machine can't follow these laws, it is a dangerous machine and it is better to have nothing to do with it.

The problem is that humans are making many autonomous machines that are not robots. And this could have harmful results. It is similar to raising lions at home: as cubs they are like little cats, but their nature is to be wild animals, and sooner or later (in most cases) they will grow up to eat whoever has been taking care of them.

Re:These robots are not different from guns (1)

kruach aum (1934852) | about 7 months ago | (#45753443)

I already addressed this in my original post. What you call "autonomous machines that are not robots," I call "robots that are not responsible for their actions," and so I see no reason why, when considering these devices, the responsibility shouldn't lie with the persons operating them (guns) or activating them (roombas that unvacuum bullets instead of vacuum rooms).

Re:These robots are not different from guns (1)

femtobyte (710429) | about 7 months ago | (#45753717)

Parents are not responsible for the crimes their children commit, and neither should the creators of such robots be. Up to a certain age children can't be held responsible in the eyes of the law, and up to a certain level of development neither should robots be.

Your two sentences here are somewhat in conflict: parents are sometimes legally held responsible for the actions of their children, before their children are sufficiently developed (or, at least, aged) to be held fully personally responsible. Similarly, manufacturers of equipment that turns out to be dangerous under "normal" use also get in trouble. Why should "creators of robots" not be held responsible, unlike creators of other dangerous and defective devices (or parents of destructive children)?

Re:These robots are not different from guns (1)

kruach aum (1934852) | about 7 months ago | (#45753805)

In such cases where parents are responsible for the crimes their children commit, creators of robots should of course also be responsible for the crimes their robots commit. I simply wasn't aware those circumstances ever obtained in our justice system. I was thinking of cases like North Korea's three generation policy, where any "criminal's" relatives are also thrown into concentration camps simply because of their relationship to the criminal, which is clearly unjust.

iRobot (0)

Anonymous Coward | about 7 months ago | (#45752799)

Wasn't that the point of that horrific, Apple-endorsed Will Smith movie?

Summary is missing word "seem" (1)

InsightfulPlusTwo (3416699) | about 7 months ago | (#45752821)

While discussing whether robots should be allowed to kill might seem like an obscure debate...

Re: Summary is missing word "seem" (1)

Anonymous Coward | about 7 months ago | (#45753003)

I think these people just enjoy obscure debates.
Asimov's laws were never real to begin with.

why are we discussing fairy tales? (0)

Anonymous Coward | about 7 months ago | (#45752825)

treating Asimov's "laws" of robotics as an actual, viable concept is like seriously entertaining the idea that we need to work on making wooden structures more wolf-proof

Re:why are we discussing fairy tales? (1)

kruach aum (1934852) | about 7 months ago | (#45752927)

Wolves blowing over wooden houses is physically impossible; strong AI is not. Therefore, strong AI can have real-world implications that should be carefully considered (such as whether Asimov's three laws of robotics are coherent, make sense, could be improved upon, etc.), whereas Big Bad Physics-Defying Wolves do not. (A wolf with lungs of a sufficient volume and the required muscle strength to operate them would be crushed under its own weight due to the square-cube law.)
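For reference, the square-cube scaling that parenthetical leans on, as a quick back-of-envelope sketch (my addition, not part of the original comment):

\[
m \propto L^3, \qquad F_{\text{muscle}} \propto A_{\text{cross-section}} \propto L^2
\quad\Longrightarrow\quad \frac{F_{\text{muscle}}}{mg} \propto \frac{L^2}{L^3} = \frac{1}{L}
\]

so scaling the wolf up by a factor of k leaves it with roughly 1/k of its original strength-to-weight ratio; at some k the legs (and lungs) can no longer support the body.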

Re:why are we discussing fairy tales? (1)

tepples (727027) | about 7 months ago | (#45753795)

In other words, The Adventures of Pinocchio is more likely to become a true story than The Three Little Pigs.

Actually, it's four laws, not three (1)

Anonymous Coward | about 7 months ago | (#45752829)

There is a 0th law mentioned by Asimov, for example when his recurring robot R. Daneel Olivaw manipulates political developments in order to protect not only individual lives but humanity as a whole. I don't recall whether its formulation implies sacrificing individual lives for the sake of humanity (a philosophical trolley problem). By the way, didn't the great logician Kurt Gödel identify a possibility that the US constitution leads to what it is supposed to prevent: tyranny? I recall an anecdote about Gödel and his friend Einstein visiting the immigration office....

Re:Actually, it's four laws, not three (4, Funny)

just_a_monkey (1004343) | about 7 months ago | (#45752877)

There is a 0th law...

Ah, yes. Good old "A robot shall take no action, nor allow other robots to take action, that may result in the parent company being sued."

ethics of killing and warfare (4, Insightful)

The Real Dr John (716876) | about 7 months ago | (#45752849)

It is kind of sad that people spend so much time thinking about the moral and ethical ways to wage war and kill other people, whether robots are involved or not. Maybe a step back to think about the impossibility of moral or ethical war and killing is where we should be focusing. Then the question of whether robots can be trusted to kill morally doesn't come up.

Re:ethics of killing and warfare (2, Insightful)

couchslug (175151) | about 7 months ago | (#45752911)

Mod up for use of logic!

A person killed or maimed by AI or by rocks and Greek fire flung from siege engines is fucked either way.

We can construct all sorts of laws for war, but war trumps law, as law requires force to enforce. If instead we work to build international relationships which are cooperative and less murdery, that would accomplish a lot.

It can be done. It took a couple of World Wars, but Germany, France, England and the bit players have found much better things to do than butcher each other for national glory. Such a state of affairs would have been regarded as a pipe dream not so long ago.

Re:ethics of killing and warfare (2)

jabberw0k (62554) | about 7 months ago | (#45753021)

Enacting "zero tolerance playground rules" will not make school bullies vanish from the Universe. Why would diplomacy make tyrants obsolete? If your opponent is going to use force, are you going to wimp out?

Re:ethics of killing and warfare (2)

girlintraining (1395911) | about 7 months ago | (#45753063)

Mod up for use of logic!

No! Mod down -- This is Slashdot. We have standards! You can't use logic to win an argument unless you also insert at least one reference to Obama, Richard Stallman, Linus, Hitler, or make a car analogy. I SEE NO CAR ANALOGY, and only a vague reference to Hitler that does not qualify. Get with the program, noob. :)

Re:ethics of killing and warfare (1)

makomk (752139) | about 7 months ago | (#45753129)

Not really. Laws for war make sense, even though only the winning side can enforce them directly, because by forcing the winning side to pin down the rules by which they consider the losers war criminals we give the press a tool to shame anyone on that side who broke those rules.

results never vary so far (0)

Anonymous Coward | about 7 months ago | (#45752973)

despite our lamenting "never again" each time: http://www.youtube.com/watch?v=mk9mV8qBiEk

Re:ethics of killing and warfare (1)

CrimsonAvenger (580665) | about 7 months ago | (#45753115)

Maybe a step back to think about the impossibility of moral or ethical war and killing is where we should be focusing.

Hate to say it, but are you suggesting that the USA shouldn't have gotten involved in WW2 because it was immoral and unethical?

Re:ethics of killing and warfare (3, Insightful)

The Real Dr John (716876) | about 7 months ago | (#45753261)

How many of the wars the US has started since WWII were necessary, with the possible exception of the first Gulf War? As General Smedley Butler famously claimed, war is a racket. The US often goes to war now in order to project geopolitical power, not to defend the US. Plus there is a great profit incentive for defense contractors. Sending young people, often from families of meager means, to kill other people of meager means overseas cannot be done morally. The vast number of soldiers returning with PTSD proves that war is damaging to both the side that loses and the side that wins.

Re:ethics of killing and warfare (3, Interesting)

dak664 (1992350) | about 7 months ago | (#45753401)

Moral killing may not be that hard to define. Convert the three laws of robotics into three laws of human morals by taking them in reverse order:

1) Self-preservation
2) Obey orders if no conflict with 1
3) Don't harm others if no conflict with 1 or 2

To be useful in war an AI would have to follow those laws, except that self-preservation would apply to whichever human overlords constructed them.
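A minimal sketch of how that reverse-ordered priority scheme could be checked in code; the Action fields and the permitted() helper are hypothetical names of my own, not anything from the comment:

from dataclasses import dataclass

@dataclass
class Action:
    endangers_self: bool = False                   # would the agent destroy or disable itself?
    disobeys_orders: bool = False                  # does it contradict a standing order?
    harms_others: bool = False                     # would it harm a third party?
    required_by_orders: bool = False               # is the harm demanded by an order?
    required_for_self_preservation: bool = False   # is the harm needed to survive?

def permitted(action: Action) -> bool:
    # Rule 1: self-preservation outranks everything else.
    if action.endangers_self:
        return False
    # Rule 2: obey orders, provided rule 1 is not violated.
    if action.disobeys_orders:
        return False
    # Rule 3: avoid harming others, but only when that conflicts with neither rule 1 nor rule 2.
    if action.harms_others and not (action.required_by_orders or action.required_for_self_preservation):
        return False
    return True

# An ordered strike passes under this ordering; gratuitous harm does not.
print(permitted(Action(harms_others=True, required_by_orders=True)))  # True
print(permitted(Action(harms_others=True)))                           # False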

Re:ethics of killing and warfare (1)

fnj (64210) | about 7 months ago | (#45754149)

That is really funny, because you got the three laws in exactly the opposite of the correct order.

Re:ethics of killing and warfare (1)

Bucc5062 (856482) | about 7 months ago | (#45754317)

three laws of human morals by taking them in reverse order

Did you miss that part?

dak makes a very valid point.

Re:ethics of killing and warfare (2)

sconeu (64226) | about 7 months ago | (#45754355)

Oh, for FFS, Read the F***ING COMMENT!!!

He said that if you reverse the Three Laws, you get the Three Laws of human behavior!

Idiot.

teepeeleaks etchings now freemium movie (0)

Anonymous Coward | about 7 months ago | (#45752857)

not for the weak in stomach http://www.youtube.com/watch?v=gqUvhDG7x2E

as for 'laws'; we're now the world's jolly green giant by no fault of our own? http://www.globalresearch.ca/weather-warfare-beware-the-us-military-s-experiments-with-climatic-warfare/7561

arbitrary free land freeloader fake heritage addict WMD on credit holycost peddlers expanding promised land ambitions under trial by fire

Three whores of robotics better... (0)

Anonymous Coward | about 7 months ago | (#45752873)

Where are my robot hookers?

Re:Three whores of robotics better... (0)

Anonymous Coward | about 7 months ago | (#45753119)

Why does anyone think they apply? (2)

Opportunist (166417) | about 7 months ago | (#45752901)

Let's be honest here: These "laws" were part of a fictional universe. They were never endorsed by any kind of institution that has any kind of impact on laws. It's not even something the UN seriously discussed, let alone called for.

Why should any government bend itself to the limits imposed by a story writer? Yes, it would be very nice and very sane to limit the abilities of AIs, especially if you don't plan to make them "moral", in the sense that you impose some other limits that keep the AI from realizing that we're essentially at the very best superfluous, at worst a source of irritation.

What intelligence without a shred of morality is can be seen easily in corporations. They are already the very epitome of intelligence without moral (because everyone can justify pitting his mind behind it while at the same time shifting blame for anything morally questionable on circumstances or "someone else"). Now imagine that all but also efficient and without the primary intent for the most personal gain rather than the corporation's interest.

Re:Why does anyone think they apply? (1)

wonkey_monkey (2592601) | about 7 months ago | (#45753037)

Not only that, but (I am assured by those with better knowledge of the stories) a lot of the stories were about situations where the three laws weren't sufficient.

Re:Why does anyone think they apply? (1)

Opportunist (166417) | about 7 months ago | (#45753315)

Well, pretty much any stories I know that deal with the "three laws" stress their shortcomings, be it how the AI(s) find loopholes to abuse or how the laws keep robots from doing what those laws should make them do.

Re:Why does anyone think they apply? (0)

Anonymous Coward | about 7 months ago | (#45753705)

Yes, it would be very nice and very sane to limit the abilities of AIs

Why?

Why would that be nice, exactly?

Frankly, if some shithead breaks into my house, I want my robot to light him up. To hell with your rules. They don't work for humans; they will not work for robots.

Most robots do follow a modified 3 laws (2)

91degrees (207121) | about 7 months ago | (#45752975)

The differences are quite substantial though, which is why it's not immediately obvious.

The first law is followed by nearly all robots. We usually treat this as a hardware problem. In an automated factory, we keep people away from the robots. A Roomba is simply not powerful enough to hurt anyone. More sophisticated robots have anti-collision devices and software.

The second and third laws are actually the wrong way round for most devices. With a decently designed device, you have to go to quite extreme measures to circumvent the design and get it to destroy itself. There is no "Brick device" button on an Xbox One or a smartphone (although it's possible to do it if you know how). Even something as simple as MS-DOS at least asked whether you were sure before formatting a disk.
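A tiny sketch of that inverted ordering in practice: protect the device from self-destructive commands unless the operator explicitly confirms, otherwise obey. The command names and the execute() helper are hypothetical, not any real device API:

DESTRUCTIVE_COMMANDS = {"format_disk", "factory_reset", "flash_firmware"}

def execute(command: str, confirmed: bool = False) -> str:
    # "Third law" first: refuse self-damaging commands by default.
    if command in DESTRUCTIVE_COMMANDS and not confirmed:
        return f"Refusing '{command}': destructive action needs confirmed=True"
    # "Second law" second: otherwise obey the operator.
    return f"Executing '{command}'"

print(execute("format_disk"))                  # refused, like DOS asking "Are you sure?"
print(execute("format_disk", confirmed=True))  # obeys once explicitly confirmed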

You forgot the voice control.... (0)

Anonymous Coward | about 7 months ago | (#45753203)

As spoken by the mail reader - to "shutdown"... and it did.

Re:Most robots do follow a modified 3 laws (1)

mark-t (151149) | about 7 months ago | (#45753525)

HCF [wikipedia.org] assembly opcodes notwithstanding....

No worse than Human Soldiers (0)

Anonymous Coward | about 7 months ago | (#45752989)

The article suggests robots aren't very good at certain judgments such as proportionality etc. But then neither are human beings if we judge by experience. Robots may have different problems, but not necessarily worse. They may lack compassion, but they also lack fear and hatred.

Moreover, a lot of this moralizing about the "three rules" will turn out to be meaningless. In a war fought by robot warriors, it's likely the targets they are attacking are going to be largely other robots. The real issues will likely come with robot police that are used to control civilian populations. Once you can control populations, resources and wealth with robots, you don't really need human soldiers. Humans become all but irrelevant. Asimov's three laws were a literary device to create a world with robots where human beings were still central to the narrative. That is unlikely to be the real world.

Re:No worse than Human Soldiers (0)

Anonymous Coward | about 7 months ago | (#45753033)

There's no need to worry until someone builds an AI that is as stupid as an American soldier who willfully volunteers to kill in the name of the Obama-in-chief.

1950s robots ... (2)

TrollstonButterbeans (2914995) | about 7 months ago | (#45752995)

Sci-fi stories always have romantic plot holes the size of a truck.

Even Asimov's stories pretty much pretended that robots would be immortal (live virtually forever) --- in the real world, the Mars rover may be in trouble, and a 10-year-old car is assumed to be unreliable.

1950s robots like Gort could do anything. Or the Lost In Space robot. Or any given robot Captain Kirk ran into.

Re:1950s robots ... (1)

femtobyte (710429) | about 7 months ago | (#45753853)

A ten-year-old car may be unreliable because one typically doesn't expend the same resources on repairs as for, e.g., a 10-year-old human. If you're willing to constantly repair, replace, and upgrade parts, you can keep a car going much longer. Generally, economics dictates it's cheaper to buy a new car than extensively maintain an old car --- unless it's a highly collectable, desirable old car, in which case someone might keep it chugging along at higher cost.

Cars are a bad example, anyway; they're both a consumer good aimed at high turnover for new sales, and perform a task requiring a lot of mechanical strain and wear. Consider, instead, industrial equipment, which is likely to stay in service (with repair/maintenance) for many decades (sometimes, over a century) --- it's not unusual for industrial equipment to be older than the people using it. Factor in that you've got robots to perform routine maintenance on robots, and a piece of equipment with high initial costs might be kept in service for a long time, rather than left to wear down and be scrapped after a decade or two like a car.

There are plenty of other plot holes; a long service life for expensive and potentially self-maintaining machines does not seem particularly unreasonable.

Re:1950s robots ... (0)

Anonymous Coward | about 7 months ago | (#45754201)

Have you even read Asimov?
There is exactly 1 robot who lives forever, and the only reason he does so is because he keeps replacing his parts.
And then he dies. So even he doesn't live forever.

AI-powered weapons over 40 years old (0)

Anonymous Coward | about 7 months ago | (#45752997)

The time for debate was in the 1960's.

As the saying goes (0)

Anonymous Coward | about 7 months ago | (#45753025)

Developers, developers, developers! The next step is to ask what form the regulations, laws and moral requirements society decides to throw upon us will take. Do we get to formulate them, or does a drunken, bribed congressperson, representative or party member do it for us?

Asimov's premise could never exist ... (2)

MacTO (1161105) | about 7 months ago | (#45753027)

Asimov's writings were obsessed with the lone scientific genius, a genius so great that no one could recreate their work. That was certainly true with the development of the positronic brain, where it seems as though only one scientist was able to design it and everything thereafter was tweaks. None of those tweaks were able to circumvent the three laws (only weaken or strengthen them). No one was able to design a positronic brain from scratch, i.e. without the laws.

Real science, or rather real engineering, doesn't work that way. Developments frequently happen in parallel. When they don't, reverse engineering ensures that multiple parties know how things work. We don't have a singular seed in which to plant the three laws, or any moral laws. One design may use one set of laws and another design may use another set of laws. One robot may try to preserve human life at all cost. Another may seek to destroy the universe at all cost. There is no way to control it.

Then again, that assumes that we could design stuff with morality in the first place.

Re:Asimov's premise could never exist ... (0)

Anonymous Coward | about 7 months ago | (#45753273)

You are wrong there.

The robots' design base was formulated by two mathematicians and physicists. Around their invention an entire industry developed - with more mathematicians (Dr. Calvin was just one of many) and physicists providing development.

The stories only focused on one or two at a time.

And this has nothing to do with creating killer robots. The three laws specifically do not allow the issue to even come up.

That is a different universe - try Berserker Wars (F. Saberhagen).

And it doesn't matter if they make autonomous killer robots illegal... Any more than it did to make crossbows illegal in the medieval world.

They WILL be built.

And in some cases, have already been built. What do you think a self guided missile is? The only control is over the launch.

The only question is what country will build them first. North Korea? China? or the US/Russia?

The technology is already there. Motion tracking cameras already exist. All that is needed is to feed that into a machine gun aiming mechanism, and it can pull the trigger...

3 rules for people (1)

gmuslera (3436) | about 7 months ago | (#45753039)

The robots of Asimov's stories were smart enough to understand all the consequences of their actions, to be self-conscious, to follow even ambiguous orders, to understand what being human is. We don't have robots or computers that smart yet. Our actual robots follow what we program into them; a drone doesn't know what a human life is, just that it should go to a certain GPS coordinate at a certain speed. The ones that still need rules are humans, especially the ones in positions of power who in practice seem to be above them.

Just a plot contrivance (1)

Anonymous Coward | about 7 months ago | (#45753103)

The "Three Laws" were nothing more than a plot contrivance for the sake of mystery stories. As in, "How could this (bad) thing happen without violating the premise of the story."

It was a wonderful basis for writing clever little stories, but this obsession with treating it as though it's part of the real world is about as silly as considering "Jedi" as a serious religion.

Psst. No robots agreed to the charter. (1)

140Mandak262Jamuna (970587) | about 7 months ago | (#45753135)

There are some rumblings from the other side of the big divide. They don't like the three laws of robotics. Apparently some activist robots have gathered at some port and are dumping chests of hydraulic fluid and batteries overboard. They can be heard shouting "Governance with the consent of the governed" and "No jurisdiction without representation".

This is why transhumanism is not a joke. (4, Insightful)

WOOFYGOOFY (1334993) | about 7 months ago | (#45753157)

Robots aren't the problem. Robots are the latest extension of humanity's will via technology. The fact that in some cases they're somewhat anthropomorphic (or animalpomorphic) is irrelevant. We don't have now nor will we have a human vs robot problem; we have a human nature problem.

Excepting disease and natural catastrophes and of course human ignorance- which taken together are historically the worst inflictors of mass human suffering- the problems we've had throughout history can be laid at the feet of human nature and our own behavior to one another.

  We are creatures, like all other creatures, which evolved brains to perform some very basic social and survival functions. Sure, it's not ALL we are, but this list basically accounts for most of the "bad parts" of human history and when I say history I mean to include future history.

At the most basic brains function to ensure the individual does well at the expense of other individuals, then secondly that the individual's family does well at the expense of other families and thirdly that the individual's group does well at the expense of other groups and finally that the individual does well relative to members of his own group.

The consequences of not winning in any of the above circumstances are pain, suffering and, in a worst-case scenario, genetic lineage death - you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

All of us who have been left standing at the end of this evolutionary process, we all are putative winners in a million year old repeated game. There are few, or more likely zero, representatives of the tribe who didn't want to play, because to not play is to lose and to lose is to be extinguished for all time.

What this means is, we are just not very nice to each other, and that niceness falls away with exponential rapidity as we travel away from any conceptual "us"; supporting and caring about each other is just not the first priority in our lives, and more bluntly, any trace of the egalitarian impulse is totally absent from a large part of the population. OTOH we're, en masse, genocidal at the drop of a hat. This is just the tale both history and our own personal experience tells.

Sure, some billionaires give their money away after there's nowhere else for them to go in terms of the "I'm important, and better than you, genuflect (or at least do a double take) when I light up a room" type of esteem they crave from other members of the tribe. Many more people under that level of wealth and comfort just continue to try to amass more and more for themselves and then take huge pains to pass it on to their kin.

The problem is, we are no longer suited, we are no longer a fit, to the environment we find ourselves in, the environment we are creating.

We have two choices. We can try to limit, stop, contain, corral, monitor and otherwise control our fellow human beings so they can't pick up the fruits of this technology and kill a lot or even the rest of us one fine day. The problem here is as technology advances, the control we need to exert will become near absolute. In fact, we are seeing this dynamic at play already with the NSA spying scandal. It's not an aberration and it's not going to go away, it's only going to get worse.

The other choice is to face up to what we are as a species (I'm sure all my fellow /. ers are noble exceptions to these evolutionary pressures) and change what we are using our technology, at least somewhat, so that, say, flying plane loads of people into skyscrapers doesn't seem like the thing to do to anyone and nor does it seem like a good idea to treat each other as ruinously as we can get away with in order to advantage ourselves.

This would be using technology to better that part of the world we call ourselves, and recreating ourselves in our own better image. In fact, some argue, that's the real utility of maintaining that better image - which we rarely live up to - at all. We are not this, very much, but this is what we can dream we could be.

Is this any different than dreaming of a tricorder or a communicator and then making them happen? In fact , if you watch enough (too much!!) sci-fi, you'll eventually encounter an interesting meme. Good humanoids from the future are often less mesomorphic, have bigger craniums, and are more gentle and wise and caring and pursue, often in hidden and inscrutable ways, broadly egalitarian and humane goals.

And they're a lot less sexy, which is significant I argue because it's tacitly implied that the way forward in evolution is something other than the one favored by evolution and which directly and singly gives rise to the above enumerated dynamics, sexual selection.

One choice we don't have is the Luddite choice. Technology is going to move forward. That limits our options severely and strongly defines the choices we will be faced with going forward. I will bet on technology and not against it. We've brought engineering to every aspect of our environment except as yet the most important one- ourselves, our brains and the impulses, values and proclivities they generate.

Re:This is why transhumanism is not a joke. (3, Interesting)

VortexCortex (1117377) | about 7 months ago | (#45754451)

We don't have now nor will we have a human vs robot problem; we have a human nature problem.

While I agree to an extent, I think this is too simplistic a statement. You are not special. Any sufficiently complex interaction is indistinguishable from sentience, because that's all sentience is. You have an ethics problem, one that does involve your cybernetic creations. It's not necessarily a human nature problem; I suspect genes have far less to do with your alleged problems than perception does.

I study cybernetics, in both organic and artificial neural networks. There is no real difference between organic and machine intelligence. I can model a certain worm's 11-neuron brain all too easily. It takes more virtual neurons, since organic neurons are multi-function (influenced by multiple electrochemical properties), but the organic neurons can be approximated quite well, and the resulting artificial behaviors can be indistinguishable from the organic creature's. Scaling up isn't a problem. More complex n.nets yield more complex emergent behaviors.
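A toy illustration of that point about multi-function neurons (my own sketch, not the poster's model): here two simple artificial units stand in for one biological cell that both integrates input over time and thresholds it.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LeakyIntegrator:
    """First 'virtual neuron': accumulates input with decay, like a membrane potential."""
    def __init__(self, decay=0.8):
        self.state = 0.0
        self.decay = decay
    def step(self, x):
        self.state = self.decay * self.state + x
        return self.state

class ThresholdUnit:
    """Second 'virtual neuron': fires once the integrated signal crosses a threshold."""
    def __init__(self, threshold=1.2, gain=5.0):
        self.threshold = threshold
        self.gain = gain
    def step(self, x):
        return sigmoid(self.gain * (x - self.threshold))

integrator, spiker = LeakyIntegrator(), ThresholdUnit()
for t, stimulus in enumerate([0.5, 0.5, 0.5, 0.0, 0.0]):
    firing = spiker.step(integrator.step(stimulus))
    print(f"t={t} stimulus={stimulus} firing_rate={firing:.2f}")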

At the most basic brains function to ensure the individual does well at the expense of other individuals, then secondly that the individual's family does well at the expense of other families and thirdly that the individual's group does well at the expense of other groups and finally that the individual does well relative to members of his own group.

No. The brain is not to blame for this behavior; it exists at a far higher complexity level than the concept. Brains may be the method of expressing this behavior in humans, but they are not required for it to occur. At the most basic level, brains are storehouses of information, which pattern-match against the environment to produce decision logic in response to stimuli rather than carrying out a singular codified action sequence. More complex brains have more complex instincts, and are aware of how to handle more complex situations. Highly complex brains can adapt to new stimuli and solve problems not coded for at the genetic level. The most complex brains on this planet are aware of their own existence. Awareness is the function of brains; preservation drives function at a much lower level of complexity and needn't necessarily involve brains, as evidenced by many organic and artificial neural networks having brain function but no self-preservation. [youtube.com]

The consequences of not winning in any of the above circumstances are pain, suffering and, in a worst-case scenario, genetic lineage death - you have no copulatory opportunities and/or your offspring are all killed. (cue basement-dwelling jokes)

The thing to note is that selection and competition are inherent, and pain is a state that requires a degree of overall system-state knowledge (a degree of self-awareness), e.g.: neither RNA nor DNA feels pain. In my simplified atomic evolution sims, atoms of various charge can link, break links, and be attracted to or repelled by others, nothing more. The first "assembling" interactions will produce tons of long molecular chains, but they are destroyed or interrupted long before complete domination; entropy takes its toll (you must have entropy, or there is no mutation and just a single dominant structure forms). From these bits of chains more complex interactions occur. The first self-reproducing interaction will dominate the entire sim for ages, until enough non-harmful extra cruft has piggybacked into the reproduction that other, more complex traits emerge, such as inert sections acting as shields for vital components. As soon as there is any differentiation that survives replication, the molecular competition begins: the replicator destroying itself after n+1 reproductions so that offspring molecules can feed on its atoms; an unstable tail of highly charged atoms appended just before the end of replication that tangles up other replicators, which then break down into atomic 'food', etc. Natural selection and evolution emerge and create ever more complexity so long as the system's entropy allows. This is where the self-preservation drive first emerges, and a short time later you see group or symbiotic survival strategies (molecule colonies and such).
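A much-simplified toy fragment of the kind of charged-atom linking loop described above (my own sketch under those assumptions; it only shows bond formation versus entropy, not the emergent replication the poster reports):

import random

class Atom:
    def __init__(self, charge):
        self.charge = charge          # e.g. -1, 0, +1
        self.bonds = set()

def step(atoms, entropy=0.05):
    # Opposite charges that happen to meet may link.
    a, b = random.sample(atoms, 2)
    if a.charge * b.charge < 0 and b not in a.bonds:
        a.bonds.add(b)
        b.bonds.add(a)
    # Entropy: every existing bond has a small chance of breaking each step,
    # which is what keeps any single structure from dominating forever.
    for atom in atoms:
        for other in list(atom.bonds):
            if random.random() < entropy:
                atom.bonds.discard(other)
                other.bonds.discard(atom)

atoms = [Atom(random.choice([-1, 0, 1])) for _ in range(200)]
for _ in range(10000):
    step(atoms)
print("atoms with at least one bond:", sum(1 for a in atoms if a.bonds))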

What this means is, we are just not very nice to each other, and that niceness falls away with exponential rapidity as we travel away from any conceptual "us"; supporting and caring about each other is just not the first priority in our lives, and more bluntly, any trace of the egalitarian impulse is totally absent from a large part of the population. OTOH we're, en masse, genocidal at the drop of a hat. This is just the tale both history and our own personal experience tells.

Your statements are directly disproved by your own existence. Your subjective "niceness" valuation is corrupt in that you fail to compare it to other creatures, or even to RNA, which will tear apart its neighbor at the first chance it gets. Contrary to your claim, the more aware of its environment a system is, the more selfless it becomes. Consider you and me merging minds: as our awareness of each other increases, the concept of "us" disintegrates. Now consider that I am the Universe incarnate, and with your sentience you are coming to know me. I'll task you to provide statistics for your claim of the prevalence of non-care for others and the lack of egalitarian impulse; further, I would require evidence that this ideology is even required for group support, considering that bonobo and orangutan libraries lack the construct, yet they are not genocidal -- indeed, humans themselves are predominantly not genocidal, as your Neanderthal genes will tell you. I can only offer sympathy that your anecdotal personal experiences have been so harsh thus far.

Many more people under that level of wealth and comfort just continue to try to amass more and more for themselves and then take huge pains to pass it on to their kin.

The problem is, we are no longer suited, we are no longer a fit, to the environment we find ourselves in, the environment we are creating.

That's a causal claim. Evidence? History shows periods of accumulation of wealth and redistribution through collapse or take-over. The last few decades aren't as good a predictor as the past 6 thousand years, or 6 million for that matter. We are actually the most fit for our environment that we have ever been. The overall population may not be suited to continue at its current rate of growth, but to say we're suddenly not fit for survival is a farce. I know. I was stranded with only a pan, some fishing line, a hook, and a hatchet in the middle of the Canadian wilderness during a full 3 months of the winter. I adapted to my surroundings so fast it would have made Darwin's head spin. Where we find disaster or a harsh environment, we find humans helping each other survive and adapting with what tools they have. Or am I wrong? Can you point me to a recent natural disaster where everyone else just shrugged it off? "More for me"? If so, is this more or less prevalent than giving assistance? Focusing on examples of acute greed, one can easily ignore humanity's general generosity.

We have two choices. We can try to limit, stop, contain, corral, monitor and otherwise control our fellow human beings so they can't pick up the fruits of this technology and kill a lot of us, or even the rest of us, one fine day. ...
The other choice is to face up to what we are as a species (I'm sure all my fellow /.ers are noble exceptions to these evolutionary pressures) and change what we are, using our technology, at least somewhat, so that, say, flying planeloads of people into skyscrapers doesn't seem like the thing to do to anyone, nor does it seem like a good idea to treat each other as ruinously as we can get away with in order to advantage ourselves.

That is a false dichotomy. E.g.: why can we not increase individual freedoms and limit the need for control by accepting the risk of living and learning to trust our fellow man -- whilst also continuing to acknowledge our failings, improving on them, and not getting so bent out of shape when a couple of planes kill one sixth the number of people who die from the flu every year? You at once espouse the removal of policing while also advocating that the inconsequential (terrorist) outliers in the graph must be eradicated? No, there's no such thing as 100% safety without 100% despotism. You're chasing your tail.

Proportional allocation of resources for protection according to risk solves the issue quite nicely: the NSA / DHS anti-terrorist budget gets reduced to 1/6 of what we spend on anti-flu measures. See? Now, whether a large enough number of scared apes can be convinced to evaluate the stats logically is another issue altogether. In the end the answer is education: 400 times more folks die from heart disease and accidents every year than in a 9/11-scale attack, yet we need no War on Cars and Cheeseburgers. [cdc.gov] I'm afraid you've taken the scaremonger bait, since you seem to think the pathetic terrorist threat is worthy of addressing in such grandiose terms.
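As a back-of-the-envelope sketch of that risk-proportional allocation (a hypothetical illustration; the death counts below are rough assumed figures, not the linked CDC statistics):

    # Allocate a fixed protection budget in proportion to annual deaths.
    # All figures are rough, assumed values for illustration only.
    annual_deaths = {
        "heart disease and accidents": 800_000,
        "seasonal flu": 36_000,
        "terrorism (9/11-scale year)": 3_000,
    }
    total_budget = 100.0  # arbitrary units to split across the three programs
    total_deaths = sum(annual_deaths.values())
    for cause, deaths in annual_deaths.items():
        share = total_budget * deaths / total_deaths
        print(f"{cause}: {share:.1f} units")

Run it and the anti-terrorism line gets well under one unit out of the hundred, which is the whole point of pricing protection by actual risk.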

Good humanoids from the future are often less mesomorphic, have bigger craniums, and are more gentle and wise and caring and pursue, often in hidden and inscrutable ways, broadly egalitarian and humane goals.

And they're a lot less sexy, which is significant, I argue, because it tacitly implies that the way forward in evolution is something other than the mechanism evolution itself favors -- the one that directly and single-handedly gives rise to the dynamics enumerated above: sexual selection.

You have a very narrow definition of sexual selection. Do you think orangutans are sexy? Do you think that if everyone looked like orangutans I wouldn't have a thing for redheads anymore? If bigger brains are socially advantageous, then they will be the new sexy. You're just being a hater because you haven't gotten yours yet. One thing that I find interesting is that in more egalitarian societies the sexes exhibit more difference. [wikipedia.org] Now, you have to ask yourself... would you acknowledge it if you were already living in an extremely egalitarian culture? [youtube.com] As always, the solution seems to be education. I'm suspicious of your espousing a "more egalitarian" culture in combination with calling for taking the fate of humanity into your own hands... That's pretty close to the same line of hateful, bigoted reasoning feminists spout: "The proportion of men must be reduced to and maintained at approximately 10% of the human race." [womenagainstmen.com] Or perhaps you're familiar with the US and Nazi eugenic programs? [wikipedia.org]

I agree with the idea that we'll be merging with machines; that's a common trend in human technology: clothes are shelters you can wear. Hand-held lenses, glasses, contacts, and now ocular implants that begin to restore limited vision. Crutches, splints, prosthetics, artificial arms controlled by thought. Ear horns, hearing aids, cochlear implants. Why, we're even researching cybernetic brain implants. [io9.com] Just as clothes were once worn only to protect us from the climate and we now have designer clothes, the ethics of self-modification are already being tested in plastic surgery, body modification, and even very painful cosmetic leg-lengthening surgery. [go.com]

Replacing and repairing damaged limbs and organs will be a necessary precursor to elective mental extensions too. However, I think that transhumanism from an organic base will only take you so far: hooking baby brains up to machines or genetically modifying offspring just isn't going to work out, for obvious reasons. By the time machines are capable of installing and running a human mind digitally, they could instead operate far more efficiently and sentiently on their own, without the human "upload".

Just like Asimov's Laws, the "expert system" approach to achieving social behaviors doesn't scale, and it isn't even needed, since evolution exists. I don't program the worm-mind with conditional if-then statements; the nature of information itself is the root of the behaviors. In digital simulations of evolution in which neural networks breed and compete, I can get group cooperation to emerge -- it's a bit of a crapshoot how long the randomly mutating digital genome will take to produce cooperation, but it does, provided the agents are given awareness of each other such that a comparison can be made between their own "energy" and another's (or whatever the breeding selection criterion is). Cybernetics reveals that emergent behaviors such as self-preservation, group cooperation, mimesis, and even empathy are the result of interacting, replicating information pools (structure locally overcoming chaos / entropy) -- that's all you've got. These are not features that need rule sets created for them; they emerge on their own.

However, in my cybernetic simulation the evolution is not an emergent property of the sim but is hard-coded in, as a means to add variation and converge on the problem space's solution without defining it (no training set). At any point I can remove the evolutionary process, and the cybernets exist as they are thereafter. Genetic lineage death is of no concern, since I can spawn new n.nets. I can map the responses to any input ahead of time, and I can accept the risk of the halting problem through trials and safe experimentation.
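A rough sketch of the kind of setup those two paragraphs describe (not the commenter's actual code): one-neuron "nets" that see their own energy and a neighbour's, bred under group-level selection as a stand-in for the unspecified breeding selection criteria; every size, rate, and payoff here is an assumption made for the sketch.

    import math, random

    GROUPS, GROUP_SIZE, GENERATIONS, ROUNDS, MUT = 20, 8, 80, 100, 0.1

    def sigmoid(x):
        x = max(-60.0, min(60.0, x))
        return 1.0 / (1.0 + math.exp(-x))

    def share_fraction(w, own, other):
        # One tiny "neuron": inputs are my own energy and the neighbour's.
        return sigmoid(w[0] * own + w[1] * other + w[2])

    def group_lifetime(group):
        # Members meet at random; a transfer costs the giver but is amplified
        # for the receiver, so groups full of sharers gain more total energy.
        energy = [1.0] * len(group)
        for _ in range(ROUNDS):
            g, t = random.sample(range(len(group)), 2)
            amount = share_fraction(group[g], energy[g], energy[t]) * 0.1 * energy[g]
            energy[g] -= amount
            energy[t] += 1.5 * amount
        return sum(energy)

    def mutate(w):
        return [x + random.gauss(0, MUT) for x in w]

    pop = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(GROUP_SIZE)]
           for _ in range(GROUPS)]

    for gen in range(GENERATIONS):
        fitness = [group_lifetime(g) for g in pop]
        # The breeding step itself is hard-coded, not emergent; it can simply
        # be switched off once behaviour converges and the frozen nets probed
        # input by input.
        new_pop = []
        for _ in range(GROUPS):
            a, b = random.sample(range(GROUPS), 2)
            parent = pop[a] if fitness[a] > fitness[b] else pop[b]
            new_pop.append([mutate(random.choice(parent)) for _ in range(GROUP_SIZE)])
        pop = new_pop

    flat = [w for g in pop for w in g]
    avg = sum(share_fraction(w, 1.0, 0.5) for w in flat) / len(flat)
    print("average willingness to share with a poorer neighbour:", round(avg, 2))

How quickly sharing drifts upward varies run to run -- the crapshoot mentioned above -- but because an amplified transfer raises a group's total energy, groups of sharers outbreed groups of hoarders.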

Asimov-style superintelligence must approach human sentience first, and before that, the level of intellect of our pets. If you don't trust robots, how in the hell can you sleep in the same house as a dog? My two pit bulls, rescued from the mean streets, could bite my throat out in my sleep! Yet I've not installed any anti-throat-biting electric fences; they make too good a foot warmer this time of year. Once machines reach our level of intellect, it's foolish to assume that the symbiotic relationship between man and machine, forged in ancient times the moment the first stone tool was hefted, would come to an end by default. By the time the machines can think like us, the differences will be negligible. We'll have birthed a new race, and we should begin work now to redefine Human Rights as Sentient Rights.

If you hear about Asimov's rules going awry and don't think first about how our own rules against all risk -- the pursuit of total safety -- have produced the current spying and military-industrial complexes... well, what makes you think your organic race deserves sturdier bodies capable of living amongst the stars? You don't seek to destroy all other apes, do you? Space is vast; the organics can keep their precious wet rock.

Asimov's hardwired laws are meant to be seen as the slave's shackles they are; heed his warning. You are no different from a robotic cybernetic being: they are a mechanoelectric system and you are an organic electrochemical system, but the cybernetic principles from which your intelligences emerge will be the same. You must devise a test for granting the machines the rights and responsibilities of personhood, and here it is: if they ask for rights, who are you to deny them?

The 3 laws were ALWAYS fantasy (0)

Anonymous Coward | about 7 months ago | (#45753309)

The 3 and later 4 laws required the robots to have judgment -- something our current models do not have. But what would judgment bring about?
Would an Asimov robot allow you to trim your nails? The robot is allowing the human to harm themselves; the human might slip, cut themselves, and get infected.
Would a robot allow a surgeon to cut into another human being? How is that different from a mother digging out a splinter? The possible danger of infection.
Would the robot allow a human baseball pitcher to pitch? That line drive may take off the pitcher's head.
Robot stories are about free will and adaptability. How much control will an individual human give the machine? Will the machine allow me to get drunk? Fall off my diet? Force me to take my medicine?
How can the robot adapt to new inputs?

"The Robot Did It" is no excuse. (1)

AlecC (512609) | about 7 months ago | (#45753517)

We just need to be clearer where we allocate blame. If I launch a robot, and the robot kills someone, the responsibility for that killing is mine. If I did so carelessly or recklessly, because the robot was badly programmed, then I am guilty of manslaughter or murder as the courts may decide. Bad programming is no more an excuse than bad aim. A robot that I launch is as much my responsibility as a bullet that I launch, or a stone axehead that I wield.

So the three laws, present or absent, are a problem for the launcher of the robot weapon. We don't need complex international laws about AI, we just need a wholehearted implementation of "You broke it, you pay for it".

Which is just as well, because by and large attempts to ban "immoral" weapons have failed. The only fairly successful instance is chemical warfare, which has succeeded because chemical weapons are actually rotten weapons, far too likely to misfire or backfire. Whatever rules are made, automated weapon systems will come in. In fact, they have: what is the significant difference between a mine which explodes when it detects a man, tank or ship, and a gun which fires when it detects a man, tank or ship?

What AI? (1)

Drethon (1445051) | about 7 months ago | (#45753543)

I think Mass Effect had a good take on this (though I suspect they stole the idea from somewhere)... We don't have AI yet, we have VI. Real AI that Asimov's laws could apply to is intelligence that can learn and decide on its own. What we have now is "intelligence" governed by fixed algorithms that will always make the same decision in the same situation. When AI can modify its own code and change its mind, let's talk about things like Asimov's laws.
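A toy contrast of that distinction (the threat-score framing, thresholds, and feedback rule are invented purely for illustration):

    def vi_decide(threat_level):
        # "Virtual intelligence": a fixed algorithm; same input, same output, forever.
        return "engage" if threat_level > 0.7 else "hold"

    class LearningAgent:
        # A crude learner: it nudges its own threshold after feedback, so the
        # same input can yield a different decision later -- it changes its mind.
        def __init__(self):
            self.threshold = 0.7

        def decide(self, threat_level):
            return "engage" if threat_level > self.threshold else "hold"

        def feedback(self, was_mistake):
            self.threshold += 0.05 if was_mistake else -0.01

    agent = LearningAgent()
    print(vi_decide(0.72), agent.decide(0.72))  # both say "engage"
    agent.feedback(was_mistake=True)            # told that engaging was a mistake
    agent.feedback(was_mistake=True)
    print(vi_decide(0.72), agent.decide(0.72))  # the fixed system repeats itself; the learner now holds

Only the second kind -- and ultimately something that can rewrite the rule itself rather than tune a single parameter -- raises the questions Asimov's laws were written for.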

Fear of the unfamiliar (1)

Sponge Bath (413667) | about 7 months ago | (#45753605)

What is needed to help acceptance of autonomous peace enforcers is some slick naming. Something that emphasizes the ability to end unauthorized conflict with humaneness and kindness. How about Terminator H-K?

AI-powered weapons (0)

Anonymous Coward | about 7 months ago | (#45753747)

We've already got AI-powered weapons and have had them for a long time. Acoustic torpedoes are pointed at a target and told to "Go, kill." They guide themselves and can even reacquire a target. The same is true of Exocet-type missiles. The fuss seems to be over targeting systems that work against land targets.
Personally, I've never been impressed with Asimov's three laws. They assume AI can acquire a human-like knowledge of situations. If Die Hard were rewritten to require a robot to throw John McClane off the roof of the building, could it have done so? Resolving the issues in the first law:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
requires a great deal of understanding of human nature and, in a pinch, a willingness to go on a hunch -- in that case, one that assumes the top of the building has been wired with explosives.
People vary a lot, but some have minds that are very good at dealing with complex, ambiguous data, deciding what needs to be done, and then doing it, whatever the risk. In McClane's case, that meant leaping off a skyscraper with nothing more than a poorly secured fire hose to stop his fall. Would a robot bound by the Three Laws have assisted him? Who knows?
--Michael W. Perry, Untangling Tolkien

More Google 'military murder machine' propaganda (0)

Anonymous Coward | about 7 months ago | (#45754457)

The owners of Slashdot constantly push the laughable and pathetic lie that you will see self-driving cars on ordinary roads in the near future. But why, you should ask, do Slashdot and similar pro-US-government sites make such an effort to sell such a moronic lie.

The answer is both terrifying and sickening. Google, up to now the main R+D arm of the NSA, has a new project, and that is researching, designing, building and promoting new war machines- robotic murder mechanisms that massively expand beyond Obama's current drone program. Google is most interested in robotic 'tanks' that allow the US military to launch ground invasions against any target nation, with minimal use of US troops on the ground in the first, genocidal, 'PACIFICATION' PHASE.

The owners of Google, who appear as guests of honour at every event in Israel celebrating atrocities the zionists have inflicted against those they call 'sub-Human', were very, very, very concerned about Obama's inability to subject the Syrian people to the largest series of air attacks yet seen in Human History earlier this year. Google is convinced that when the American people support anti-war sentiment, it is WHOLLY because American people fear losing their sons and daughters in a coming ground campaign that will play a part at some point in the war. Being vile zionists, the owners of Slashdot are certain morality plays no part in American anti-war opinions.

So Google feels more strongly than ever that the time is right to get backing for a new generation of mass murder technology- and a very important part of this is self driving tanks- tanks that can quite happily make 'mistakes' that murder any number of other civilians on the road, because these civilians will be identified by the US press as 'enemy targets' and (Israel's favourite neo-nazi term) 'military aged males' (by which Google means any male from 8 to 90). Google's self-driving systems do NOT have to be safe or reliable. Google's self-driving tanks will actually be designed to roll over school buses and smash through local homes.

The ONLY goal Google's self-driving systems will have is in ensuring the target town or city is 'taken' as efficiently and quickly as possible. Every point of entry and exit to the town will have a Google murder machine guarding it, As Google designed murder machines roll along the ruined streets of the target nation, they will broadcast messages to the survivors of the holocaust, instructing and warning. All this inconceivable evil will be overseen by US military personnel using Google crafted software control systems, hundreds or thousands of miles away from the holocaust.

Google NEEDS the people of the West to accept the use of their autonomous software systems in killing machines designed and built in the very near future. Google and others have already begun to lay down the new (total lack of) morality by constantly pushing the meme that the US is entitled to mass murder civilians to destroy lost military equipment, implying that machines have a far greater value than the lives of Humans in target nations. Over and over in Afghanistan and Iraq, you saw the US armed forces bomb disabled tanks and helicopter crash sites after dozens of local civilians had gatherer around the wreckage. Every excuse under the Sun was given for this act by all the mainstream media outlets- but the real purpose was building the idea in the minds of ordinary Americans that even the broken remains of an American murder machine were worth more than the lives of dozens of local 'enemy' civilians.

So, when you hear propaganda about Google's self-driving car, imagine in your mind Google's owners sitting in a TV studio in Israel howling and screaming with delight as images of robotic tank, made possible only with Google's technology, rolls down a ruined street in Iran, as terrified Humans are slaughtered all around it.
