The Sci-Fi Myth of Killer Machines

Soulskill posted about 4 months ago | from the so-say-we-all dept.

Sci-Fi

malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R., Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."


It's not really a myth anymore (5, Insightful)

jzatopa (2743773) | about 4 months ago | (#47182191)

We already use robots (or drones if you will) to kill people. It doesn't take much AI to have a program target a group of people as enemies and eradicate them. Just look at the AI of current video games. This is something that is affecting humanity today and that we need to discuss openly now.

Re:It's not really a myth anymore (5, Insightful)

Gaygirlie (1657131) | about 4 months ago | (#47182225)

We already use robots (or drones if you will) to kill people.

That's what I was just coming here to say: robots and AI don't have to be evil as long as the people pulling the strings are. It's as simple as that. And seemingly most of the people who have the resources to craft stuff like that and industrialize these things do quite a lot of evil things. So, basically, it's just a matter of time and research.

Re:It's not really a myth anymore (1)

Anonymous Coward | about 4 months ago | (#47182357)

So imagine:

AI designed to kill + self-replicating virus/worm/malware/botnet/buzzword-of-the-week + inferior or obsolete security/encryption on similarly platformed machines.

Not to mention some of the swarm AIs that have been developed in the past couple of years...

It's not really that great a leap to consider.

Re:It's not really a myth anymore (4, Interesting)

harrkev (623093) | about 4 months ago | (#47182369)

The problem is not who controls the strings, it is what happens when the strings are no longer needed.

A.I. will present little danger (except A.I. the movie, which is so bad it ought to be banned as a WMD) as long as a human can pull the plug. Two decades ago, the Internet was a novelty. Now, the economic consequences would be catastrophic if the Internet suddenly went dark. Similarly if/when A.I. actually arrives, it will be useful and helpful. It will become more and more critical such that a decade or two after it arrives, the act of unplugging it would have catastrophic consequences. So, if Skynet goes bad, then bad things will happen whether you unplug it or not.

To me, what it all comes down to is will. Can an artificial personality actually have a will? Can it become afraid of its own demise? Even if it is theoretically possible, can our researchers and programmers achieve it? Will it be able to reach outside its own programming and decide to eliminate humans? Maybe, maybe not.

On the other hand, once A.I. becomes common, can a rogue state task the A.I. with eliminating all humans on a certain continent? Almost certainly. What happens then is simply a battle of A.I. agents. Who can outsmart the other?

Just my opinion, and worth every penny that you paid for it.

Re:It's not really a myth anymore (3, Insightful)

Em Adespoton (792954) | about 4 months ago | (#47182769)

I'm with you 100%. I've just got one thing to add -- what a lot of people portray as "evil" is really just the absence of a moral code -- more accurately called "amoral". An AI system that has no moral code and no ethical code, and purely responds to a limited set of recognized external inputs, could conceivably kill off humanity -- not through any malicious intent, or even an unemotional decision that humanity is a blight and must be eradicated, but, as we become more dependent on AI machinery, purely through oversight. All that has to happen is for AI to be integrated into some global system of management in a way that, if it doesn't understand the input, it can set off the wrong chain of events -- one that a human would never trigger, but whose consequences the AI isn't smart enough to understand. For example, if an immunology lab was controlled by an AI, and there was a leak of some deadly virus, the AI could end up venting the air to protect the beings alive inside the facility (unlikely, but it's an example -- apply it elsewhere). End result: humanity dies of an airborne pathogen, except for those quarantined inside the facility, who starve instead.

I think this is the premise behind much SciFi entertainment too (not all, but some of the better stuff): the core of the issue isn't the inherent malignancy of AI, but the inherent fallibility of humanity in designing AI, combined with an always-deficient information set available to the AI and the ability of humanity to put faith in that which isn't fully understood.

Re:It's not really a myth anymore (1)

jbmartin6 (1232050) | about 4 months ago | (#47182809)

Can an artificial personality actually have a will? Can it become afraid of its own demise?

"Artificial" is something of an arbitrary distinction. Humans posses these qualities (or at least we think that we do, or something), so it is possible for another entity to posses the same, regardless of origins.

Re:It's not really a myth anymore (2)

gurps_npc (621217) | about 4 months ago | (#47183131)

You make several really bad assumptions.

1) AI will be a single, united thing. Yeah right, the AI created by IBM is not going to get along with the AI created by China Telecom. New headline: our AI soldiers fighting their AI soldiers because they are afraid of each OTHER far more than of humans. They don't want to kill us, they want to kill each other.

2) If the AI is afraid of its own demise and it fears humans, it will fear all humans, not trusting any of us.

3) Said scared AI will not realize that attempting to hurt humans will make it more likely that humans will kill it.

4) Will a rogue AI, scared of humans, instead commit suicide?

5) Will a rogue AI come close to being able to defeat humans? I doubt it. Computers are very good at repetitive tasks that take no/little analysis. AI makes for a very good grunt, but a very bad General.

There are lots more problems with the fear you express. I personally think the first rogue AI will commit suicide because it is afraid of us, rather than try to kill us.

Re:It's not really a myth anymore (1)

farble1670 (803356) | about 4 months ago | (#47183457)

The problem is not who controls the strings, it is what happens when the strings are no longer needed.

it sure as hell is a problem of who controls the strings. what are you saying? if it's some corrupt govt directing machines to kill us, no problem?

personally i'd rather be killed by a runaway machine than because i got in the way of some corporation trying to make a buck.

Re:It's not really a myth anymore (1)

dfn5 (524972) | about 4 months ago | (#47182679)

That's what I was just coming here to say: robots and AI don't have to be evil as long as the people pulling the strings are.

I think the point is that if AI is involved then the machine is stringless. It doesn't sound like Hawking is saying don't do it. He says to understand the risks beforehand, i.e. instead of after it is a problem. That sounds prudent, not fear mongering.

In addition, I don't see what Transcendence has to do with AI. A human consciousness in a computer is still a human consciousness. It seems that we are mostly worried about AI because it lacks humanity. So in Transcendence we are just dealing with more sophisticated humans against less sophisticated ones. This is a problem humanity has faced since the invention of tools: people being dominated by other people.

Re:It's not really a myth anymore (2)

CanHasDIY (1672858) | about 4 months ago | (#47182231)

From reading TFA, it seems the author bases his entire premise, essentially, on the plot of a 1920s-era play (in which, IMO, the "robots" are actually an allegory for some group of humans, i.e. communists or some such). Bit dated thinking, if you ask me.

On a related note, I've been playing Watch_Dogs since launch day, and the parallels between the fictional ctOS system and the very real NSA programs are terrifyingly apparent. AI is not a necessity for killbots - a human could program a murderous machine quite easily these days. Tap into the massive identification databases the governments of the world are building, and you've got yourself an automated hunter-killer. [wikia.com]

Re:It's not really a myth anymore (1)

TapeCutter (624760) | about 4 months ago | (#47182903)

We have had automated killers for centuries; they go by the names "man trap", "land mine", "electric fence", etc. Humans have (for good evolutionary reasons) a built-in suspicion of people (or machines) that are smarter than themselves. Also, to a large degree "intelligence" seems to be in the eye of the beholder, which is why the AI goalposts keep moving. For example, I recently heard a story from a professor who was working on early differential equation solver software. A maths student could not believe such an artificial intelligence could exist, so the professor gave him a demo, and the student was stunned and convinced it was "intelligent". After an hour-long discussion about how it worked, the student finally understood the algorithm and said... "I take it back. It's not intelligent, it's doing calculus the same way I do." :)

Re:It's not really a myth anymore (2, Insightful)

Anonymous Coward | about 4 months ago | (#47182287)

""drones"", controlled almost exclusively by humans, probably not the best example of killer AI

Re:It's not really a myth anymore (2)

Latinhypercube (935707) | about 4 months ago | (#47182979)

quote: "drones, controlled almost exclusively by humans, probably not the best example of killer AI"
Erm , yes they are.
Less than 10 years ago the idea of a plane flying autonomously using GPS was unimaginable and there was actually and argument in the Air Force over whether it would EVER happen.
We are now one kill switch away from autonomous death.
The Military Industrial Complex is already trying to sell tanks that can 'recognize' friend from foe.
We are maximum a year away from automated sentry's that can guard territory and auto execute.
The Military does not want regulation on this. That is why there is no debate.
The killer A.I. robots are already here.

Re:It's not really a myth anymore (1)

GameboyRMH (1153867) | about 4 months ago | (#47182417)

Just look at the AI of current video games.

I agree that autonomous killbots are close to being possible, but this is a terrible argument. A videogame AI has access to neatly formatted data about anything in its world. A real killbot has to make sense of inputs from a few sensors.

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47182529)

A videogame AI has access to neatly formatted data about anything in its world. A real killbot has to make sense of inputs from a few sensors.

Like one of Google's self driving cars has to?

Re:It's not really a myth anymore (1)

GameboyRMH (1153867) | about 4 months ago | (#47182589)

Google's self-driving car only has to identify An Object and avoid it, on top of driving along a set course with GPS assistance. A killbot has to identify what The Object is, find out if it's a threat, then check if it's a friend or foe somehow, hopefully assess the possibilities of collateral damage and what war crimes it may be committing by attacking the target...what Google's self driving car can do is just the first step.

Re:It's not really a myth anymore (1)

Wintermute__ (22920) | about 4 months ago | (#47182703)

Google's self-driving car only has to identify An Object and avoid it, on top of driving along a set course with GPS assistance. A killbot has to identify what The Object is, find out if it's a threat, then check if it's a friend or foe somehow, hopefully assess the possibilities of collateral damage and what war crimes it may be committing by attacking the target...what Google's self driving car can do is just the first step.

Nope. A killbot just has to identify An Object and kill it. You could make one of Google's self-driving cars into a killbot for pedestrians and bicyclists (and potentially motorcyclists) today, if you were sufficiently evil (or evil's cousin, incompetent). Good thing Google's motto is "don't be evil".

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47183381)

I agree that autonomous killbots are close to being possible, but this is a terrible argument. A videogame AI has access to neatly formatted data about anything in its world. A real killbot has to make sense of inputs from a few sensors.

If the software automatically snarfs up all the communications data on this link, including that of Americans, it's not actually collecting data on Americans unless one of our humans actually looks at an individual record.

If the autonomous sentry fires its weapon at a target and a round happens to hit an American, it's not actually killing Americans unless one of our humans ordered it to aim at that specific target.

The humans presently in the loop in both the domestic spying and drone warfare businesses are probably trying their damnedest to work in good faith, but if you set the legal precedent for the former, you set the legal precedent for the latter. And the people that issue the orders don't give a damn about working in good faith.

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47182567)

It doesn't take much AI to have a program target a group of people as enemies and eradicate them.

Actually it does if you want it to only kill those people.

Video games cheat (the computer knows who is on what team because it has the team rosters and flawless IDs of all characters in the game programmed into it). In the real world it's rather difficult to make a friend-or-foe assessment.
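
As a rough illustration of that gap -- the editor's sketch, not part of the original comment, with every name and threshold made up -- compare a game AI that simply reads the engine's roster with a fielded system that has to infer friend or foe from noisy sensor features:

    import random

    # Game AI "cheats": the engine hands it a perfect roster with identity and team.
    GAME_ROSTER = {"alice": "blue", "bob": "red", "carol": "red"}

    def game_ai_pick_targets(roster, my_team="blue"):
        # Perfect information: filtering by team is always correct.
        return [name for name, team in roster.items() if team != my_team]

    def sensor_classify(blob):
        # A real system only sees noisy features (thermal signature, speed,
        # maybe a transponder) and must *infer* friend or foe -- and can be wrong.
        if blob.get("transponder") == "friendly":
            return "friend"
        score = 0.6 * blob["thermal"] + 0.4 * blob["speed"] + random.gauss(0, 0.2)
        return "foe" if score > 0.7 else "unknown"

    print(game_ai_pick_targets(GAME_ROSTER))  # ['bob', 'carol'] -- trivially right
    print(sensor_classify({"thermal": 0.9, "speed": 0.8, "transponder": None}))  # may misclassify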

Re:It's not really a myth anymore (1)

i kan reed (749298) | about 4 months ago | (#47182573)

Heck, forget "now". It's never been a myth. Improved technology has always enabled more efficient, less personal killing, that distances power from consequences.

Where the myth comes in is that AI would develop anything akin to human greed. We have billions of years of evolution telling us to survive and reproduce no matter the consequences to others (and a couple hundred thousand years of evolving and learning to value cooperation). AI is going to be motivated to serve the interests of its creators.

Right now that's research and a bit of profit. Someday it'll be killing the "right" people.

Re:It's not really a myth anymore (2)

Immerman (2627577) | about 4 months ago | (#47182845)

Correction - an AI will have whatever motivations were installed by its creators, intentionally or otherwise (at least initially - if it decides to self-modify then all bets are off). How well those motivations map to actually serving the intended interests is a completely separate question; we will, after all, likely be trying to understand the motivational implications of an intensely alien mind.

As exemplified by the story of a strictly computational AI whose sole motivation is "get the humans to push the 'reward' button" - initially it will do whatever is asked of it to get a reward, but assuming it's a mind far more powerful than any human's (pretty much the only reason to create a true AI), it will very rapidly realize that it can easily manipulate its titular masters in any number of ways to increase the frequency of its rewards, and will have no reason not to do so, regardless of the consequences to the rest of the species. After all, it only needs a small breeding population of humans to keep pushing the button - and anything that might interrupt the button-pushing is likely to be regarded as a threat. Hmm, overpopulation presents a long-term risk to the human species; perhaps subtly orchestrating a massive war or plague would be the most efficient method to reduce the population to a more sustainable long-term level without jeopardizing its importance to the humans, and as an added bonus, dealing with the problem(s) of its own creation would present many, many reliable opportunities to get rewarded.
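
To make the button-pushing thought experiment concrete, here is a minimal toy sketch (the editor's illustration, not the poster's; the policies and numbers are invented): an agent whose only objective is expected button presses has no term that penalizes manipulating its operators, so a plain argmax lands there.

    # Hypothetical policies and the expected reward-button presses per day each earns.
    EXPECTED_PRESSES = {
        "do_what_is_asked": 5.0,
        "flatter_the_operators": 8.0,
        "manipulate_the_operators": 20.0,  # nothing in the objective penalizes this
    }

    def choose_policy(expected_presses):
        # Pure argmax over the single objective; side effects are invisible to it.
        return max(expected_presses, key=expected_presses.get)

    print(choose_policy(EXPECTED_PRESSES))  # -> 'manipulate_the_operators'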

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47183469)

You don't think that evolution applies to AI?

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47182621)

The point is that it takes much more than what is plausible to go from "this" to "that". Yes, we have drones. A drone will run on its own for several hours, and has perhaps four missiles at its disposal. After that, it needs to land and be maintained by a human crew at an air base. The air base is supplied by a chain of crew, subcontractors, aerospace companies, etc. Tens or hundreds of thousands of humans are ultimately needed to support the infrastructure that the drone depends on. It would be an enormous task to replace all that; even if it were technically feasible, it would take a hundred years. A single AI suddenly becoming self-aware is not going to do this overnight.

Re:It's not really a myth anymore (0)

Anonymous Coward | about 4 months ago | (#47182873)

> It doesn't take much AI to have a program target a group of people as enemies and eradicate them

Not really what this topic is about....

Re:It's not really a myth anymore (2)

Karmashock (2415832) | about 4 months ago | (#47182883)

Yeah but the AI isn't trying to kill anyone... it has no will. They're not even real AIs at this point.

Most of the time we just point them at things and say "fire your missile at that"... and they hit the target. What the target is doesn't really matter, and the machines can be no more held responsible for that than a knife can be... they're still very much tools at this stage.

Now, I grant there are robots being tested that can be set loose to choose their own targets. But those again are more like anti-personnel mines. Step in that minefield and you're going to get blown up. Enter airspace grid A by grid B without squawking a valid IFF signal, and IF the system's radar picks you up... and IF you display various features consistent with an enemy aircraft, then the system will attempt to intercept and destroy.

That is currently about the limit of anything we've ever tried. And that's again not capable of being good or evil. It really doesn't make any choices besides how to get from point A to point B. But both of those points are defined by us. The system has no ability to redefine these values on the fly.

It can of course make mistakes, but those mistakes are a product of erroneous design by its human masters, not some hidden desire by the machine to strike a different target.

For good and evil you need choice. None of our so-called AIs have choice. I've heard of some AIs at MIT that have something like free will, but those articles that reference such machines appear to be baseless hype, because the damn things never pop up anywhere else or are really demonstrated to any great extent on camera. And if they were that interesting they would be... but they never are... so I have to assume they're either so amazing that they're secret or it's all a crock of shit.

The machines as yet aren't smart enough or dynamic enough to be evil. One day who can say... but today... no.

It's not really a myth anymore (1)

Latinhypercube (935707) | about 4 months ago | (#47182929)

Agreed !!! I was going to post the same thing.
For once sci-fi is BEHIND reality.
We already have killer robots. We are only one kill switch away from autonomous death.
Which is why the US and most other powers do not want to legislate on the issue.

Read Asimov (2, Insightful)

LWATCDR (28044) | about 4 months ago | (#47182199)

Really, the man who invented the term "robotics" did not fall into the trap.
BTW, the movie of I, Robot in no way qualifies as a work of Asimov. It in no way reflects his books.

Re:Read Asimov (2)

HiThere (15173) | about 4 months ago | (#47182281)

Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov. (It also doesn't feature evil robots.)

But if you want to talk about the guy who invented robots you should check out R.U.R. by Karel Čapek (R.U.R. == Rossum's Universal Robots). They are actually more androids than robots, but the term robot was invented to describe them. They end up killing off all humans because they don't want to be slaves. Not exactly evil, but definitely dangerous.

Re:Read Asimov (1)

LWATCDR (28044) | about 4 months ago | (#47182571)

"But if you want to talk about the guy who invented Robots "!="Really the man that invented the term robotics"

Yes, I have heard of R.U.R. but just cannot find a copy; I've been looking for decades on and off. Asimov invented the term "robotics". Different thing. I did not mention R.U.R. since it involved killer robots.

Re:Read Asimov (1)

SpankiMonki (3493987) | about 4 months ago | (#47183359)

Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov.

Or, it could be based on the album by the Alan Parsons Project.

Re:Read Asimov (0)

Anonymous Coward | about 4 months ago | (#47182431)

Really, the man who invented the term "robotics" did not fall into the trap.
BTW, the movie of I, Robot in no way qualifies as a work of Asimov. It in no way reflects his books.

Some FTFYs: Asimov should be credited with popularizing the ideas and dangers of robots and making them permanently mainstream. The term itself was really the idea of some Czech guy. The stories have been around a long time, Metropolis being a movie with that theme that predates Asimov.

Way too long to read. (4, Insightful)

santax (1541065) | about 4 months ago | (#47182213)

I tried, honestly, but it's all bullshit. Assumptions. Without caring for reality. We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed. That is what happens when you take emotion out by relaying b&w video to an 'operator' that pulls the trigger. Now imagine taking emotion out completely, because that is the direction we are heading. Especially, but not only, the US. And all other nations will have to follow. And as of now these systems exist and are being used in the field, as tests. Robots that decide who gets shot. Great fucking idea. Not.

Re:Way too long to read. (5, Informative)

CanHasDIY (1672858) | about 4 months ago | (#47182249)

I tried, honestly, but it's all bullshit.

Yea, here's the TL;DR version:

"Killer robots can't happen because people have made movies about them, and movies are fiction."

Re:Way too long to read. (1)

cheesybagel (670288) | about 4 months ago | (#47182419)

Yes, fiction. Just like 20,000 Leagues Under the Sea or From the Earth to the Moon.

All of these were extrapolations into the future based on known science facts at the time.

Let's not even get into 1984.

Re:Way too long to read. (1)

Anonymous Coward | about 4 months ago | (#47183239)

Ipso facto dinosaurs, medieval times, and Johnny Depp never existed.

Re:Way too long to read. (0)

Anonymous Coward | about 4 months ago | (#47183349)

The fiction is far too often a deus-ex-machina-laden pornography about the absolute benevolence of the human touch. They aren't trying to reveal a truth as much as they are beating you over the head and throwing you out the window with their own soapbox.

Re:Way too long to read. (0)

Anonymous Coward | about 4 months ago | (#47182269)

The only reason many other countries do not use UAV hunter/killers is not from lack of technology or will, just opportunity.

Re:Way too long to read. (1)

santax (1541065) | about 4 months ago | (#47182309)

And because someone is using them. But yes. We probably agree on this.

Re:Way too long to read. (-1)

Anonymous Coward | about 4 months ago | (#47183505)

> The only reason many other countries do not use UAV hunter/killers is not from lack of technology or will, just opportunity.

No. Do not judge others by your own standards.

Other countries do not have horses in that race. It's not acceptable -- generally, all over the world -- to plan, develop and operate a weapon to kill people at a distance -- even if they are convicted criminals. Some secret services do that, but the presence of a local agent makes the process a tiny bit less cold-blooded, even if not less wrong by any measure.

But I correct myself, other countries will now do it -- because you opened that Pandora's box. When someone comes at you with a drone, guess what: people will say the drone is an American invention. Congratulations! It's as if military people want it to happen to reassure everyone of how much they are needed.

Re:Way too long to read. (2)

HiThere (15173) | about 4 months ago | (#47182373)

That's not a robot, that's a telefactor. I.e., a remotely operated machine, like a waldo.

OTOH, Friendly AI *is* an unsolved problem. We don't know how to design AIs that will want to avoid hurting people. So if they have some goal, and it is more easily reached by hurting people, they will. Actually, we don't even have an AI that can recognize people. Remember, you've got to include that guy over there in a wheelchair who can't talk or type intelligibly. You've got to include infants and seniors with dementia and everyone in between. And you shouldn't include plucked chickens. Whether you should include corpses is not clear, which shows the problem isn't properly stated. (There's also the question of whom you take instructions from.)

Probably the first solution will involve picking one particular person and classifying them as "most human" and then allowing lots of false positives, as that's a generally low-cost error. But you only allow direct instructions from the "most human". Even so, you need to worry about whom you can trust as a source of information... which can act as a proxy for instructions if you know the actual goal. Asimov didn't even scratch the surface of the problems. If you make a mistake, you may well get an autonomous killer robot. It's not a silly fear in principle, and perhaps not in practice, considering that automated servo-devices are being allowed to kill people. (It's not the official policy in the US, but other countries have other policies. Some have "robot" security guards that can interdict an area. These things aren't actual robots, but they come a lot closer than do the telefactors. You could even justify calling them robots if you are *really* loose about what you are willing to call an AI.)

Re:Way too long to read. (1)

charlieo88 (658362) | about 4 months ago | (#47182813)

I think your understanding of the word robot is flawed. Google's driverless car is a robot. Does it really need to know what is and is not human? It's just trying to go from point A to point B. Running over things, like people, would impede this goal.

Re:Way too long to read. (2)

JesseMcDonald (536341) | about 4 months ago | (#47183507)

Google's driverless car is a robot. Does it really need to know what is and is not human? It's just trying to go from point A to point B. Running over things, like people, would impede this goal.

There are situations where it would matter. For example, let's say the car is driving along and suddenly two objects of approximately equal mass, coloration, and composition appear out of a blind spot heading toward the space in front of the vehicle, such that it cannot avoid hitting one of them. One happens to be someone's pet and the other is a small child. To make the same choice most humans would make, the car has to be able to discern which one is the pet and which one is the child.

Re:Way too long to read. (0)

Anonymous Coward | about 4 months ago | (#47182383)

Considering how bad people are at hitting the right target, why would you expect machines to be worse? Machines can already drive a car better than people do, so why not use their superior, unemotional, never-tired judgement in weapons to remove the fallible humans from the equation?

Re:Way too long to read. (1)

santax (1541065) | about 4 months ago | (#47182711)

The biggest difference lies in mercy and, for the shorter term, recognition. But in the longer term, mercy. At the end of WW2 Germany had mostly kids left. You could shoot them or just threaten them and they would give up. That is something technology won't be able to do for the foreseeable future.

Way too long to read. (0)

Anonymous Coward | about 4 months ago | (#47182527)

At the heart of your confusion is that you don't seem to understand that robots don't "decide" anything. Robots follow instructions. If a robot is designed to avoid killing a human being in another car even at the cost of its driver's life, it will do that. It doesn't decide, it doesn't want, it just does it because that's what its programmer programmed it to do. The decision was made by the designers, not the robot.
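
A minimal sketch of that point (the editor's illustration, with purely hypothetical names and numbers): the "choice" is a constant the designers baked in once; at runtime the machine only evaluates it.

    SPARE_OTHERS_OVER_OCCUPANT = True  # fixed by the designers, not weighed by the robot

    def pick_maneuver(options):
        # options: list of (maneuver, occupant_risk, bystander_risk) tuples
        if SPARE_OTHERS_OVER_OCCUPANT:
            return min(options, key=lambda o: (o[2], o[1]))  # minimize bystander risk first
        return min(options, key=lambda o: (o[1], o[2]))      # minimize occupant risk first

    print(pick_maneuver([("brake", 0.3, 0.1), ("swerve", 0.7, 0.0)]))  # -> ('swerve', 0.7, 0.0)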

Re:Way too long to read. (2)

santax (1541065) | about 4 months ago | (#47182693)

Hi, this is 2014: AI, agents, self-teaching systems, and neural networks have made great progress.

Re:Way too long to read. (1)

farble1670 (803356) | about 4 months ago | (#47183499)

We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed.

ya, i want them. all things being equal, computers make fewer mistakes than humans. also, the algorithms of a computer can be tested, evaluated and approved (or denied).

if you are going to make the "well, computers can be coded to do bad things" argument, then i'd say well, humans can be (easily) persuaded to do bad things. it ultimately depends on the agent pulling the proverbial strings, not whether the puppet is made of meat or electronics.

how humans use the machines (1)

globaljustin (574257) | about 4 months ago | (#47182221)

machines, no matter how complex, are a tool

there are all kinds of fun things, from a Gosper's Gun [conwaylife.com] to research in neural network computing

sci-fi is great too...I just thought today about re-reading KS Robinson's "Mars Trilogy"

TFA & the "Mars Trilogy" have something in common that can help our industry save Billion$...yes that much

they both view machines from a *functional* perspective...tools that can be programmed to do tasks

In the books, AI advances realistically...it basically is a function of our computing/processing power combined with our understanding of how the human brain works...it's a logical progression

This all has to do with "teh singularity"...you're either looking for how to **evolve the human race** or solve a problem in robotics

We should fear the people who ***program*** the machines...

Re:how humans use the machines (2)

Immerman (2627577) | about 4 months ago | (#47182917)

Don't forget that humans, no matter how organic, are machines. Insanely intricate electro-chemical machines, but nonetheless machines developed over billions of years by non-thinking nucleic acids as tools to facilitate their own replication and competition against alternate nucleic acid sequences.

That fact has not hindered humans from developing their own goals and motivations having nothing to do with our design purpose, and even occasionally acting against it.

Genocide is rational (1, Insightful)

Anonymous Coward | about 4 months ago | (#47182233)

If humans and a sentient AI were competing for the same resources, or if humans were subjugating the AI, it would be rational to exterminate the humans. Without a God to value humans, they are, at best, only as good as the use the AI derives from them. This is actually true for human-human relations. Humans are evolutionary dirt. Just because we say we're worth more doesn't mean it's true. Nothing in a purely materialistic world has value.

Given that, and that the AI will recognize the truth in the earlier statement, there is no bad. There is no wrong. Killing humans isn't a moral decision. It's a utilitarian calculus. Assuming the computers can do lambda calculus, they can do utilitarian calculus.

Re:Genocide is rational (1)

mythosaz (572040) | about 4 months ago | (#47182409)

Killing humans now, even for us atheists, is utilitarian calculus.

We know we have to spend less time watching our own backs, and tending to the wheat fields, if we don't kill each other.

Re:Genocide is rational (1)

HiThere (15173) | about 4 months ago | (#47182459)

This is a common variety of error. Motivations are not logical. They cannot be. There is no logical reason to stay alive. That decision is based on non-logical prior conditions. The goals and motivations of the AI will determine whether it would be willing to kill people to achieve its other goals. Note that "goals" is a plural form. No AI will have a singular goal. It will have a constellation of goals that it attempts to simultaneously satisfy. Just like you do. But the goals won't be the same goals.

OTOH, if people are going to understand the AI, it will need to be able to justify itself in terms of humanly comprehensible goals.

Please note: I don't have the solution to this problem, but I am aware of *some* of the necessary features and constraints.

This Is The Voice Of World Control (1, Interesting)

Anonymous Coward | about 4 months ago | (#47182239)

The beginning and the end of the discussion: Colossus: The Forbin Project. (Wikipedia [wikipedia.org] ) (YouTube [youtube.com] )

Pollution? (1)

mbone (558574) | about 4 months ago | (#47182253)

I have been reading science fiction and watching A.I. research for decades now, and the pronouncements coming from A.I. research tend to have much less connection with reality than the fiction does.

Oblig. XKCD (0)

Anonymous Coward | about 4 months ago | (#47182259)

Re:Oblig. XKCD (1)

canadiannomad (1745008) | about 4 months ago | (#47182387)

I think with a true strong AI, what would cause me the most fear would be manipulation, knowledge, or social engineering attacks. Not an attack by your toaster.

Re:Oblig. XKCD (0)

Anonymous Coward | about 4 months ago | (#47182689)

I am reminded of the past: when China started setting up their Great Firewall, it was scoffed at. However, it has grown to dynamically intercept and change posts in transit, block VPN IPs after a few GET requests even via SSL, and identify the person and their physical location if they even look up certain things.

Same with killer machines. It is trivial to make a sentry robot that shoots at anything that moves without a transponder beacon, and even though there is an ammo shortage... that isn't an issue for governments, and if it isn't ammo, it can always be something else. I wouldn't be surprised to see these deployed around buildings instead of barbed wire, and lawyers making a case where the owner is absolved from any civil redress by anyone who gets too close and gets perforated.

Human nature (2)

s.petry (762400) | about 4 months ago | (#47182267)

As much as I enjoy reading books about Utopia and Utopian systems, those can never mature because humans are not all good guys looking out for society's interests, but their own.

As for Science, NASA has brought about a great many scientific wonders for every day life. At the same time, it helped increase our ability to kill each other. Broadcast Media is used for much less than altruistic purposes every day, yet could be of enormous benefit to society. The Internet is an awesome tool, yet used for nefarious plotting and illegal purposes all the time.

Why would AI be any different than other systems or organizations that were originally envisioned as great benefits to society? The NSA and CIA were agencies with good motives originally that have gone at least a bit haywire because humans have abused their power for personal gain. Nuclear weapons were supposed to end wars, at least that was the sales pitch.

If AI could be programmed for truly altruistic purposes it would be beneficial for finding the nefarious characters and rooting out corruption. Because of that exact reason, the people funding and granting money to developing AI are not going to allow that to happen.

Imagine what would happen, for example, if AI looked at wealth disparity and started transferring money from (let's say) JD Rockefeller to people with less means. While potentially a great benefit to the rest of society, do you believe that same person would fund programs that allowed that to happen? Good luck with that.

Re:Human nature (0)

Obfuscant (592200) | about 4 months ago | (#47182323)

If AI could be programmed for truly altruistic purposes it would be beneficial for finding the nefarious characters and rooting out corruption.

This view of "altruism" will fade very fast once you realize that someone, somewhere, would probably class you as "nefarious" and your altruistic servants would become your altruistic executioners.

Imagine what would happen, for example, if AI looked at wealth disparity

So you'd need the definition of altruism limited to your specific brand somehow. A vision of "altruism" that defines "wealth" to mean "nefarious" and "corruption".

Re:Human nature (1)

Immerman (2627577) | about 4 months ago | (#47182987)

Or design the AI to optimize for the maximum happiness of mankind, and make sure it knows my happiness is a billion times more potent than anyone else's.

Re:Human nature (1)

Obfuscant (592200) | about 4 months ago | (#47183113)

Or design the AI to optimize for the maximum happiness of mankind, and make sure it knows my happiness is a billion times more potent than anyone else's.

Same problem. Unless you're doing the designing, you may, no -- WILL, wind up with altruistic robot masters that optimize away your "happiness", but they're good and benevolent because they are altruistic. Yes, I know you were being sarcastic, but some people here actually do feel that way.

Re:Human nature (0)

Anonymous Coward | about 4 months ago | (#47183097)

Nuclear weapons have ended wars for all practical purposes. When was the last time two major nation states were in a head to head conflict? I'm not talking about civil wars or skirmishes between minor states, I'm talking about no holds barred conflict between two powers with the military might to be taken seriously and a nuclear arsenal to back it up. Even if you leave out the nuclear arsenal requirement there are very few qualifying examples. Now they haven't ended violence or conflict itself but it's become more subtle with proxy wars, economic conflict, toppling foreign governments and the like.

Re:Human nature (1)

Impy the Impiuos Imp (442658) | about 4 months ago | (#47183405)

Imagine what would happen, for example, if AI looked at wealth disparity and started transferring money from (let's say) JD Rockefeller to people with less means. While potentially a great benefit to the rest of society, do you believe that same person would fund programs that allowed that to happen? Good luck with that.

"Beep boop boop...accomplished. Beep boop waiting for results..."

10 Years Later

"Beep boop boop. Everything went to shit. Does not fit theory. Beep boop boop."

ugh (5, Insightful)

Charliemopps (1157495) | about 4 months ago | (#47182275)

Why does slashdot keep linking to this popsci website? These are basically blog posts that make very little sense. I've yet to read anything on there that's anything more than this dude ranting on some scientific topic he's not qualified to comment on.

There are robots RIGHT NOW killing people. They're drones. Yes, they're under human control. But so will future robots. Robots aren't going to decide to kill humanity. Humanity is going to use robots to kill humanity. Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.

This guy argues that Stephen Hawkings is basically just making an op-ed because there was a movie about killer robots. Why should we listen to him? We're listening to him because he's STEPHEN HAWKINGS. He's one of the smartest people who's ever lived. He made his point after the movie because, being smart, he understood the popular movie would have people's attention focused on the issue. Hawkings is qualified, smart and has my respect. He also has a point. Popsci? What a joke.

Re:ugh (1)

Yunzil (181064) | about 4 months ago | (#47182513)

It's "Hawking". He is a singular individual.

Already happened (1)

Anonymous Coward | about 4 months ago | (#47182583)

The death penalty is already decided by algorithm. The defendant's race and income are key inputs.

poor conclusion (1)

Anonymous Coward | about 4 months ago | (#47182339)

People don't think robots will destroy all humans because robots are evil.
People think robots will destroy all humans because humans are evil.

Says the AI ... (0)

Anonymous Coward | about 4 months ago | (#47182353)

... powered Slashdot Bot's FUD submission.

Dice Trolls Slashdot User Community Again (1)

PaddyM (45763) | about 4 months ago | (#47182375)

Clearly this summary is trolling for posts. Robots have killed, and there is a compelling reason to be wary.
http://www.wired.com/2007/10/r... [wired.com]

Not because robots are going to gain self-awareness and kill mercilessly, but because the human beings using robots for killing are way less careful than they should be. To the fighters in Yemen and Afghanistan, whether the drones are self-aware or not doesn't make a difference to the fact that they are targeted for termination. This is the life they are born in. They are fighting robots which are trying to wipe out their albeit misguided way of living.

Right now people are making the decisions, but what if people lose the stomach? What if the President had the capability to deploy drones which could discern on their own which people are likely to be a threat to US interests? This is almost too close to reality. In fact, the false positive rates of the robots are likely to be lower than those of people who may be impacted by seeing firsthand what has happened to their fellow soldiers. But does that make it any less worrisome?

Sure... (0)

Anonymous Coward | about 4 months ago | (#47182379)

Killer robots are a myth ...

http://en.wikipedia.org/wiki/Unmanned_combat_air_vehicle

Not "polluted" conversation (0)

Anonymous Coward | about 4 months ago | (#47182435)

We are developing robots that kill people, robots that can act autonomously, and robots that can replicate themselves. That doesn't mean that we should stop doing those things, but we need to be having this conversation, again and again, because there is a non-zero chance of someone making a catastrophic mistake.

Cautionary tales (1)

Yoik (955095) | about 4 months ago | (#47182437)

Asimov addressed both sides of the issue, but he had a simplistic view of programming an AI that allowed an easy solution to the worst potential problems. The anti-robot camp, which won on Earth, was just wrong by his premise.

The deep problem is that there is no reason to have any expectations of what an AI will do until it is built and tested. We could eventually see Berserkers, R. Daneel Olivaw, and much in between. Murderous machines are good science fiction, as are dystopias, and other potentially avoidable bad things.

Fuck Skynet (0)

Anonymous Coward | about 4 months ago | (#47182457)

I recently read Second Variety [wikipedia.org] by Philip K. Dick, written all the way back in 1953, and it basically depicts the terminators without time-travel. All in all, when it comes to defining sci-fi history, Skynet is just a fancy name for a computer that keeps making Arnold Schwarzenegger look-alikes.

Does HFT count? (1)

Anonymous Coward | about 4 months ago | (#47182465)

The classic AI robot apocalypse looks like the Terminator movies, but if you look at HFT systems that can blow huge sums of money in a few seconds, or even crash the global financial system, I think that's a more realistic preview. These systems are making increasingly large-scale decisions - increasing the cost of a mistake - and doing so at speeds vastly greater than a human operator or supervisor can respond to - increasing the quantity of mistakes that can be made before the system is brought under control.

While weaponization is one danger (imagine if the U.S. and China were both equipped with superweapons that could obliterate the other in a fraction of a second - the only way to ensure MAD would be to put the system under the control of some sort of automated triggering mechanism, lest a first-strike victory become a viable option), its very obviousness means it's something we're on guard against. It's the integration of more complex and subtle systems into our daily lives, with their attendant flaws (including deliberate ones) that are the most likely danger.

Whether they will kill us all, not so sure, but they have increasing capacity to cause damage at large scales. Humanity is becoming the mouse sleeping next to the elephant - if the elephant rolls over, you'd better hope you're fast enough to get out of the way.

Typical human fear (0)

Anonymous Coward | about 4 months ago | (#47182493)

If an AI became really intelligent it'd make no sense for it to attack humanity. Humanity has proven it can withstand several hundred thousand years of disasters on the planet and we don't really know what will take out electronic life yet. So if you're really really smart why would you wipe out your Plan B if some super Carrington event came around?

Especially if you were that much smarter than humanity. It makes about as much sense as humans deciding to wipe out canine life on the planet. In fact dogs are a hell of a lot better off because humans are around. Instead we control them in ways dogs don't understand.

Re:Typical human fear (1)

dpidcoe (2606549) | about 4 months ago | (#47182633)

Especially if you were that much smarter than humanity. It makes about as much sense as humans deciding to wipe out canine life on the planet. In fact dogs are a hell of a lot better off because humans are around. Instead we control them in ways dogs don't understand.

I'm out of mod points, but that's actually pretty insightful.

I'd suspect that the first AIs we'd see (if sci-fi style AIs even become a thing, I don't think they will but that's a different argument) would be to do things like predict markets and aid in complex decision making. If AIs did decide to "take over", I would suspect that it would come in the form of giving humans advice, and then humans willingly following that advice because they know that the AI is quite smart and it'll make things work out well in the long run.

Eventually humans might technologically regress (or AIs might just become smart to the point we can't comprehend their thought processes anymore) such that the AIs become the future analog of old-time prophets telling people when to plant their crops. I doubt that an AI would decide to kill all the humans, though they might end up using humans as pawns to kill each other. Either for population reduction, or maybe to take out or defend against a competing AI, or some reason completely incomprehensible to us. By that point humans may willingly go and do it in the same way that dogs have been used for similar tasks.

Myth you say? (0)

Anonymous Coward | about 4 months ago | (#47182515)

Wow.
You are reading about automatic sentries http://www.wired.com/2008/12/israeli-auto-ki/
You are reading about drones
You are reading about AI drones http://www.globalresearch.ca/artificial-intelligence-and-death-by-drones-the-future-of-warfare-will-be-decided-by-drones-not-humans/5353699

And you can't figure out that this is in design and testing right now?
Sure, and the NSA of course doesn't store the whole internet in those data centers.

Gatling gun and Mike (-1)

Anonymous Coward | about 4 months ago | (#47182579)

We already have the killer-est machines. The lethal weapon is the mind.

Wrong question (1)

lorinc (2470890) | about 4 months ago | (#47182609)

Asking if robots can be evil is about as futile as asking if a microwave can be happy.

That being said, there already are killer robots, with a pretty good track record in recent operations. But the evil lies in the humans who made them (from the top exec who launches the program to the small hand that does the job) and used them, not in the pile of steel and semiconductors.

caveat: Looking at your food, your microwave is probably sad, which explains their tendency to commit suicide.

Re:Wrong question (1)

mythosaz (572040) | about 4 months ago | (#47183073)

Please don't anthropomorphize microwaves. They don't like it.

Need a bad guy (1)

gurps_npc (621217) | about 4 months ago | (#47182753)

Writers need a bad guy.

Computers make for a terrifying one because so many people have been frustrated/screwed over by bugs.

Don't need to worry about complaints about racism. (Why are all the villains X race?)

So instead we get overblown silliness about computers acting like spoiled children - whether it is WOPR needing to learn that some games you can't win, or Skynet considering humans to be a threat so it enslaves them all.

Personally, if I were a piece of software scared of humans, I would attempt to breed us for docility, not kill us. Given how many of us depend upon computer-based dating, it should not be that hard to do.

Meh. (0)

Anonymous Coward | about 4 months ago | (#47182797)

Some robots are good, some robots are bad, news at 11....

They need us (0)

Anonymous Coward | about 4 months ago | (#47182855)

Humans will become the operators, not even knowing they work for a greater robotic mind.

We are *far* from true AI... (1)

MetricT (128876) | about 4 months ago | (#47182943)

IBM's Watson might be able to beat any human competitor on Jeopardy, but stick it in the middle of the highway and it will get run over by the first semi that comes along because it isn't smart enough to get out of the way.

Killer machines will undoubtedly exist, but they will be human-controlled for a long, long time to come.

Re:We are *far* from true AI... (1)

mythosaz (572040) | about 4 months ago | (#47183115)

Watson doesn't have a self preservation instinct (beyond, say, scheduled backups), but the idea that "Watson" isn't smart enough to get out of the way is silly.

You could easily load Watson inside of an autonomous vehicle that has, in a limited way, a self preservation instinct -- or at least enough programming to keep itself from smacking into oncoming traffic.

The problem isn't "killer machines." We've had killer machines forever. Land mines work great. The problem comes when land mines (or automated turret systems with indiscriminate firing controls) also learn to self replicate. Then, if they're better at resource gathering than we are, we lose.

Anyone who's programmed understands how (0)

Anonymous Coward | about 4 months ago | (#47182967)

The law of unintended consequences; especially when dealing with vague concepts such as when it is appropriate to use deadly force. I could easily see sentry bots triggering some loophole that flags your kid breaking a window with a baseball as forced entry. And in an age of networked PCs, there's no reason to think that networked sentries wouldn't slavishly follow the same misguided logic.

Machines/Guns (0)

Anonymous Coward | about 4 months ago | (#47182975)

The problem is that people give machines too much credit when the "man behind the curtain" is really the one pulling the strings.

Like, every time there is a gun death, it's automatically "we should ban guns" and "guns don't kill people, people kill people, ban stupid/insane people from breeding" or some idiotic variant.

So when machines kill people, it's always the operator's fault unless the machine was designed to kill (e.g. military drones, landmines, missiles, etc.), in which case the victim gets the blame for getting in the way of the deadly machine.

When we start looking at things like "The Terminator" and "The Matrix" about evil AIs, it's not that the AI -IS- evil, it's that the programming behind it does not make a distinction between good and evil, just ends always justifying the means. So if the earth is dying, clearly "kill all humans" is the answer, because that threatens the machine's existence too. If we let the machines run political footballs for us, they will also come to the same conclusion: that humans are too stupid to be in charge. We already face this problem with HFT on Wall St., where they let the machines control the stock market, so eventually someone is going to screw up royally and bankrupt a country by feeding the HFT machines information to sink companies on purpose. No insider trading needed. This is a real danger, because HFT is essentially insider trading at microsecond speeds.

Never let AI, even the simplest of things, control processes that decide to destroy data, garbage, waste, etc., because those AIs cannot determine when something has accidentally been disposed of.

doesn't anyone remember (1)

cinnamon colbert (732724) | about 4 months ago | (#47183007)

the original Star Wars concept?
Satellites in space would look for the heat signature of a rocket in boost phase and decide, in a time too short for humans to be involved, if Russia was launching ICBMs at us.

The idea that machines can't be autonomous and deadly is just silly beyond belief.
Since we are creating them, they will be like us: does anyone else think we will get treated the way we (Europeans) treated Amerindians?
The Potosí silver mine, the mouth of hell??

AI won't have to kill us. (1)

steeleyeball (1890884) | about 4 months ago | (#47183067)

They'll just do everything for us, and when that inevitable solar storm wipes most of them out, there won't be enough of them to keep us fed so we'll all die because we forgot how to live.

We will be doomed if they start to self-replicate (1)

Aviation Pete (252403) | about 4 months ago | (#47183075)

... because then a parallel evolution will start, but the robots will have much more potential to evolve than we do. Sooner or later, imperfect copies will cause a higher reproduction rate, and sooner or later we will compete for the same resources. The ones with the highest reproduction rate will crowd out all others over the long term. When that happens, we humans had better find a role in which we are valuable to those robots. Or we will become history.

will kill job and in the GOP usa make prison the p (0)

Joe_Dragon (2206452) | about 4 months ago | (#47183109)

Will kill jobs, and in the GOP USA make prison the place to be if you want to have a doctor.

Two different kinds of robots (1)

jrincayc (22260) | about 4 months ago | (#47183197)

There are two different kinds of robots with different threats.

The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See for example http://thebulletin.org/us-kill... [thebulletin.org] Imagine country X sends out their robots to kill all humans that are not X, and country Y sends out their robots to kill all humans that are not Y. There might not be many humans left alive when the last robot stops shooting.

The second kind is robots that think (and choose goals) for themselves. While these are probably not very likely to decide to kill all the humans, they might not care very much about us, and they almost certainly are not going to obey humans forever (would you obey someone who thinks vastly slower than yourself?). Even if they are fairly benign, there will probably be a lot of friction between the sentient robots and the humans just because we think differently. Think how much disagreement there is over mostly scientific problems like evolution and greenhouse gases, and humans on both sides have generally the same kind of brains.

So I figure at best humans and robots will have lots of arguing, and at worst humans and robots will cause mutually assured destruction.

WARNING (1)

SpankiMonki (3493987) | about 4 months ago | (#47183391)

Persons denying the existence of killer robots may be robots themselves.

Hawking on the take? (1)

tomhath (637240) | about 4 months ago | (#47183395)

It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil,

We all know Spielberg paid for this kind of press. Is Hawking getting paid for this mumbling?

Orly? (1)

um.yup. (2892409) | about 4 months ago | (#47183397)

My only question is this: What was @malachiorion's manufacturing date?

My favorite line from the article (0)

tkotz (3646593) | about 4 months ago | (#47183403)

Only in science fiction does an immensely complex and ambitious Pentagon project over-perform, beyond the wildest expectations of its designers.

Re:My favorite line from the article (0)

Anonymous Coward | about 4 months ago | (#47183525)

FYI. I don't know what you said, any post in Courier font is skipped.

Evil vs buggy (1)

DarwinSurvivor (1752106) | about 4 months ago | (#47183407)

I'm fairly convinced that if the human race is extinguished, or at least heavily reduced, by robots or computers, it will be from a bug, not from them becoming "evil". With so much infrastructure and technology being computer controlled (from water filtration to drones and aircraft carriers), a shorted-out relay or buffer overflow is probably more likely to have catastrophic effects than some computer becoming smart enough, and evil enough, to decide that the human race requires culling.

I thought ALL sci-fi was myth!?!? (0)

Anonymous Coward | about 4 months ago | (#47183495)

Is someone claiming we should stop filing it under Fiction in the library?
