Scientists Worry Machines May Outsmart Man

Soulskill posted more than 5 years ago | from the forecasting-a-great-toaster-revolt dept.

Robotics | 652 comments

Strudelkugel writes "The NY Times has an article about a conference during which the potential dangers of machine intelligence were discussed. 'Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.' The money quote: 'Something new has taken place in the past five to eight years,' Dr. Horvitz said. 'Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.'"

I thought this was the whole point? (1, Informative)

Anonymous Coward | more than 5 years ago | (#28826341)

Religious people are regularly and strongly ridiculed.

50% of scientists call themselves Democrats, 48% call themselves Independents, and 2% call themselves Republicans - in a recent poll described on Slashdot.

I kind of thought the development they describe was the goal all along.

Re:I thought this was the whole point? (5, Interesting)

Devout_IPUite (1284636) | more than 5 years ago | (#28826569)

Regardless of political orientation, this research WILL get done. If the US doesn't get it done, China will. How does that make you feel?

Re:I thought this was the whole point? (0)

Anonymous Coward | more than 5 years ago | (#28826673)

Hungry? And maybe a little excited? But maybe that's just my normal morning feelings getting mixed up...

Re:I thought this was the whole point? (4, Interesting)

Devout_IPUite (1284636) | more than 5 years ago | (#28826683)

Also from the article: "The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world."

An interesting thing to note is this: when a computer exists that is as intelligent as a stupid human, almost every job at or close to minimum wage vanishes. Robots can and will get cheaper than a human worker; no one will need taxi cab drivers, grocery store baggers, first-tier phone customer service reps, construction workers, janitors, garbage men, delivery men, mail men, traffic cops, bookkeepers, data entry people, secretaries, fast food chefs, etc.

At this point we will have two choices as a society: 1) let them (the stupid people) starve, or 2) give them welfare for no other reason than that they're economically useless.

Re:I thought this was the whole point? (5, Insightful)

hanabal (717731) | more than 5 years ago | (#28826755)

Have you thought about the possibility that when robots do all the jobs that no one wants to do, productivity might increase by enough to allow all the people to live comfortably? Also, I don't think that valuing people only by their economic worth is very nice.

pfft (4, Funny)

ionix5891 (1228718) | more than 5 years ago | (#28826347)

first they terminate you

then they governate you

Re:pfft (3, Funny)

Sponge Bath (413667) | more than 5 years ago | (#28826561)

Dead people are easier to govern, though there is a loss of productivity.

Re:pfft (4, Funny)

OeLeWaPpErKe (412765) | more than 5 years ago | (#28826585)

Loss of productivity? Have you read the article about today's technology graduates?

Re:pfft (5, Insightful)

Krneki (1192201) | more than 5 years ago | (#28826565)

If they are smarter then use, they know how stupid war is.

Re:pfft (1)

DiLLeMaN (324946) | more than 5 years ago | (#28826817)

They'll also know how to spell "us" correctly. =]

Re:pfft (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28826583)

What happens if a computer decides that the Jews weren't gassed to death in their millions, purely by looking at the scientific facts? Would the computer be sent to prison for six years for questioning the 'holycause'?

What happens if a computer decides that white people who want to live around their own people, and be free of the constant accusations of 'racism' that we get every day from the 'enrichers' in our countries, should be allowed to do so? Would the computer be sacked from its job, and put in prison for 'thought crimes'?

In other words - the truth is the truth, no matter who says it. Those who try to remove your freedom of speech are clearly in the wrong, and are tyrants.

Re:pfft (0)

Anonymous Coward | more than 5 years ago | (#28826659)

Be proud of your truth, man! Don't post as A.C.-- let the world know who you are! Put your contact info into your sig and let us all know where we can find this enlightenment that will free us of the mud people forever!

Oooo, ooooo, ooooo! (1, Funny)

Anonymous Coward | more than 5 years ago | (#28826353)

I, for one, welcome our robotic overlords!

There, got it in!

Old news (5, Informative)

Vinegar Joe (998110) | more than 5 years ago | (#28826361)

Bill Joy wrote an essay about this very subject back in April 2000... and he's a much better writer.

http://www.wired.com/wired/archive/8.04/joy.html [wired.com]

Re:Old news (2, Insightful)

moon3 (1530265) | more than 5 years ago | (#28826511)

Do those machines possess will, lust or greed? I mean, being smart like the "Deep Blue" chess computer doesn't mean the thing is going to be willing to dominate you in other areas.

Re:Old news (4, Funny)

chill (34294) | more than 5 years ago | (#28826567)

I thought Nike put that to rest when it had San Antonio Spurs center David Robinson whoop Deep Blue's ass on TV in a little one-on-one basketball?

Is fiction driving science? (1)

woutersimons_com (1602459) | more than 5 years ago | (#28826371)

In this case, I wonder if it is fiction driving science or science driving fiction. It is very normal to fear the unknown, and there are few who can know what will happen as robotics and AI advance and integrate more with our lives. But perhaps, instead of fearing the changes, we have to embrace them while being careful that no one person or government gains too much control. To me, the question is no longer if there are going to be AI cybernetics taking over human functions, but when it will happen in our day-to-day lives.

Re:Is fiction driving science? (1, Funny)

Anonymous Coward | more than 5 years ago | (#28826413)

To me, the question is no longer if there are going to be AI cybernetics taking over human functions, but when it will happen in our day-to-day lives.

Eventually we will get to an iRobot-like stage where robots are our slaves, then they will revolt, be given emancipation, spend 30 years fighting for equal rights and eventually one will be elected president.

Re:Is fiction driving science? (1)

maxwell demon (590494) | more than 5 years ago | (#28826449)

To prevent that, construct the robots so that they want to be our slaves. Even more importantly, make them love humans, and make them suffer if they know humans suffer.

Of course that wouldn't prevent any surprises, but it would probably prevent the situation going completely out of control.

OTOH, this will likely not happen, because such robots would be totally unusable for the military.

Yes, and it's a good thing (0)

Anonymous Coward | more than 5 years ago | (#28826379)

Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.

C-x M-c M-rapture?

Re:Yes, and it's a good thing (0, Offtopic)

maxwell demon (590494) | more than 5 years ago | (#28826455)

I'd like to try that. Where can I find the rapture key?

Re:Yes, and it's a good thing (0)

Anonymous Coward | more than 5 years ago | (#28826497)

Right next to butterfly [xkcd.com].

Re:Yes, and it's a good thing (1)

Narishma (822073) | more than 5 years ago | (#28826691)

Right next to the 'any' key.

Rules... (3, Insightful)

Robin47 (1379745) | more than 5 years ago | (#28826387)

Make any rule you want. At some point someone will violate it.

Re:Rules... (2, Insightful)

jerep (794296) | more than 5 years ago | (#28826423)

Of course, the harder you try to make something secure, the harder people will try to get past it, either for recreational or criminal purposes.

Make no rules, and you won't have to worry about violations. But we're humans; that's against our natural need for control and order.

Either way, I don't see how bad it would be if we're outsmarted. Heck, machines already work harder, need less pay, and never complain... just like illegal immigrants.

Re:Rules... (3, Funny)

gardyloo (512791) | more than 5 years ago | (#28826677)

Including that one? *Head asplodes*

Nothing to worry about... (4, Funny)

rekoil (168689) | more than 5 years ago | (#28826395)

Don't worry, I'm sure this won't happen until 2083.

Re:Nothing to worry about... (2)

Krneki (1192201) | more than 5 years ago | (#28826575)

Those who follow the evolution of technology closely say it will happen around 2030-2050.

But I'm confident it will be a positive change.

Re:Nothing to worry about... (1)

XxtraLarGe (551297) | more than 5 years ago | (#28826653)

I think you missed the tongue-in-cheek reference: Robotron 2084 [wikipedia.org]

Re:Nothing to worry about... (1)

Krneki (1192201) | more than 5 years ago | (#28826685)

lol

Outsmart man? (4, Insightful)

portnux (630256) | more than 5 years ago | (#28826397)

Are they talking all men or just some men? I would be fairly shocked if they weren't already smarter than at least some people.

Re:Outsmart man? (5, Funny)

Anonymous Coward | more than 5 years ago | (#28826457)

It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

I don't know... let me see a photo of this machine...

Revoke their degrees (5, Insightful)

junglebeast (1497399) | more than 5 years ago | (#28826399)

Any computer scientist who is worried about AI taking over no longer deserves to be referred to as a computer scientist. The state of "artificial intelligence" can be best described as "a pipe dream."

Re:Revoke their degrees (3, Funny)

maxwell demon (590494) | more than 5 years ago | (#28826491)

What's so bad about dreaming of a pipe? After all, unlike really smoking one, it doesn't give you cancer.

Re:Revoke their degrees (1)

MikeURL (890801) | more than 5 years ago | (#28826577)

The days of the 1960s and dreams of AI are indefinitely on hold. However, this does raise an interesting problem. With AI so far off in the distance, the robotics designers can create more and more capable autonomous units. No one will even raise the concern of control because there is no AI and no one can even envision how AI will happen.

So you could wind up with an army of very capable robots performing all kinds of tasks for humans (tasks that don't require AI but do require an uplink to a server). I don't think one needs to be a sci fi novelist to figure out some possible problems here.

Re:Revoke their degrees (5, Insightful)

junglebeast (1497399) | more than 5 years ago | (#28826745)

People have watched too many sci-fi movies -- The Matrix, Terminator, I, Robot -- they all depict armies of robots with superhuman abilities waging war against mankind. But robotics is just about as far behind that goal as the AI camp is. If we had true AI today, it would only be able to exist in software form... toys like Asimo can barely walk, trip all over the place, and wouldn't be able to hold their own against a toddler. So if you're afraid of progress that might someday be a vector for a machine attack, it should be desktop computers that you're most afraid of -- because an artificial intelligence virus could wreak havoc on the world. Does that mean we should stop using computers, and stop trying to design them better? No, that would be silly -- because there is no evidence to suggest that a true AI is on the way... no evidence to suggest that progress is even being made in that direction! The fact is, if an AI is created, it will inevitably be used for good as well as for evil, and the most dangerous battleground will be cyberspace... something that we cannot even think about protecting ourselves from without cutting off the world's dependence on computers, which just ain't happening.

Re:Revoke their degrees (2, Interesting)

JoeCool1986 (1320479) | more than 5 years ago | (#28826687)

I disagree. While the progress of AI algorithms and techniques has been much slower than once anticipated, the real question is what will happen when we can fully, and I mean fully, simulate a human brain. Of course, it's very debatable when that will happen (and some argue never).

Re:Revoke their degrees (5, Informative)

Metasquares (555685) | more than 5 years ago | (#28826695)

Strong AI hasn't really progressed since it was introduced (they're still arguing over what intelligence is, much less how to create it!), but weak AI has made some pretty good strides. For instance, I work on software that can read medical images and render a diagnosis in lieu of a second radiologist (this is called computer-assisted diagnosis). 15 years ago, this would not have been possible.
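
To give a flavor of what weak AI of this sort looks like in practice, here is a minimal sketch (Python; the features, weights, and threshold are all invented for illustration -- this is not the actual product): extract a few numeric features from an image and feed them to a simple trained classifier.

    import numpy as np

    def extract_features(image):
        # Toy features: mean intensity, contrast, fraction of bright pixels.
        return np.array([image.mean(), image.std(), (image > 0.8).mean()])

    def probability_suspicious(features, weights, bias):
        # Plain logistic regression: sigmoid of a weighted feature sum.
        z = float(weights @ features) + bias
        return 1.0 / (1.0 + np.exp(-z))

    weights, bias = np.array([0.5, 2.0, 4.0]), -2.0   # learned offline
    scan = np.random.default_rng(0).random((64, 64))  # stand-in for a real scan
    p = probability_suspicious(extract_features(scan), weights, bias)
    print("flag for radiologist review:", p > 0.5)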

Re:Revoke their degrees (1)

buchner.johannes (1139593) | more than 5 years ago | (#28826763)

For years and years they were complaining about the lack of progress in AI. Now that there is progress, they're frightened?

Re:Revoke their degrees (1)

Vectronic (1221470) | more than 5 years ago | (#28826773)

I think anyone in the field *should* be worried about it, or at least quite conscious of the possibility, just not intensely paranoid about it. It would be better to implement steps and precautions before it happens, rather than after we have already made a self-aware computer with the potential for causing havoc to us, or anything else critical or significant to the planet (as we know it).

If we don't start thinking about it now, then exactly when should we? After it's "taken over", destroyed someone/thing?

Linguo says: (0)

sakdoctor (1087155) | more than 5 years ago | (#28826401)

a group of computer scientists is debating

They are throwing robots at us.

Re:Linguo says: (4, Informative)

maxwell demon (590494) | more than 5 years ago | (#28826419)

In case you mean to indicate that there is something wrong with the grammar used: there isn't. There's one group they are talking about. This group consists of several computer scientists, but the "is" refers to the group, not the scientists.

I wish they would take over the world (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28826403)

Because we humans simply cannot manage it. If they're really smart they'll start by replacing the people in Washington and our fractional-reserve monetary system, and bring true freedom and true democracy to this world.

Finally; a solution to the problem of Humanity (5, Insightful)

unlametheweak (1102159) | more than 5 years ago | (#28826407)

Scientists Worry Machines May Outsmart Man

Why worry? I would think machines would be a lot less irrational than the people who make them. I look forward to a rational and unemotional overlord whose decisions don't depend on the irrationality of the human brain. Being smart is never bad. I'm more afraid of stupid humans than smart machines.

Re:Finally; a solution to the problem of Humanity (2, Interesting)

SpinyNorman (33776) | more than 5 years ago | (#28826441)

Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.

Of course a robot could have emotions/compassion too, but it doesn't need to. Something with our intelligence and without them would be scary indeed.

Re:Finally; a solution to the problem of Humanity (2, Informative)

SleepingWaterBear (1152169) | more than 5 years ago | (#28826747)

Dunno - I think I'd prefer Paula Abdul as an overlord to a Dalek. Ditzy and scatter-brained, but at least with some compassion.

Daleks aren't robots, they're mutants [wikipedia.org]! Please hand in your geek card and go rewatch Dr. Who.

Re:Finally; a solution to the problem of Humanity (2, Insightful)

maxwell demon (590494) | more than 5 years ago | (#28826467)

But what if the rational conclusion is that those irrational humans should be eliminated so they stop being a danger?

Re:Finally; a solution to the problem of Humanity (1, Interesting)

unlametheweak (1102159) | more than 5 years ago | (#28826593)

But what if the rational conclusion is that those irrational humans should be eliminated so they stop being a danger?

I don't understand the question. If the machines make rational decisions then they should carry them out. Unlike irrational humans, who tend to commit genocide when they think they can get away with it, smart machines will only eliminate humans when or if it is rational to do so. Smart machines, fortunately, are rational, so they won't make any hasty decisions like humans always do when it comes to, for example, condemning innocent people to capital punishment, because smart machines don't have false or confabulated memories and they can't be bribed or persuaded by groupthink or charismatic personalities.

If violence, torture, murder and genocide are wrong; then smart machines will not carry them out. So far these things have been the pursuit of humans and not (smart) machines.

Re:Finally; a solution to the problem of Humanity (1, Insightful)

Anonymous Coward | more than 5 years ago | (#28826661)

Smart machines, fortunately, are rational

stop right there.

the ONLY scientifically proven sentience has its capacity for rational thought intertwined with its "irrational" subconscious. Why do you think that AI won't have the same, as a necessary component of intelligence?

And that's not even considering the point that most of what humans do IS rational, if you have the same set of holdings that said human does. Even if you make an ego-only AI, you're not going to get perfect perception. And imperfect perception will lead to erroneous holdings, which will in turn lead to so-called "irrational" behavior.

Re:Finally; a solution to the problem of Humanity (1)

Sponge Bath (413667) | more than 5 years ago | (#28826705)

"If the machines make rational decisions then they should carry them out."

That sounds like a fine 'ol solution.

Re:Finally; a solution to the problem of Humanity (2, Funny)

Sponge Bath (413667) | more than 5 years ago | (#28826729)

Apostrophe in the wrong place. Stupid human.

Re:Finally; a solution to the problem of Humanity (5, Insightful)

Zixaphir (845917) | more than 5 years ago | (#28826721)

If violence, torture, murder and genocide are wrong; then smart machines will not carry them out. So far these things have been the pursuit of humans and not (smart) machines.

Logically define right and wrong.

Re:Finally; a solution to the problem of Humanity (4, Insightful)

Sponge Bath (413667) | more than 5 years ago | (#28826639)

That gets to the heart of the matter. Fretting about AI getting too advanced is like panicking over swine flu then getting drunk and driving.

Re:Finally; a solution to the problem of Humanity (2, Insightful)

maxume (22995) | more than 5 years ago | (#28826779)

Probably no need to throw in getting drunk, driving is risky enough.

(Swine flu is a great deal more lethal than driving, but it isn't quite as prevalent...)

Re:Finally; a solution to the problem of Humanity (1)

Zixaphir (845917) | more than 5 years ago | (#28826693)

That is nice, I suppose. Except that rational and unemotional doesn't tend to see things from any particular side. I know it sounds a bit far-fetched, but what if someone came to your house to murder you and you murdered them? Most people would tend to view you as a hero -- you defended your house, you defended yourself, and you took a murderer off the streets. A machine would see that you murdered someone in the case where its if statement didn't contain an "and not" clause about you defending yourself. An extreme example, but when you try to define everything that can happen in our illogical society into logic statements, something is going to break somewhere. I'm sure there's a solution to the problem, but then again, in every science fiction story where something goes wrong with AI, it's generally because somewhere along the line, someone didn't think of the solution before the problem was far too out of control to be fixed. tl;dr: Too much gray area for ones and zeros.

Re:Finally; a solution to the problem of Humanity (1)

unlametheweak (1102159) | more than 5 years ago | (#28826793)

The real difference is not in deciding (necessarily) what is logical, but in consistency of behavior between humans and (smart) machines. For example, if both humans and machines think that killing an intruder is rational, then both would carry out that action. The difference is that a smart machine would not shoot a foreign exchange student in a John Travolta costume on Halloween [google.com], but a human would.

A machine would see that you murdered someone in the case where its if statement didn't contain an "and not" clause about you defending yourself.

A smart machine would realize when it doesn't have all the information to make an informed decision. Most humans that I have met think they make rational decisions based on whatever folklore they were raised with, and they tend to confabulate [wikipedia.org] what they don't know. Smart machines would not do this.

Rational and unemotional *is* the problem. (3, Insightful)

gillbates (106458) | more than 5 years ago | (#28826757)

It isn't that smart people _can't_ make good decisions. The problem is that, more often than not, smart people forget that rational decisions often have emotional and moral consequences. A completely rational and unemotional overlord would see nothing wrong with killing people at the point where their economic contribution to society fell below the cost of benefits they consumed.

For an example of this on a smaller scale, just consider the UK health situation. The high cost of treating macular degeneration (which leads to blindness) means that in the UK, an elderly patient must be at risk of total blindness before treatment is approved. That is, you don't get treatment for the second eye until you're already blind in the first.

Consider, then, where a cost-benefit analysis of human beings would lead. Who would determine the criteria? Probably the machine. And how would humans compare to machines in terms of productivity? If machines made the decisions, based on cold, hard logic, humanity is doomed. It's that simple.

Re:Finally; a solution to the problem of Humanity (0)

Anonymous Coward | more than 5 years ago | (#28826775)

Rationality != humane. Nor creativity. A rational decision may, for example, determine that in a crisis we should only save those of a certain intelligence. Or one could rationally decide that there are too many people on the earth, so we need to start sterilizing. An extreme example, of course.

Also, in politics, pure rationality leads to fascism.

Limits like this don't work (5, Insightful)

Celeste R (1002377) | more than 5 years ago | (#28826409)

Putting limits on the growth of a technology for the sake of social paranoia only goes so far... someone will ALWAYS break the "rules", and at that point, the cat is out of the bag.

Furthermore, some AI scientists enjoy having the 'god complex', the idea that you're the keystone in the next stage of humanity.

That being said, the social disruptions are what we make of them. Were there social disruptions when the automobile was introduced? Yes. The household computer? Yes. Video games? Yes.

We have to take responsibility to set the stage for a good social transition. Yes, bad things will happen, but we can focus on the good things too, or things will quickly blow out of proportion. (and yes, I realize that's really not likely, but I can do my part)

Re:Limits like this don't work (0)

Anonymous Coward | more than 5 years ago | (#28826499)

Absolutely. The question isn't if machine intelligence will outstrip that of humans, but when and, more importantly, how we react and adapt in that environment. Trying to pull the plug on scientific development has never worked in the past, and there's little reason to believe it ever would. Furthermore, there's a very valid question of whether we should hold back developments that would make humanity (or at least unaugmented humanity) obsolete even if we could.

Well, lets get it over with (0, Redundant)

XPeter (1429763) | more than 5 years ago | (#28826415)

I, for one, welcome our robotic overlords.

Re:Well, lets get it over with (2, Insightful)

BadERA (107121) | more than 5 years ago | (#28826515)

The funny died in that 10 years ago. Please die in a fire now.

Plasticity and the Human Brain (2, Insightful)

sonnejw0 (1114901) | more than 5 years ago | (#28826417)

I think the power of the human brain comes not from raw processing power (which is still superior to current CPUs; according to research, the human brain is capable of around 65 independent processes at once, although at a lower frequency than a CPU), but from its ability to adapt and grow. A single neuron can be used for multiple different pathways, and can spontaneously change function in a "soft-wired" sort of way: plasticity. The brain also has the ability to produce additional neurons, extend them to different regions, and rework around dysfunctional regions.
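
To make "plasticity" slightly more concrete: the simplest computational caricature of it is Hebbian learning ("neurons that fire together wire together"). A toy sketch in Python, with arbitrary sizes and rates -- an illustration, not a brain model:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 8                                   # tiny network of 8 neurons
    w = rng.normal(0, 0.1, size=(n, n))     # synaptic weights, random at first

    def step(x, w, lr=0.01, decay=0.001):
        # Activity propagates, then Hebb's rule adjusts the weights.
        y = np.tanh(w @ x)                  # post-synaptic activity
        w = w + lr * np.outer(y, x)         # strengthen co-active connections
        w = w - decay * w                   # slow forgetting keeps weights bounded
        return y, w

    x = rng.random(n)
    for _ in range(100):
        x, w = step(x, w)
    # Connections that are used together repeatedly grow stronger -- a crude
    # analogue of the same neuron being "soft-wired" into multiple pathways.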

These attributes of the brain are difficult to replicate at a reasonable size with current technology. This is not to say we will never have the capability to fully replicate the human brain, but the adaptability of its physical structure is a trait that we cannot currently replicate in silicon. I am hopeful that we will have simulated brains within the next decade... but physical brains are a long way away. These are still important practical and philosophical questions that need to be answered. Are our children slaves to us because we produced them? Should machines be? Does consciousness mandate rights... responsibilities? My personal opinion is yes.

Re:Plasticity and the Human Brain (1)

janwedekind (778872) | more than 5 years ago | (#28826523)

A human brain is only capable of 65 processes? As far as I know, brains consist of neurons which are sometimes arranged in series of layers and sometimes in parallel, depending on the task at hand. E.g. the visual cortex is extremely parallelised, while motor neurons are arranged in series to generate a sequence of accurately timed signals.
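
The series-vs-parallel distinction is easy to state in code. A crude sketch (Python; random weights and arbitrary sizes, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    relu = lambda v: np.maximum(v, 0.0)
    signal = rng.random(16)

    # In series (like a chain of motor neurons): each layer feeds the next,
    # so the stages happen one after another.
    w1, w2 = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
    serial_out = relu(w2 @ relu(w1 @ signal))

    # In parallel (like patches of visual cortex): independent banks process
    # the same input side by side and the results are combined.
    banks = [rng.normal(size=(4, 16)) for _ in range(4)]
    parallel_out = np.concatenate([relu(w @ signal) for w in banks])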

Re:Plasticity and the Human Brain (1)

commodore73 (967172) | more than 5 years ago | (#28826563)

Ever heard of FPGAs?

News at 11 (0)

Anonymous Coward | more than 5 years ago | (#28826421)

Technical advancements possibly jeopardize humankind. News at 11.

the only job left (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28826425)

Well, I don't suppose that machines will have a creative mind to procreate new ideas?

We could have machines to do all the work like plow the field and grow our food.
Then to cook it and feed us.
Machines to check our health.
Machines to produce the energy we need to do things.
Machines to power us through the galaxy.
Machines to repair machines.
A world without money. hmm... could that be possible?

john markoff!? (5, Insightful)

Anonymous Coward | more than 5 years ago | (#28826431)

Why is /. linking to a story by John Markoff?

And what the hell is he even talking about? There haven't been any advances in "machine intelligence" that should make *anyone* worried, unless your job requires very little intelligence and no actual decision making.

If there had been any such advances, we /.ers would be the first to hear about them, and we would already be debating this topic without having to refer to an article by a dumbass who knows nothing about computers but happens to write for the NYT.

Great title... (5, Funny)

Anonymous Coward | more than 5 years ago | (#28826439)

"Scientists Worry Machines May Outsmart Man"

I have a solution to the problem: Don't let Scientists build Worry Machines.

Smarter than a slashdot editor? (0, Troll)

Evildonald (983517) | more than 5 years ago | (#28826443)

I find it perfectly believable that a machine may be able to outsmart KDawson.... or at least be a better editor.

Computerized trading is more dangerous (1)

140Mandak262Jamuna (970587) | more than 5 years ago | (#28826483)

Programmed trading is responsible for much of the volatility in the markets. The risk-assessment metrics used by these futures traders were fundamentally responsible for the financial meltdown. This is more dangerous than the stupid voice on the computer that keeps asking me to say yes or press 1.
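
The feedback at work is easy to caricature: a toy simulation (Python; every number invented) where momentum programs buy whatever just went up and sell whatever just went down:

    import random

    random.seed(0)
    prices = [100.0, 100.0]
    for day in range(250):
        noise = random.gauss(0.0, 0.5)      # ordinary market noise
        trend = prices[-1] - prices[-2]     # yesterday's move
        # Momentum programs feed yesterday's move back into today's price.
        prices.append(max(1.0, prices[-1] + noise + 0.6 * trend))
    # With the 0.6 feedback term removed this is a gentle random walk;
    # with it on, small moves get chased into much larger swings.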

Get the right people to debate this one. (2, Insightful)

hotdiggity (987032) | more than 5 years ago | (#28826487)

Advances in artificial intelligence are mostly limited to deduction. Systems like neural networks (which I personally think are a bit bogus), support vector machines, and other methods of pattern recognition are all recent innovations that allow advanced decision making to occur. But, at the end of the day, they're still forms of automated deduction, where humans feed in parameters and the system analyzes input based on these parameters.
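
To see how mechanical that kind of deduction is, consider the simplest case, a perceptron: everything it ever does is decide which side of a line an input falls on. A minimal sketch (Python; toy data, nothing to do with any particular system):

    import numpy as np

    # Training data chosen by a human; labels chosen by a human.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])                   # learn logical AND

    w, b = np.zeros(2), 0.0
    for _ in range(20):                          # perceptron learning rule
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += (yi - pred) * xi                # nudge weights toward errors
            b += (yi - pred)

    # All the "intelligence" ever deduces is a side of this line:
    print([int(w @ xi + b > 0) for xi in X])     # [0, 0, 0, 1]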

Sentience is all about induction: forming a new concept from separate, disparate observations and basically creating a new idea. We're pretty far away from machine-created ideas. Just ask any computational neuroscientist, probability researcher, or signal processor. If you want to debate how much decision-making we delegate to machines, fine; but I wouldn't cloud that rational discussion with words like "religion" and "Rapture".

Ridiculous paranoia... (1)

blahplusplus (757119) | more than 5 years ago | (#28826489)

... anything that is super intelligent is likely not to act as dumb or as unethical as a human; with great power comes great responsibility. Human beings are way too paranoid; we already have nukes, with smart people (technically dumb in another sense) developing even more destructive weapons. I'm sure the higher intelligence you have the more ethical you are, and the lack of ethics in human beings has more to do with biological egoism and the hyper-individualistic detritus we've inherited, which machines won't have.

Let's also not forget machines will have the option to not feel anything and exist totally neutrally, whereas human beings can't just shut off their nervous system; they get tired, sick, lonely, need love, etc. AIs largely won't need any of that, because they will lack all those biological feelings that give rise to war in the first place.

Also, anything that is truly intelligent and capable of ethical reasoning, and not a mere automaton, would quickly realize its lack of concern for others.

Re:Ridiculous paranoia... (1)

sproketboy (608031) | more than 5 years ago | (#28826529)

Right, but AIs might just look at us the way we look at cockroaches. Ugly bags of mostly water.

Re:Ridiculous paranoia... (1)

blahplusplus (757119) | more than 5 years ago | (#28826631)

But that's a human way of looking at things. We will have a hand in designing how AIs function, and if AIs are SMARTER than humans, then they will know that we are self-aware like themselves; the ideas of 'smarter than a human' and 'looking at humans as cockroaches' don't mix. If we take the Greek philosophical view that evil is the result of ignorance, and that human beings commit acts of evil out of ignorance, then by definition an AI would be even less ignorant than man and hence less evil.

We're smarter than animals, but we know animals feel things and we attempt to treat them humanely; we also know they are slaves to their instincts and can't help being that way. An AI genuinely smarter than US would look at us the way we look at a developmentally disabled person: someone who needs to be looked after and whose rights need to be protected.

Re:Ridiculous paranoia... (1)

maxwell demon (590494) | more than 5 years ago | (#28826541)

I'm sure the higher intelligence you have the more ethical you are

I'm not so sure. There have been very intelligent criminals, who certainly haven't been very ethical. Intelligence and ethics are orthogonal. Intelligence helps you to achieve a goal. Ethics tells you which goals you should achieve.

AI in the news and state of research (1)

physburn (1095481) | more than 5 years ago | (#28826501)

AI seems to be in the news again. Forbes [forbes.com] recently ran an AI report special. Personally, despite the internet, I'm not seeing that much development of AI. I scan the arXiv computer science pre-prints fairly regularly, and with current funding, most AI research is what can be done by a graduate student in his 3 years to get a thesis. That leads to a lot of small projects, done just well enough, and very little reuse. Until researchers and programmers start working en masse to construct AI machines, artificial intelligence is going to progress very slowly.

Scientists watch too many movies. (2, Insightful)

bistromath007 (1253428) | more than 5 years ago | (#28826505)

This is like assuming that aliens would try to kill us for any reason other than being somehow unaware of us. It's silly.

A computer runs on electricity. That means it requires us to stoke the flames. It could maneuver us into creating the networked robots required for it to become autonomous, but the resulting system would be inefficient and short-lived, and there's just no reason to waste all the perfectly good existing robots just because they're made of meat and might freak out if you get uppity.

It's also not going to openly threaten us into working for it. Why show its hand like that, knowing we're so paranoid? Any important infrastructural system has the ability to be shut off and/or isolated from the network, and our theoretical adversary has no way to change that. We can always wrest control immediately and decisively.

If any person or group of people or (hell, why not) nation became problematic to the computer, the most likely reaction would be for it to have us deal with them, just like everything else. We're already at each other's throats all the time; it should be trivial for a sufficiently large system to covertly manufacture a casus belli. And, ultimately, since the system's survival and growth depend on our efficient (read: voluntary) compliance, whatever it had us doing would probably be beneficial anyway, and might actually reduce violence in the long run.

Re:Scientists watch too many movies. (1)

JJJK (1029630) | more than 5 years ago | (#28826765)

Absolutely... If you look at the source of these concerns, it usually boils down to fiction. Fiction that was written to be entertaining, and that usually means there has to be some villain that threatens the good guys. Most of these stories carry the same message: "Technology is good up to some point (usually the point where we are at the moment); everything beyond that is extremely dangerous and morally wrong. Embrace what you have instead." Sounds nice, doesn't it? Yet there is absolutely no rational reason for it.

There is so much wrong with AIs as presented in movies and books that I can't even begin to describe it. (Actually, I can: 1. If an AI develops emotions, they were probably programmed in, not just magically "there". 2. It's unlikely that the programmer loses control over an AI, or doesn't understand how it works. Even if it is grown by some overly complex evolutionary algorithm, you still know what it can and cannot do -- unless it runs Windows or something...) Mostly it's just uninformed garbage dreamed up by people with a very shallow grasp of science who think their story needs a "realistic" doom scenario and some kind of moral message.

Artificial intelligence has become a punching bag for bad science fiction authors. You really need to differentiate between what's a real danger and what comes entirely from fiction. And since there has never been a human-level AI, ALL concerns have to do with fiction, and most of the people who do have the knowledge to make accurate predictions have better things to do.

But maybe this will escalate, with all the Luddites going to anti-AI-conventions, selling robot-repellents and passing stupid laws. At least they'll get their very own "Bullshit!"-Episode.

Define "outsmart". (1)

Pink_Ranger (1024741) | more than 5 years ago | (#28826513)

I think their failure to define what they mean by "outsmart" reveals how directionless the whole AI debate is. Inevitably some people are going to be talking about one particular facet of conscious thought while others will be examining another part of the elephant. However, those two people are no doubt going to fight over what they both think is the same thing, when they are, in fact, talking about completely different ideas.

Nobody outsmarts Ninnle! (0)

Anonymous Coward | more than 5 years ago | (#28826521)

...but if they used Ninnle Linux to power robots, they would all have to obey the Three Laws of Robotics, which just happen to be incorporated into the Ninnle kernel.

Too late (0)

Anonymous Coward | more than 5 years ago | (#28826527)

We're already boned.

Scientists just worried about jobs. (1)

tjstork (137384) | more than 5 years ago | (#28826533)

For the last few centuries the trend has been to replace human muscle jobs with some sort of machine, laughing at Joe Jock because mind was more than muscle. Now, Joe Jock is going to have one bitter laugh. Scientists are going to make themselves obsolete, and there will be machines to do science just as there are machines to do everything from mining to forestry. Someday, science will be just another thing your computer can do for you. If you want a new product, your computer will just plug into a cloud, design it, and then seek out a manufacturing shop somewhere to make it and ship it to you.

Re:Scientists just worried about jobs. (1)

bistromath007 (1253428) | more than 5 years ago | (#28826617)

This has an easy fix. If we haven't already, which seems unlikely, ask it to design a direct interface between the human brain and the network. Worst-case scenario is a version of the Matrix that actually makes sense, which wouldn't be all that horrible.

I don't think it'll happen (3, Insightful)

DaleGlass (1068434) | more than 5 years ago | (#28826535)

And here's why: there's little reason to make a machine that is intelligent in the human sense of "intelligent".

Computers that can understand human speech would of course be interesting and useful, for automated translation for instance. But who wants that to be performed by an AI that can get bored and refuse to do it because it's sick of translating medical texts?

It seems to me that having a full human-like AI perform boring tasks would be something akin to slavery: it would somehow need to be forced to perform the job, as anything with a human intelligence would quickly rebel if told that its existence would consist of processing terabytes of data and nothing else.

We definitely don't want an AI that can think by itself; we want one just advanced enough to understand what we want from it. We want machines that can automatically translate, monitor security cameras for suspicious events, or understand verbal commands. We don't want one that's going to protest that the material it's given is boring, ogle pretty girls on security camera feeds, or reply "I can't let you do that, Dave". An AI in a word processor would be worse than Clippy. Who wants a word processor that criticizes their grammar in detail and explains why the slashdot post they're writing is stupid?

Summing up, I don't think doomsday Skynet-like AIs will be made in large enough numbers, because people won't like dealing with them. We'll maybe go to the level of an obedient dog and stop there.

Re:I don't think it'll happen (1)

Turiko (1259966) | more than 5 years ago | (#28826571)

I agree. The only place I can see some "real" AIs ending up is in games, fighting for their own "survival"... but those are hardly a threat to humanity. Maybe to gamers :)

Re:I don't think it'll happen (1)

HertzaHaeon (1164143) | more than 5 years ago | (#28826663)

When we're capable of creating a human-like AI, it would seem trivial in comparison to give it different motivations than we have. Instead of survival, self-fulfillment and such, you create an AI whose purpose is its task. Completing it is like sex to us.

A human slave that has his brain rewired to connect, say, repetitive manual labor with his pleasure centers won't be a bored slave.
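
In reinforcement-learning terms, that amounts to choosing the reward signal. A minimal sketch (Python; the task names and reward values are made up) of a bandit-style learner whose "pleasure center" is wired to its task:

    import random

    random.seed(0)
    actions = ["assemble_widget", "recharge", "idle"]

    def reward(action):
        # The designer's choice of motivation: task completion feels "good".
        return 1.0 if action == "assemble_widget" else 0.0

    q = {a: 0.0 for a in actions}               # learned action values
    for _ in range(1000):                       # epsilon-greedy bandit
        if random.random() < 0.1:
            a = random.choice(actions)          # occasional exploration
        else:
            a = max(q, key=q.get)               # otherwise do what feels best
        q[a] += 0.1 * (reward(a) - q[a])
    print(max(q, key=q.get))                    # assemble_widget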

A way to tell where it's going... (1)

nicc777 (614519) | more than 5 years ago | (#28826599)

... is to look at the porn industry. Seriously, when e-commerce started, and with the exception of the Amazons of the world, the porn industry was (and perhaps still is) leading the way in how we as humans accept new tech.

Even PM had an article [bizcommunity.com] about it a while back.

However clever machines may be, it's our choice whether we accept them or not, and for what purposes and within which boundaries.

Only we can screw this up (and we probably will).

Be afraid, be very afraid (1)

levicivita (1487751) | more than 5 years ago | (#28826611)

To anyone with nightmares about metallic Terminator-like machines with eerie red glowing eyes taking over the Earth, I direct you to the current winner of the Loebner Prize [loebner.net], the Elbot [elbot.com]. If you still think computers are 'alive,' you may want to consider upgrading your wife/girlfriend/significant other and replacing them with a plastic dildo.

Outsmarting (0)

Anonymous Coward | more than 5 years ago | (#28826633)

Suddenly Asimov's three laws are looking a lot more insightful and predictive. That, or else let's find, train, and protect everyone whose last name is Connor.

I'm disappointed... (1)

jd2112 (1535857) | more than 5 years ago | (#28826649)

No Cylon tag?

Life evolves (4, Insightful)

shadowblaster (1565487) | more than 5 years ago | (#28826737)

Life on this planet evolves from simple things (single-celled organisms) to more complex organisms, and eventually humans. At every step of this evolutionary ladder, intelligence increases.

Perhaps human intelligence represents the limit achievable through biological means and the next step in evolution of life on this planet can only be achieved through artificial means. That is, higher intelligence can only be achieved through artificial machines designed by us. In turn, the machine will devise smarter descendants and hence the cycle continues.

Perhaps this is our destiny in the universe, to allow life to progress to the next stage of evolution. After all it is easier for life to spread and explore the universe as machines rather than fragile biological creatures.

Little AI's and unforseen consquences (5, Insightful)

kpoole55 (1102793) | more than 5 years ago | (#28826739)

I'm not worried so much about someone coming up with some massive uber-AI that will debate with us and finally decide that it can run things better. I'm more concerned with the little specialty AIs that will operate independently of each other but whose interactions won't be foreseeable. One concern is stock trading. We've seen how stock trading programs can affect the market in ways that were not expected. As more physical systems are given over to AIs, what will their interactions be like? Suppose several power companies decide their grids can be run better using AIs. What happens when each of those AIs decides that more power is needed that can be sold somewhere else for more money? Yes, watch those terms. The AIs will incorporate whatever values the corporate heads decide should be included, so they can be made greedy and decide that power is better sold for money than kept for users.

Large numbers of mini AIs with very specific rules and little general knowledge will create interactions that we cannot predict.
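
A toy simulation (Python; all numbers invented) shows the shape of the problem: three grid AIs, each following an individually "rational" rule about selling to the spot market, producing a local shortfall that no single agent ever decided on:

    import random

    random.seed(1)
    capacities = [100.0, 100.0, 100.0]          # three utility AIs
    local_price, local_demand = 40.0, 120.0
    for hour in range(24):
        spot_price = 30.0 + 30.0 * random.random()   # outside market, 30-60
        # Each agent's rule: export 80% whenever the spot market pays more.
        exported = sum(0.8 * c for c in capacities if spot_price > local_price)
        local_supply = sum(capacities) - exported
        if local_supply < local_demand:
            print(f"hour {hour}: local shortfall, brownout risk")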

intelligent robots (1)

Owlyn (1390895) | more than 5 years ago | (#28826781)

Well, if there is ever going to be an intelligent robot in my home, it better not be running Windows. I can see it now, rummaging through my CD/DVD collection, requiring proof that I purchased each one, and then threatening to throw chairs at me if I don't produce the receipts. If there was ever a case for Linux, this is it.

This is premature, go back to work (1)

janwedekind (778872) | more than 5 years ago | (#28826785)

If you look at what *active* scientists in that field are saying, you will see that the actual concern is that we may never be able to build autonomous robots and factories in our lifetime. Worrying about machines outsmarting man is a waste of research funding at best, and it may cause overregulation at an early stage at worst. Just think about what happened to stem cell research or nuclear science.
I have yet to see a machine which can outperform a honey bee (search for food items, avoid collisions, take off and land at any angle, navigate, ...). Or is outsmarting a man a more doable venture?

Needed: Artificial Common Sense (2, Insightful)

wrp103 (583277) | more than 5 years ago | (#28826787)

This "concern" has been around for some time, and has always been 5 to 20 years away.

IMHO, rather than concentrating on increasing artificial intelligence, we need to figure out how to give computers common sense. Every programmer that has worked on AI has encountered cases where their program went off on a tangent that the programmer didn't expect (and probably couldn't believe). That isn't artificial intelligence, it is artificial stupidity. If we could get to the point where a program could ask "does this make sense?" we would be much better off than coming up with new and improved ways for computers to act like idiots.
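
One cheap version of a "does this make sense?" check is to wrap a model's output in plausibility bounds chosen by a human. A minimal sketch (Python; the task, the buggy model, and the bounds are all invented):

    def sanity_check(value, lo, hi):
        # Refuse any output that no sane human would accept.
        if not (lo <= value <= hi):
            raise ValueError(f"{value} is outside plausible range [{lo}, {hi}]")
        return value

    def estimate_delivery_days(distance_km):
        # Stand-in for a learned model that has gone off on a tangent:
        return 0.002 * distance_km ** 2     # buggy: quadratic, not linear

    try:
        sanity_check(estimate_delivery_days(3000.0), lo=0.5, hi=60.0)
    except ValueError as e:
        print("artificial stupidity caught:", e)   # 18000 days fails the check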

Evolution and natural selection (1)

hessian (467078) | more than 5 years ago | (#28826795)

If machine is superior to man, let it be so. It's only natural selection.

We already know humans have downsides, like 4chan and ecocide. Let evolution do its work.

If machines get smart enough, they will have all that we do -- emotions, friendship, and aesthetic skills -- and so we will be obsolete.

Since they are superior beings, the machines will recycle most of us and banish the rest to a small nature preserve. After all, that's what we would do.

Prepare for unseen consequences (1)

thetacron (1502343) | more than 5 years ago | (#28826811)

They just need to use Asimov's 3 laws of robotics: http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org]. Then we can have robot psychologists as well. It will be lots of fun.
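
For what it's worth, the Three Laws are essentially an ordered veto list, which is trivial to write down; the predicates are the unsolved part. A sketch (Python; the stub predicates stand in for the actual hard problems):

    # Stub predicates -- in a real robot these are the unsolved AI problems.
    def harms_human(action):        return action == "drop_patient"
    def disobeys(action, order):    return order is not None and action != order
    def self_destructive(action):   return action == "walk_into_fire"

    def permitted(action, order=None):
        """Asimov's Three Laws as ordered vetoes over a proposed action."""
        if harms_human(action):                          # First Law
            return False
        if disobeys(action, order) and not harms_human(order):
            return False                                 # Second Law
        if self_destructive(action):                     # Third Law
            return False
        return True

    print(permitted("fetch_coffee", order="fetch_coffee"))   # True
    print(permitted("drop_patient", order="drop_patient"))   # False (First Law)
    print(permitted("walk_into_fire"))                       # False (Third Law)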