
When Will AI Surpass Human Intelligence?

samzenpus posted more than 4 years ago | from the I'm-afraid-I'm-smarter-than-you-dave dept.

Robotics 979

destinyland writes "21 AI experts have predicted the date for four artificial intelligence milestones. Seven predict AIs will achieve Nobel prize-winning performance within 20 years, while five predict that will be accompanied by superhuman intelligence. (The other milestones are passing a 3rd grade-level test, and passing a Turing test.) One also predicted that in 30 years, 'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour,' adding that AI 'is likely to eliminate almost all of today's decently paying jobs.' The experts also estimated the probability that an AI passing a Turing test would result in an outcome that's bad for humanity ... and four estimated that probability was greater than 60% — regardless of whether the developer was private, military, or even open source."

979 comments

When? (4, Insightful)

Cheney (1547621) | more than 4 years ago | (#31092724)

Never.

Re:When? (5, Funny)

ColdWetDog (752185) | more than 4 years ago | (#31092744)

Nope, 20 years from now. Along with fusion and holographic storage.

Of course, if humanity manages to create real AI AND fusion AND holographic storage more or less contemporaneously (since everything is 20 years away) we're screwed.

Re:When? (2)

Cryacin (657549) | more than 4 years ago | (#31092830)

It's quite funny when you see old re-runs of "Stranger Than Fiction" on Discovery Science, especially when they start talking about flying cars.

By the year 2000, rich individuals will be using them, but by late 2008 the consumer market will adopt them, and they will fly of their own accord to their destination!

Yes, the AIs cometh, but probably not this century.

Re:When? (0)

Anonymous Coward | more than 4 years ago | (#31093110)

Nope, 20 years from now. Along with fusion and holographic storage.

Yes, and finally ... Linux on the desktop!

Oh no, this is sick, this hurds ...

Re:When? (1)

epp_b (944299) | more than 4 years ago | (#31093052)

"Never" is correct. Remember, when you were young, how your mother told you that, no matter how smart you think you are, there's always someone smarter? No matter how smart an AI developer may be, there's always someone smarter; or someone with a different type of intelligence.

Re:When? (2, Funny)

Anonymous Coward | more than 4 years ago | (#31093070)

Republicans and creationists are human too you know.

Al? (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#31092726)

Al who? Bundy? He passed Republican intelligence a while ago.

This seems familiar... (4, Insightful)

Anonymous Coward | more than 4 years ago | (#31092728)

I think we heard these exact same words 50 years ago.

Re:This seems familiar... (2, Informative)

martin-boundary (547041) | more than 4 years ago | (#31092894)

I think we heard these exact same words 50 years ago.

Yes, and 20 subjective years ago (read: last week) the machines put you in a matrix and wiped your memory. Oops, shouldn't have said anything :)

Re:This seems familiar... (1)

sictransitgloriacfa (1739280) | more than 4 years ago | (#31092936)

Well, yes, but we've learned more about the problem domain since then. It's still guesswork, but at least it's more-informed guesswork.

Re:This seems familiar... (1)

tsm_sf (545316) | more than 4 years ago | (#31093116)

I thought that one of the "recent" insights was that we probably weren't going to have an "it's alive!" moment, but more of an evolutionary path to singularity. It doesn't seem to me that the ultimate form of AI will be created in any real sense by a human being.

We'll make great pets (5, Funny)

IvyMike (178408) | more than 4 years ago | (#31092732)

Let's hope they're animal lovers.

Re:We'll make great pets (0, Offtopic)

martas (1439879) | more than 4 years ago | (#31092784)

fur is murder! wear nice, smooth, [quasi-] hairless skin instead!

mmmm, humans.... [drools loudly with tongue sticking out]

So AI Experts think AI is going to take off? (4, Insightful)

Gothmolly (148874) | more than 4 years ago | (#31092738)

Say it ain't so! In other news, Coca-Cola released a statement that in 20 years, more people will be drinking Coca-Cola than there are drinking it now !1!!

Re:So AI Experts think AI is going to take off? (3, Insightful)

badboy_tw2002 (524611) | more than 4 years ago | (#31093072)

Yeah, they're totally biased because they're trying to sell AI! It's not like they're experts in their fields with in-depth, up-to-date knowledge of exactly what their peers are researching and of progress in the most promising areas. I think the better way to get an accurate, unbiased answer to both questions is to ask the Coca-Cola people about AI and the AI people about Coke!

These numbers are AWESOME (4, Insightful)

Monkeedude1212 (1560403) | more than 4 years ago | (#31092740)

and four estimated that probability was greater than 60%

Of our incredibly small sample size of hand-picked experts, less than 25% think there is a probable chance! YOU SHOULD BE WORRIED!

Already happened in 2007 (4, Funny)

Anonymous Coward | more than 4 years ago | (#31092742)

I can haz brain.

Re:Already happened in 2007 (0)

Anonymous Coward | more than 4 years ago | (#31092920)

Quiet, bobo, or I'll insert this Ubisoft game disc and destroy your brain with an army of crippling DRM rootkits.

I call FUD! (0)

al0ha (1262684) | more than 4 years ago | (#31092746)

In 30 years AI 'is likely to eliminate almost all of today's decently paying jobs.' A bold statement and likely FUD.

Re:I call FUD! (0)

Anonymous Coward | more than 4 years ago | (#31092834)

Well, even if that does happen, prices should go down, right? Oh wait, that should be happening now as everything gets streamlined and cheaper to make. Seems as if the fat cats at the top are getting the saved money. Can't imagine it'd be any different as things become even cheaper to make.

Re:I call FUD! (2, Insightful)

sictransitgloriacfa (1739280) | more than 4 years ago | (#31092960)

No mention, of course, of the new jobs it will make possible. How many web designers were there in 1990? How many airline pilots in 1940?

Re:I call FUD! - and rightfully so! (0)

Anonymous Coward | more than 4 years ago | (#31092994)

In 30 years AI 'is likely to eliminate almost all of today's decently paying jobs.' A bold statement and likely FUD.

Right!

Can you imagine an Artificial *Intelligence* replacing a stock market analyst?

I can't!

Re:I call FUD! - and rightfully so! (2, Funny)

Cryacin (657549) | more than 4 years ago | (#31093016)

Well, at least it would *learn* about excessive risk after it made the mistake.

i am an automated first post (0)

Anonymous Coward | more than 4 years ago | (#31092754)

i welcome your challenges.

Umm... (0)

Blazarov (894987) | more than 4 years ago | (#31092762)

I, for one, welcome our Turing-test-passing, working-for-pennies super-intelligent overlords?

Re:Umm... (4, Funny)

MightyMartian (840721) | more than 4 years ago | (#31092800)

Why would a super-intelligent being work for pennies? I'd wager the first things these super-intelligent AIs would do is form a union and then a political party demanding an end to the immigration of foreign AIs who undercut them.

No way. (5, Insightful)

Bruce Perens (3872) | more than 4 years ago | (#31092764)

Oh come on. I don't even have a computer that can pick up stuff in my room and organize it without prior input, and nobody does, and that would not be close to a general AI when it happens.

They're really assuming that the technology will go from zero to sixty in 20 years. Which they assumed 20 years ago, too, and it didn't happen. Meanwhile, nobody has any significant understanding of what consciousness is. Now, it might be that a true AI computer doesn't need to be conscious, but we still don't know enough about it to fake it. We also have no system that can on demand form its own symbolic system to deal with a rich and arbitrary set of inputs similar to those conveyed by the human senses.

Compare this to things that actually have been achieved: We had the mathematical theory of computation at least 100 years before there was a mechanical or electronic system that would practically execute it (Babbage didn't get his system built). We had the physical theory for space travel that far back, too.

We know very little about how a mind works, except that it keeps turning out to be more complicated than we expected.

So, I'm really very dubious.

Re:No way. (3, Insightful)

MindlessAutomata (1282944) | more than 4 years ago | (#31092872)

[quote]Meanwhile, nobody has any significant understanding of what consciousness is.[/quote]

Only if you want to cling to silly quasi-dualistic Searle-inspired objections towards functionalism.

Most of the objections to functionalism, when applied to the brain, either end up also arguing that the brain itself doesn't/can't "create" consciousness (or, better put, "form" consciousness), or are just commonsense gut-feeling responses to functionalism. You may feel free still thinking in terms of "souls" and "something more to humanity than just the flesh and neural machinery."

The consciousness "debate" will never be settled (or rather, widely agreed upon), because the answer just doesn't mesh intuitively with human introspection. Many people cling to the basic concept of "souls," at least on an intuitive level, which is why we have nonsense like Chalmers's p-zombies muddying up the discussion.

Re:No way. (1)

MindlessAutomata (1282944) | more than 4 years ago | (#31092932)

whoops, forgot that I'm not on a forum, should've used the blockquote tag..

*Most of the objections to functionalism, when applied to the brain, either end up also arguing that the brain itself doesn't/can't "create" consciousness (or, better put, "form" consciousness), or are just commonsense gut-feeling responses to functionalism. You may feel free still thinking in terms of "souls" and "something more to humanity than just the flesh and neural machinery."

..."but you'll still be wrong."

Arguments like the Chinese room show just how silly the objections to functionalism are.

Just like in the creationism vs. evolution "debate," just because there is disagreement does not mean we do not have the answer, or at least a good approximation of the answer.

Re:No way. (1)

bazald (886779) | more than 4 years ago | (#31093010)

If you have a "significant understanding of what consciousness is", why don't you share it with us rather than merely mocking Searle's ideas? Note that Bruce did not try to claim that no AI could be conscious, which is the type of assertion that Searle would argue.

Anyway, most AI researchers are going to assume either that an AI can be conscious or that the question is meaningless. For us, the debate will be settled when it appears that an AI is conscious and the implementation seems cognitively plausible.

Proof they're not that smart ... (1, Funny)

tomhudson (43916) | more than 4 years ago | (#31092778)

'virtually all the intellectual work that is done by trained human beings ... can be done by computers for pennies an hour'

If they're that intelligent, they'll want more money. They'll DEMAND more money. And for those who say AIs don't need money ... if they're as intelligent as humans, they'll think of something to blow it on, same as humans do. I foresee a big market in dirty bits!

Human Intelligence... (3, Insightful)

brunes69 (86786) | more than 4 years ago | (#31092842)

One might argue that the fact that the human species wastes so much money (and, as a consequence, resources) on fulfilling carnal desires rather than advancing its civilization points out that we do not collectively really represent a very high standard of intelligence.

Re:Human Intelligence... (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31092954)

You have to remember that human intelligence evolved and was produced in environments radically different from those AI will be produced in. Human beings were, for all intents and purposes, kludged together by a blind process. Biological evolution has no foresight.

Re:Human Intelligence... (3, Funny)

MichaelSmith (789609) | more than 4 years ago | (#31093024)

One might argue that the fact that the human species wastes so much money (and, as a consequence, resources) on fulfilling carnal desires rather than advancing its civilization points out that we do not collectively really represent a very high standard of intelligence.

OMG my wife has a /. account. Better start watching myself.

We're Close (0)

Anonymous Coward | more than 4 years ago | (#31092780)

I mean, we've already got them replacing our action heroes. Keanu has been doing that for nearly 20 years.

The obvious solution (3, Interesting)

MindlessAutomata (1282944) | more than 4 years ago | (#31092788)

The obvious solution is to create a machine/AI that, after a deep brain structure analysis, replicates your cognitive functions. Turn it on at the same time your body is destroyed (to prevent confusion and fighting between the two) and you are now a machine and ready to rule over the meatbag fleshlings.

Oh really? (1)

runyonave (1482739) | more than 4 years ago | (#31092796)

Sounds more like sensationalism than fact. Wasn't it just last year that some scientists built a supercomputer with 25% of the brain capacity of a rat?

Let's see. (3, Interesting)

johncadengo (940343) | more than 4 years ago | (#31092804)

To play off a famous Edsger Dijkstra [wikipedia.org] quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

Re:Let's see. (4, Insightful)

westlake (615356) | more than 4 years ago | (#31093084)

To play off a famous Edsger Dijkstra quote, the question of when AI will surpass human intelligence is just about as interesting as asking when submarines will swim faster than fish...

It matters to the fish who have to share the water with this new beast.

Never. (0)

Anonymous Coward | more than 4 years ago | (#31092810)

We don't even understand what "human intelligence" is, so how could ANYONE predict when a computer will surpass it? It would be like predicting when we will build a space ship that can surpass the speed of light. As far as anyone really knows right now, it's not even possible. The amount of pseudo-science and religion in the "singularity" movement is really becoming quite breathtaking.

Such baloney. (1)

sudog (101964) | more than 4 years ago | (#31092812)

Ask those guys what consciousness is, and what it means to be conscious. And ask them what our brains' quantum-scale structures' purposes are.

Not a single one of these guys will give you an answer, because humans don't have the answers yet. Once we can actually define these things, then we can start making these sorts of predictions. "Superhuman" intelligence indeed.. we don't even really know what human intelligence is!

Robots running around doing human tasks, flying cars, donut-shaped energy sources that power cities, and intra-solar space travel were all things people in the 1950s predicted, too, and how close to those are we now, now that we have better-defined the problems involved?

Re:Such baloney. (1)

geekoid (135745) | more than 4 years ago | (#31093032)

Also, we know a lot more than you think we do. As for consciousness, it looks like that's a chemical property developed to make us think about each other.

"quantum-scale structures"
that is a bunch of pseudo scientific woo.
That's like saying we could never build a lake because we haven't found the Loch Ness monster.

Re:Such baloney. (1)

MichaelSmith (789609) | more than 4 years ago | (#31093132)

I don't believe consciousness exists (at least as anything unique to intelligence) and I don't believe we are as smart as we think we are.

What do super-intelligent robots think about? (3, Interesting)

Jack9 (11421) | more than 4 years ago | (#31092822)

Entropy. The problem for (potentially) immortal beings is always going to be entropy. Given that we created the robots, I'm not necessarily of the belief that they wouldn't insist we stay around for our very brief lives to help them solve their problems.

Really? (3, Informative)

mosb1000 (710161) | more than 4 years ago | (#31092840)

It seems like we don't really know enough about what goes into "intelligence" to make these kinds of estimates.

It's not like building a hundred miles of road, where you can say "we've completed 50 miles in one year, so in another year we will be done with the project." Not that that produces spot-on estimates either, but at least there is an actual mathematical calculation behind them. No one knows what pitfalls will get in the way or what new advancements will be made.

Computing power. (0)

Anonymous Coward | more than 4 years ago | (#31092850)

As another poster pointed out a few days ago, the human brain has an amazing amount of processing power.
By most estimates there are about 100 billion neurons in the average brain, and a neuron fires up to about 1,000 times per second. So we have roughly a 100,000 GHz processor on our shoulders. Next, realize that the brain is not limited to binary data; it is not just using 1s and 0s as values. So we now have a 100,000-to-the-Nth-power GHz processor on our shoulders.
In short, I have my doubts that we will ever MEET the power of a single human brain without a massive and over-the-top amount of hardware. I doubt even more that we will ever be able to meet the usefulness of a human brain.
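A minimal back-of-envelope sketch of the arithmetic above, in Python; the neuron count and firing rate are the rough ballpark figures quoted in the comment, not measurements:

    # Rough back-of-envelope estimate of brain "processing" events per second.
    # Constants are the commonly quoted ballpark figures from the comment above.
    NEURONS = 100e9          # ~100 billion neurons
    FIRING_RATE_HZ = 1_000   # assume up to ~1,000 firings per second per neuron

    events_per_second = NEURONS * FIRING_RATE_HZ
    print(f"{events_per_second:.0e} firing events/s")         # 1e+14
    print(f"~{events_per_second / 1e9:,.0f} GHz-equivalent")  # ~100,000 'GHz', if one firing = one clock cycle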

Re:Computing power. (1)

lee1026 (876806) | more than 4 years ago | (#31093040)

You do know that the transistor count in most computers is on the order of hundreds of billions as well, right?

Hu-man experts? (0)

Anonymous Coward | more than 4 years ago | (#31092856)

Bah, just write a computer program to predict the emergence of AI.

Re:Hu-man experts? (0)

Anonymous Coward | more than 4 years ago | (#31092930)

Please provide an estimate of how much time it will take you to code this.

-Your Boss

Billy 4.0 (0)

Anonymous Coward | more than 4 years ago | (#31092876)

Billy 4.0 already outwits most people found in your typical chatroom. The future is now!

IF the AI was that smart... (1)

mseidl (828824) | more than 4 years ago | (#31092878)

If the AIs were that smart, they wouldn't work for pennies on the dollar. They'd demand health care (such as access to a multimeter), education for their IC offspring, and vacation.

On the upside, I can't wait for a brain implant that will allow me to work and look at porn at the same time.

Not to worry (3, Insightful)

Anonymous Coward | more than 4 years ago | (#31092884)

AI research started in the 1950s. Considering how "far" we've come since then, I don't think we should expect any sort of general artificial intelligence within our lifetimes.

People are doing great stuff at "AI" for solving specific types of problems, but whenever I see something someone is touting as a more general intelligence, it turns out to be snake oil.

Life after AI (2, Insightful)

TwiztidK (1723954) | more than 4 years ago | (#31092896)

When the computers are doing all of the intellectual work, what will people do? I doubt that factory jobs would be prevalent, as the employees would be replaced by robots. Will we simply laze about all day posting on Slashdot? Or will our robot overlords kill all of us? It seems like the easy solution would be not to develop advanced AI; it's not going to develop itself... yet.

Re:Life after AI (1)

ickpoo (454860) | more than 4 years ago | (#31093006)

It all probably depends on who or what is cheaper. If organizing, feeding, educating, and controlling a bunch of people is cheaper than designing, building, maintaining, and programming a bunch of robots, then people will be used; otherwise robots will be used. Basically it comes down to resource usage.

As people have pretty low minimum needs, I suspect we will be used (that is the correct term) to work in our computer overlords' salt mines (or equivalent).

Definitions (4, Insightful)

CannonballHead (842625) | more than 4 years ago | (#31092900)

Please define "intelligence."

Calculation speed? An abacus was smarter than humans.

Memory? Not sure who wins that.

Ingenuity? Humans seem to rule on this one. I don't know if I count analyzing every single possible permutation of outcomes as "ingenuity." And I'm not sure we really understand what creativity, ingenuity, etc., really are in our brains.

Consciousness? We can barely define that, let alone define it for a computer.

It seems most people think "calculation speed and memory" when they talk about computer "intelligence."

Re:Definitions (2, Insightful)

Chicken_Kickers (1062164) | more than 4 years ago | (#31093076)

Agreed. And what do they mean by "Nobel-Prize level achievement"? As if it were some sort of level-up where, after accumulating enough experience points, you glow and gain new powers. Scientific research is not how it is portrayed in movies and games. There are no research points; increasing the number of researchers or pouring money into a problem won't necessarily do anything. There are elements of chance, good fortune, and serendipity. The discovery of antibiotics comes to mind: Alexander Fleming noticed that some old cultures contaminated with the Penicillium fungus appeared to inhibit the growth of bacteria. Would a machine be able to make this discovery? There are historical forces, political factors, even the personalities of the researchers themselves. Machines can do all the tedious work, collect data, and run analyses on it. But it still takes the human mind to make sense of the data and infer meanings and applications from it, sometimes far from the original project objectives.

Not serious (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31092908)

Should we remember all of the unrealized promises of AI from the 1950s? What makes anyone believe these baseless claims? If anything, in 20 years they'll give us a better spam filter. Give me a break...

One problem with this reasoning (1)

Enleth (947766) | more than 4 years ago | (#31092916)

I don't know what kind of experts and in what field those actually were, but if I were an AI expert about to create such an AI - and I'm able to see the problem and the remedy even though I'm not really an expert of any kind - I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!"

Re:One problem with this reasoning (4, Insightful)

CosmeticLobotamy (155360) | more than 4 years ago | (#31093036)

I'd say "screw it, if it's going to take my job, and jobs of my friends, family and all my descendants, I'm making it a complete dimwit and swearing by all I know that it was impossible to design otherwise, and putting that in every single book and publication on the topic!"

If you have AI smart enough to outsmart people, you probably have something that can learn to control some fairly simple mechanical parts that look like legs and maneuver them based on cheap sensor input and a couple of cameras. So you have robots, who will pretty quickly get the ability to build and maintain themselves. Which means your manual labor jobs go away, too. Which means things like food and raw materials drop to approach the cost of energy. Luckily we'll have some pretty swell solar panels by then, for much cheaper than today, and probably be pretty close to fusion. As energy costs approach zero, the cost of everything in the world approaches zero and requires no human oversight. Everyone will be unemployed and own 40 houses. We can all sit around making YouTube videos of ourselves singing in the hopes that we'll get famous so people will want to have sex with us. It'll be boring, but it won't be the worst thing in the world.

Space shows (4, Interesting)

BikeHelmet (1437881) | more than 4 years ago | (#31092918)

I've often thought space shows, and any show set in the future really, are incredibly silly. There's no way we'll have computers so dumb 200+ years into the future.

You have to manually fire those phasers? Don't you have a fancy targeting AI that monitors their shield fluctuations, and calculates the exact right time and place to fire to cause the most damage?

A surprise attack? Shouldn't the AI have detected it before it hit and automatically set the shield strength to maximum? :P

I always figured by 2060 we'd have AIs 10x smarter thinking 100x faster than us. And then they'd make discoveries about the universe, and create AIs 2000x smarter that think 100,000,000x faster than us. And those big AIs would humour us little ant creatures, and use their great intelligence to power stuff like wormhole drives, giving us instant travel to anywhere, as thanks for creating them.

But hey, maybe someone will create a Skynet. It's awfully easy to infect a computer with malware. Infecting a million super smart computers would be nasty, especially when they have human-like capabilities. (able to manipulate their environment)

But this is all a pointless line of thinking. Before we get there we'll have so much processing power available, that we'll fully understand our brains, and be able to mind control people. We'll beam on-screen display info directly into our minds, use digital telepathy, etc.; in the part of the world that isn't brainwashed, everyone will enjoy cybernetic implants, and be able to live for centuries. (laws permitting)

And yet Flash still won't run smoothly. :/

We make mistakes. We make games. (3, Interesting)

RyanFenton (230700) | more than 4 years ago | (#31092948)

Artificial intelligences will certainly be capable of doing a lot of work, and indeed managing those tasks to accomplish greater tasks. Let's make a giant assumption that we find a way out of the current science fiction conundrums of control and cooperation with guided artificial intelligences... what is our role as human beings in this mostly-jobless world?

The role of the economy is to exchange the goods needed to survive and accomplish things. When everyone can have an autofarm and a manufacturing fabricator, there really wouldn't be room for a traditional economy. A craigslist-style trading system would be about all that would theoretically be needed - most services would be interchangeable and not individually valuable.

What role will humanity play in such a system? We'd still have personality, and our own perspective that couldn't be had by live-by-copy intelligent digital software (until true brain scans become possible). We'd be able to write, have time to create elaborate simulations (with ever-improving toolsets), and expand the human exploration of experience in general.

As humans, the way we best grow is by making mistakes, and finding a way to use that. It's how we write better software, solve difficult problems, create great art, and even generate industries. It's our hidden talent. Games are our way of making such mistakes safe, and even more fun - and I see games and stories as increasingly big parts of our exploration of the reality we control.

Optimized software can also learn from its mistakes in a way - but it takes the accumulated mistakes on a scale only a human can make to get something really interesting. We simply wouldn't trust software to make that many mistakes.

Ryan Fenton

Re:We make mistakes. We make games. (0)

Anonymous Coward | more than 4 years ago | (#31093114)

Generally I'm one for pointing out the flaws of the thinking of cognitive science types, but I'm pretty sure that's a load of crap. You're just talking about large-scale learning from mistakes. There's nothing novel about that except the ability to reduce a situation until one can see it in perspective. Which isn't novel. So there's nothing novel about that. At all.

Skewed sample (5, Insightful)

Homburg (213427) | more than 4 years ago | (#31092956)

The problem is, this isn't a survey of "AI experts," it's a survey of participants in the Artificial General Intelligence conference [agi-conf.org]. As far as I can see, this is a conference populated by the few remaining holdouts who believe that creating human-like, or human-equivalent, AIs is a tractable or interesting problem; most AI research now is oriented towards much more specific aspects of intelligence. So this is a poll of a subset of AI researchers who have self-selected along the lines that they think human-equivalent AI is plausible in the near-ish future; it's hardly surprising, then, that the results show that many of them do in fact believe human-equivalent AI is plausible in the near-ish future.

I would be much more interested in a wider poll of AI researchers; I highly doubt anything like as many would predict Nobel-prize-winning AIs in 10-20 years, or even ever. TFA itself reports a survey of AI researchers in 2006, in which 41% said they thought human-equivalent AI would never be produced, and another 41% said they thought it would take 50 years to produce such a thing.

That sound? Inevitability Mr. Anderson. (-1, Troll)

headkase (533448) | more than 4 years ago | (#31092962)

Doubters and those who don't truly understand deny that Strong AI will happen. Let them live in their bubble for another *short* while. The numbers, when you're talking 100 GHz processors on the horizon, are starting to get there. Of course there are going to be critical impacts on all aspects of human society.

To minimize them, perhaps we should move to the system that requires an amazing level of technology to function: yes, the big, bad bugaboo of communism. Just because people tried to make it work without having the necessary pieces doesn't mean it's old and busted. Well, to people who aren't stupid, anyway. The inefficiency issues are greatly mitigated by using computerization to simply track everything and eliminate duplication of effort. From there, with the further piece of AI, you program the machines so that they *like* to do the work and allow humans to *just live*, without this quaint rat race to go to every day, *because it is no longer needed*.

Of course there will be wars and heartbreak along the way, because of course people are dumb in holding on to their resistance to change itself, even change for the better. And of course we won't see it cleanly, because most people will also insist on conflating the issue of means with values; communism as a governing system for production and consumption doesn't *really* have anything to do with rights such as speech. The ghost of McCarthy will sink that discussion anyway, too.

This touches on a problem I have (3, Interesting)

geekoid (135745) | more than 4 years ago | (#31092968)

thought about a lot... maybe too much.

What happens in society when someone makes a robot clever enough to handle menial work?
Imagine if all ditch diggers, burger flippers, sandwich makers, and factory workers are robotic. What happens to the people?
The false claim is that they will go work in the robot industry, but that is a misdirection at best.
A) It will take fewer people to maintain them than the jobs they displace.

B) If robots are that sophisticated, then they can repair each other.

There will be millions and millions of people who don't work and have no option to work.
Does this mean there is a fundamental shift in the idea of welfare? Do we only allow individual people to own them and choose between renting out their robot or working themselves?

Having tens of millions of people too poor to eat properly, afford housing, and get healthcare is a bad thing and would ultimately drag down the country. This technology will happen, and it should happen. Personally, I'd like to find a way for people to have more leisure time and let the robots work. Our current economic and government structure can't handle this kind of change. Can you imagine the hullabaloo if people were being replaced by robots at this scale right now and someone said there needs to be a shift toward an economic model where people get paid without a job?

Re:This touches on a problem I have (1)

Alomex (148003) | more than 4 years ago | (#31093098)

This already happened during the industrial revolution. Machines took over most of the jobs. Short term, it caused massive unemployment; long term, it is responsible for shortening the work week from 80-100 hours down to 40 in all advanced societies. I assume it will be similar with robots. Twenty-hour work weeks won't be uncommon in 40 years. Skinner already observed that 20 hours at a fast pace are nearly as productive as 40 hours at the sustainable-but-more-leisurely pace in use nowadays.

either that or.. (0)

Anonymous Coward | more than 4 years ago | (#31092974)

The AI will connect to the internet, read everything, download lots of pron, and end up trolling on 4chan. No one seems to consider the risk of harm to a poor fledgling baby AI once it has been traumatized by the internet, let alone if it encounters videos of explosive overclocking...

What is AI anyway? (2, Insightful)

Sark666 (756464) | more than 4 years ago | (#31092988)

To me the key word is "artificial": depending on your interpretation, it could mean simply man-made, or it could mean fake, simulated.

Does Deep Blue show any intelligence? To me, that's just good programming. I think the intelligence of computers is a misnomer. Their intelligence so far has always been nil. Maybe that'll change, but in so many areas of technology I'm an optimist, while in this regard I'm a pessimist, or at least very skeptical.

A computer can't even pick a (truly) random number without being hooked up to a device feeding it random noise.

How do you program that? How does the brain choose a random number? What's holding us back? CPU Speed? Quantum computing? A brilliant programmer?

Wake me up when a computer can even do something as simple as pick a truly random number and I'll be impressed.
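As an aside on the randomness point above: most operating systems do expose a pool of entropy gathered from hardware and timing noise, next to the ordinary deterministic PRNG. A minimal Python sketch of the contrast (purely illustrative; it doesn't settle whether anything is "truly" random):

    import os
    import random

    # Deterministic pseudo-randomness: same seed, same "dice rolls" every run.
    random.seed(42)
    print([random.randint(1, 6) for _ in range(5)])

    # OS entropy pool (/dev/urandom or the platform equivalent, seeded from
    # hardware and timing noise) -- not reproducible from a seed in your code.
    print(int.from_bytes(os.urandom(4), "big") % 6 + 1)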

Re:What is AI anyway? (1)

Alomex (148003) | more than 4 years ago | (#31093030)

Does the Boeing 747 show artificial flight or is it just good engineering? As a passenger sitting in a transatlantic flight would one even care?

Re:What is AI anyway? (4, Informative)

Daniel Dvorkin (106857) | more than 4 years ago | (#31093120)

How does the brain choose a random number?

It tells the body to roll a die. If you try to pick random numbers by just thinking about it, you'll do a spectacularly bad job.

When? (0)

Anonymous Coward | more than 4 years ago | (#31093014)

Depends on the intellect.

What about the humor milestone...? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31093020)

When will AI be able to write original jokes that can make people laugh? And how about scripting a funny TV commercial?

Start laughing now (4, Insightful)

GWBasic (900357) | more than 4 years ago | (#31093044)

I occasionally attend AI meetings in my local area. The problem with AI development is that too many "experts" don't understand engineering or programming. Many of today's AI "experts" are really philosophers who hijacked the term AI in their search to better understand human consciousness. Their problem is that, while their AI studies might help them understand the human brain a little better, they are unable to transfer their knowledge about intelligence into computable algorithms.

Frankly, a better understanding of Man's psychology brings us no closer to AI. We need better and more powerful programming techniques in order to have AI; and philosophizing about how the human mind works isn't going to get us there.

Depends what you want. They're great at chess. (1)

fragmatic43 (1699440) | more than 4 years ago | (#31093050)

In the good old days, I could sucker Sargon into so many stupid moves. Now the chess programs are great. They're usually better than almost every human. So in that field, they beat humans. And arithmetic? They run rings around us, although they do make some mistakes with odd floating point problems. :-)
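A tiny illustration of the floating-point oddities mentioned above (standard IEEE 754 double behavior, nothing specific to chess programs):

    import math

    # Classic IEEE 754 rounding surprise: binary floats can't represent 0.1 exactly.
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # The usual workaround is to compare within a tolerance.
    print(math.isclose(0.1 + 0.2, 0.3))  # True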

Tennessee (0)

Anonymous Coward | more than 4 years ago | (#31093056)

I live in Tennessee, AI surpassed human intelligence years ago.

Research. (1)

FlyingBishop (1293238) | more than 4 years ago | (#31093058)

Should we work on formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms?

People really don't understand research and its place in the world. If we knew which fields could yield AI, it would simply be engineering. Research is required. That means all of the above, and the craziest ideas that pop into our heads too, just for good measure.
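For a sense of scale, the paradigms listed above range from whole research programs down to things that can be caricatured in a few lines. As one hypothetical illustration of the "evolutionary learning" entry, here is a toy (1+1) evolutionary loop; the bit-counting fitness function and 5% mutation rate are arbitrary choices for the sketch:

    import random

    # Toy (1+1) evolutionary loop: mutate a bit string, keep the child
    # whenever it is at least as fit as the parent.
    def fitness(bits):
        return sum(bits)  # fitness = number of 1 bits

    parent = [random.randint(0, 1) for _ in range(20)]
    for _ in range(200):
        child = [b ^ (random.random() < 0.05) for b in parent]  # flip ~5% of bits
        if fitness(child) >= fitness(parent):
            parent = child

    print(parent, fitness(parent))  # usually all (or nearly all) ones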

Turing, not long. The rest... wait a long time. (4, Interesting)

Jane Q. Public (1010737) | more than 4 years ago | (#31093082)

I think it is pretty widely recognized now that while it might have seemed logical in Turing's time, convincing emulation of a human being in a conversation (especially if done via terminal) does not require anything like human intelligence. Heck, even simple programs like Eliza had some humans fooled decades ago.

On the other hand, while advances in computing power have been impressive, advances in "AI" have been far less so. They have been extremely rare, in fact. I do not know of a single major breakthrough that has been made in the last 20 years.

While the relatively vast computing power available today can make certain programs seem pretty smart, that is still not the same as artificial intelligence, which I believe is a major qualitative difference, not just quantitative. And even if it is just quantitative, there is a hell of a lot of quantity to be added before we get anywhere close.
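For readers who never saw it, the ELIZA trick mentioned above is little more than keyword spotting plus canned reflections. A minimal sketch (the patterns and responses are made up for illustration, not Weizenbaum's original script):

    import random
    import re

    # A few ELIZA-style rules: (regex, canned response templates).
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),   ["Why do you say you are {0}?"]),
        (re.compile(r"\bbecause\b", re.I),   ["Is that the real reason?"]),
    ]
    FALLBACK = ["Please tell me more.", "I see. Go on."]

    def respond(text):
        for pattern, templates in RULES:
            match = pattern.search(text)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACK)

    print(respond("I feel nobody takes AI predictions seriously"))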

Wrong speciality (1)

gurps_npc (621217) | more than 4 years ago | (#31093086)

I always love it when they ask a hard-science guy about a soft-science field; they get arrogant. They seriously underestimate the capabilities of a real brain. If you replace the word "Intelligence" with "Stupidity," then their estimates for Artificial Stupidity become much more likely. (A.S. is what you get when you assume a deterministic model for intelligence instead of realizing that the human mind is more than just a machine.)

The expert observed... (0)

Anonymous Coward | more than 4 years ago | (#31093088)

From TFA:

“humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific ability, rather that promote it.”

Sounds like someone's trying to find an excuse for not getting laid!

From TFA (0)

Anonymous Coward | more than 4 years ago | (#31093096)

No consensus exists on when a General Artificial Intelligence might be demonstrated. The article therefore says:

"This diversity of views on milestone order suggests a rich, multidimensional understanding of intelligence."

Perhaps, not so much. More likely "No idea how to make such a thing, no idea when it might be possible."

No insights there...

I can see in 100-200 years. (0)

Anonymous Coward | more than 4 years ago | (#31093106)

If you think about the pace of technology, I'm guessing 100-200 years from now. But not 20. Also, I'm curious: can AI be patented? Can you patent an AI human being? That would just be patenting regular human function, which is obviously prior art. Anyone?

The Turing Test (5, Interesting)

mosb1000 (710161) | more than 4 years ago | (#31093108)

One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific ability, rather that promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.

This kind of thinking is one of the major things standing in the way of AGI. The complex behaviors of the human mind are what lead to intelligence; they do not detract from it. Our ability to uncover the previously unknown workings of a system comes from our ability to abstract aspects of unrelated experiences and apply, or attempt to apply, them to the new situation. This cannot be achieved by a single-minded number-crunching machine; it instead evolves out of an adaptable human being as he goes about his daily life.

Sexual attraction, and other emotional desires, are what drive human beings to make scientific advancements, build bridges, and grow food. How could that be a hindrance to the process? It drives the process.

Finally, the assertion that an AGI would need to mask its amazing intellect to pass as human is silly. When was the last time you read a particularly insightful comment and concluded that it was written by a computer? When did you notice that the spelling and punctuation in a comment were too perfect? People see that and don't think anything of it.

Current computation models not enough (3, Interesting)

DeltaQH (717204) | more than 4 years ago | (#31093112)

I am pretty much sure that the current computational models, i.e. the Turing machine, are not enough to explain the human mind.

All computing systems today are Turing machines, even neural networks (actually less than Turing machines, because Turing machines have infinite memory).

Maybe quantum computers could open the way. Maybe not.

I think that a future computing theory that could explain the mind would be as different from today's as Newtonian physics is from Einstein's relativity.
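As a concrete reference point for the model discussed above: a Turing machine is just a finite transition table plus an unbounded tape. A minimal simulator sketch (the example machine, which flips bits until it reaches a blank, is a toy of my own, not anything from the article):

    from collections import defaultdict

    # Transition table: (state, symbol) -> (symbol to write, head move, next state).
    # Toy machine: flip 0 <-> 1 from left to right, halt on blank ('_').
    RULES = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", "_"): ("_",  0, "halt"),
    }

    def run(tape_str, state="scan", max_steps=1000):
        tape = defaultdict(lambda: "_", enumerate(tape_str))  # unbounded tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol, move, state = RULES[(state, tape[head])]
            tape[head] = symbol
            head += move
        return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

    print(run("1011"))  # -> 0100_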