Slashdot: News for Nerds

Artificial Intelligence at Human Level by 2029?

Zonk posted more than 6 years ago | from the i-need-me-an-implanted-robot-buddy dept.

Robotics 678

Gerard Boyers writes "Some members of the US National Academy of Engineering have predicted that Artificial Intelligence will reach the level of humans in around 20 years. Ray Kurzweil leads the charge: 'We will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029. We're already a human machine civilization, we use our technology to expand our physical and mental horizons and this will be a further extension of that. We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons.' Mr Kurzweil is one of 18 influential thinkers, and a gentleman we've discussed previously. He was chosen to identify the great technological challenges facing humanity in the 21st century by the US National Academy of Engineering. The experts include Google founder Larry Page and genome pioneer Dr Craig Venter."


678 comments

Oblig. (5, Funny)

Anonymous Coward | more than 6 years ago | (#22449964)

I for one welcome our broadly supple, emotionally intelligent overlords.

Re:Oblig. (1, Funny)

Anonymous Coward | more than 6 years ago | (#22450394)

While you're at it, welcome the flying car and Duke Nukem Forever.

Re:Oblig. (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22450406)

Exactly. Human-level AI is the new flying car. It's not going to be around anytime soon, and certainly not by 2029.

Well I'm not holding my breath (0)

Anonymous Coward | more than 6 years ago | (#22449972)

That will be 21 years until we get the first AI first post.

In the meantime, humans will continue to win the first post battle.

Am I right, or am I right?

Re:Well I'm not holding my breath (5, Insightful)

2.7182 (819680) | more than 6 years ago | (#22450048)

Yes, I remember well my youth, reading Gödel, Escher, Bach and Winograd, etc., thinking that the next scientific revolution was coming. Things never got any better than Eliza. Now, as a hard scientist, I strongly feel that the problem is far, far off.

Re:Well I'm not holding my breath (2, Funny)

Anonymous Coward | more than 6 years ago | (#22450292)

Things never got any better than Eliza.
Dude, that was 40 years ago. You need to talk to someone to help you get over her.

Re:Well I'm not holding my breath (1)

Hal_Porter (817932) | more than 6 years ago | (#22450184)

The 'people' getting first post haven't been human for years.

No chance (3, Insightful)

Kjella (173770) | more than 6 years ago | (#22449976)

I mean, it could happen, but this is so far from the current state of the art that I think we're talking 50-100 years forward in time. We have the brute power of computers, but nowhere near the sophistication in software or neural interfaces to do anything like this.

Re:No chance (1)

styrotech (136124) | more than 6 years ago | (#22450418)

I think he's correct, but the summary stated it the wrong way. It should've said:

"based on current trends, human intelligence will reach the level of a computer by 2029"

Hrmmmm (3, Interesting)

BWJones (18351) | more than 6 years ago | (#22449982)

I'll be meeting with Kurzweil in April.... Speaking as a neuroscientist who is doing complex neural reconstructions, I think he's off his timeline by at least two decades. Note that we (scientists) have yet to really reconstruct an actual neural system outside of an invertebrate and are finding that the model diagrams grossly under-predict the actual complexity present.

Re:Hrmmmm (2, Insightful)

Anonymous Coward | more than 6 years ago | (#22450182)

I think he's off by a decimal place.

Now: AI can beat a human at chess. A human designed and set up the game.

20 years from now: AI can autonomously walk up to a nearby human and ask them to play chess. If the invitation is not received well, the robot could make a convincing case as to why the human should change their mind, set up the board, and initiate the game. But in the middle of the game, the AI would have no way to predict, react to, or analyze after the fact an occurrence of unexpected human behavior in the form of violence, humor, insanity, irrational requests, casual misinformation, or conversation.

200 years from now: most of the above problems will be CRUDELY solved, but polish will still be lacking. AI will not be capable of higher abstract imagination.

We're about 500-1000 years from Data's head, and several thousand from his head and body.

The last 10% of the job takes 90% of the time.

You've underestimated Apple (1, Funny)

Anonymous Coward | more than 6 years ago | (#22450234)

In 25 years, the iAI that autonomously walks (with a strut) will be introduced by Apple. It will not play chess; however, it will play checkers, and the board will be set up perfectly.

Re:Hrmmmm (4, Insightful)

DynaSoar (714234) | more than 6 years ago | (#22450202)

And as a cognitive neuroscientist, I say he's off the mark entirely. As per Minsky, a fish swims under water; would you say a submarine swims?

What exactly is the "level of humans"? Passing the Turing test? (Fatally flawed because it's not double blind, btw.) Part of human intelligence includes affective input; are we to expect intelligence to be like human intelligence because it includes artificial emotions, or are we supposed to accept a new definition of intelligence without affective input? Surely they're not going to wave the "consciousness" flag. Well, Kurzweil might. Venter might follow that flag because he doesn't know better and he's as big a media hog as Kurzweil.

I think it's a silly pursuit. Why hobble a perfectly good computer by making it pretend to be something that runs on an entirely different basis? We should concentrate on making computers be the best computers and leave being human to the billions of us who do it without massive hardware.

Re:Hrmmmm (2, Funny)

timeOday (582209) | more than 6 years ago | (#22450240)

Please do not take this personally, but I don't think neuroscience is particularly important to AI. Yes, biology is horribly complex. But airplanes surpassed birds long ago, even though airplanes are much simpler and not particularly bio-inspired. Granted, birds still surpass airplanes in a few important ways (they forage for energy, procreate, and are self-healing, far beyond what we can fabricate in those respects), but airplanes sure are useful anyway. I don't think human-identical AI would have much use anyway, since it would have the same neuroses and demand all the same rights that make humans such a pain to work with.

Re:Hrmmmm (4, Interesting)

LurkerXXX (667952) | more than 6 years ago | (#22450372)

What aircraft corner as fast as barn swallows?

There are still many things we can learn from biology that can be translated to machines. The translations don't have to be 1:1 for us to make use of them. The way birds as well as insects make use of different surface shapes during wing beats has translated into changes in some aircraft designs. They weren't directly incorporated the same way, but they taught us important lessons that we could then implement in different ways with a similar outcome.

I think Neuroscience does have a lot to teach us about how to do AI.

I agree... (4, Insightful)

Jane Q. Public (1010737) | more than 6 years ago | (#22450284)

As a party "outside" the field but interested, I agree with all of you here so far, except that of course you disagree on timelines. :o)

"Artificial Intelligence" in the last few decades has been a model of failure. The greatest hope during that time, neural nets, has gone virtually nowhere. Yes, they are good at learning, but they have only been good at learning exactly what they are taught, and not at all at putting it all together. Until something like that can be achieved (a "meta-awareness" of the data), they will remain little more than automated libraries. And of course, at this time we have no idea how to achieve that.

"Genetic algorithms" have enormous potential for solving problems. Just for example, recently a genetic algorithm improved on something that humans had not improved in over 40 years... the Quicksort algorithm. We now have an improved Quicksort that is only marginally larger in code size, but runs consistently faster on datasets that are appropriate for Quicksort in the first place.

But genetic algorithms are not intelligent, either. In fact, they are something of the opposite: they must be carefully designed for very specific purposes, require constant supervision, and achieve their results through the application of "brute force" (i.e., pure trial and error).
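The "brute force trial and error" character of a genetic algorithm is easy to see in a toy example. The following is a minimal sketch of the classic "OneMax" problem (evolving a bit string toward all 1s), not the Quicksort-evolving system mentioned above; every detail here is invented for illustration:

```python
import random

def evolve_onemax(bits=20, pop_size=30, generations=100, mutation_rate=0.05):
    """Toy GA: evolve a bit string toward all 1s (the 'OneMax' problem)."""
    fitness = sum  # hand-designed for this one task: count the 1 bits
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == bits:
            break  # a perfect individual has appeared
        survivors = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, bits)  # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: each bit flips with small probability
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve_onemax()
print(sum(best))  # best fitness found; the maximum possible is 20
```

Note that every piece of "intelligence" here (the fitness function, the representation, the operators) was supplied by the programmer; the algorithm itself is pure guided trial and error.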

I will start believing that something like this will happen in the near future, only when I see something that actually impresses me in terms of some kind of autonomous intelligence... even a little bit. So far, no go. Even those devices that were touted as being "as intelligent as a cockroach" are not. If one actually were, I might be marginally impressed.

Re:Hrmmmm (1, Funny)

Anonymous Coward | more than 6 years ago | (#22450320)

You're so full of shit. Anyone with anything going doesn't post at Slashdot.

Blue Brain Project (2, Interesting)

vikstar (615372) | more than 6 years ago | (#22450388)

The Blue Brain project [bluebrain.epfl.ch] is already simulating a cluster of 10,000 neurons known as a neocortical column. Although quite good already (in terms of biological realism), their simulation model is still incomplete, with a few more years' work needed to get the neurons working like they do in real life. With more computational power to increase the neuron count, and with better models, they will one day be able to simulate an entire mammalian brain.
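For a sense of scale: the Blue Brain models are detailed, biologically grounded simulations, but even the simplest textbook neuron model, the leaky integrate-and-fire unit, conveys what "simulating a neuron" means. This is a minimal sketch with arbitrary parameter values, far cruder than anything Blue Brain uses:

```python
def simulate_lif(input_current, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates its input, and 'spikes' whenever it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Euler step of: tau * dV/dt = -(V - V_rest) + I  (input in voltage units)
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)  # record the spike time (ms)
            v = v_reset                    # reset after the spike
    return spike_times

# A constant drive held for 100 ms yields a regular spike train.
spikes = simulate_lif([20.0] * 1000)
print(len(spikes))
```

Real cortical neurons have dendritic trees, dozens of ion channel types, and thousands of synapses each; multiplying this toy by 10,000 is nowhere near a neocortical column, which is the point the reconstruction people keep making.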

Re:Blue Brain Project (2, Informative)

BWJones (18351) | more than 6 years ago | (#22450420)

Although quite good already (in terms of biological realism),

While this project is verrry cool, they are not even remotely close to biological realism. Sorry...

their simulation model is still incomplete with a few more years work to get the neurons working like in real life.

That is just it. We are finding that real biological systems from complete neural reconstructions are far more complex with many more participating "classes" of neurons with much more in the way of nested and recurrent collateral connectivity than is predicted by any existing model of neural connectivity.

Re:Hrmmmm (2, Insightful)

podperson (592944) | more than 6 years ago | (#22450398)

I also think that we're unlikely to equal human intelligence except as a curiosity long after we've obtained the necessary technology. Instead, we'll produce AIs with wildly different abilities from humans (far better in some things, such as arithmetic, or remembering large slabs of data, and probably worse in others). Calibrating an AI to be "equal" to a human will be a completely separate and not especially useful endeavor, and it will be something tinkerers do later.

And I suspect that the necessary insights to produce human-like intelligence aren't going to be around for some time. We still have only a foggy idea of how a lot of human intelligence works in the existing hardware.

I'm just some guy (0)

Anonymous Coward | more than 6 years ago | (#22450404)

It seems like what you said was entirely obvious, yet still a complete guess.
That is, there is NO WAY we'll have computers that can think as humans do.

What we may have is the theoretical output capacity of the human brain in CPU power, but that doesn't mean it's artificial intelligence.

In fact, I don't just doubt the idea that we'll have anything like artificial intelligence in 20 years; I laugh at it.

If that were true then basically the world as you know it is about to end and be reborn in about 20 years when machines are capable of human like thought and adaptive problem solving.

The fact is, the most lifelike AI simulations are pathetic. We may have the teraflops of what we imagine the human mind to output, but as you stated, our actual knowledge of how the mind works, let alone translating that to hardware/software, is far behind.

I think we'll have what we today believe to be the human mind's teraflop potential in hardware in 20 years, but it'll be many decades before anything like AI really happens.

How can this guy be so brilliant and think that AI would be here so fast? If today's software is an example of our ability to reach for AI, I think we have more like 200 years before you have AI devices that truly think and solve problems beyond trial and error.

Re:Hrmmmm (0)

Anonymous Coward | more than 6 years ago | (#22450448)

I think what he meant was that by making dumbass statements like this, the National Academy of Sciences is *lowering* the level of human intelligence. More statements like this over the next twenty years will actually prove him right.

Exponential AI? (5, Interesting)

TheGoodSteven (1178459) | more than 6 years ago | (#22449984)

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?

Re:Exponential AI? (4, Informative)

psykocrime (61037) | more than 6 years ago | (#22450026)

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on? If AI does reach the level of human intelligence, and eventually surpasses it, can we expect an explosion in technology and other sciences as a result?

That's the popular hypothesis [wikipedia.org].

Re:Exponential AI? (1)

Jah-Wren Ryel (80510) | more than 6 years ago | (#22450308)

On the other hand, perhaps human intelligence is just not at the level required to create an AI at the same level, much less a better one. If that's true, we are stuck.

Re:Exponential AI? (0)

Anonymous Coward | more than 6 years ago | (#22450040)

Go look up the Singularity. Kurzweil predicts it to be around 2040 or so.

We may see Skynet or WOPR at that time (1)

Joe The Dragon (967727) | more than 6 years ago | (#22450196)

and it will start a Global Thermonuclear War

Re:Exponential AI? (0)

STrinity (723872) | more than 6 years ago | (#22450200)

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on?


You aren't going to get modded up for repeating things Vernor Vinge said twenty years ago.

Re:Exponential AI? (1)

mdenham (747985) | more than 6 years ago | (#22450312)

If artificial intelligence ever gets to the point where it is greater than humans, won't it be capable of producing even better AI, which would in turn create even better AI, and so on?
You aren't going to get modded up for repeating things Vernor Vinge said twenty years ago.
Obviously you're wrong, as he already got modded up.

Re:Exponential AI? (3, Interesting)

wkitchen (581276) | more than 6 years ago | (#22450216)

This positive feedback effect happens to a considerable extent even without machines that have superintelligence, or even what we'd usually consider intelligence at all. It's happening right now. And has been happening for as long as humans have been making tools. Every generation of technology allows us to build better tools, which in turn helps us develop more sophisticated technology. A great example from fairly recent history, and that is still ongoing, is the development of CAD/CAM/CAE tools, particularly those used for design of electronic hardware (schematic capture, PCB, HDL's, programmable logic compilers, etc.), and the parallel development of software development tools. Once computers became good enough to make usable development tools, those tools helped greatly with the creation of more sophisticated computer technology, which supported better development tools.

Superintelligence may speed this up, but the effect is quite dramatic already.

Re:Exponential AI? (1)

wellingj (1030460) | more than 6 years ago | (#22450426)

I know there is a lot of evidence to the contrary, but what scientific law or theory says human intelligence can't advance at the same or better rate?

20 years is too long to predict (3, Insightful)

Bill, Shooter of Bul (629286) | more than 6 years ago | (#22449994)

The farther out you make a projection, the less likely it is to be true. With this one in particular, I just don't see it being a focus of research. Yes, we will have increased levels of intelligence in cars, toasters, and ballpoint pens, but the intelligence will be in a supporting role to make the devices more useful to us. There isn't a need for a human-like intelligence inside a computer. We have enough of those inside human bodies.

Also, I will not be ingesting nanobots to interact with my neurons; I'll be injecting them into my enemies to disrupt their thinking. Or possibly just threatening to do so, to extract large sums of money from various governmental organisations.

Re:20 years is too long to predict (4, Insightful)

Jugalator (259273) | more than 6 years ago | (#22450256)

There isn't a need for a human-like intelligence inside a computer.
And even if there were (and I think this is key to the fallacy in this prediction), we wouldn't have the theories backing the hardware. We will most likely get some super fast hardware within these years, but what's much less certain is whether AI theories will have advanced enough by then, and whether the architecture will be naturally parallelized enough to take advantage of them. Because while we don't know much about how the human brain reasons, we do know that to do it at a temperature as low as 37 degrees Celsius, in an area as small as our cranium (it's pretty damn amazing when you consider this!), it needs to be massively parallelized. And, again, we don't really even have the theories yet. We don't know how the software should best be written.

That's why, even in this day and age of 2008, we're essentially running chatbots based on Eliza from 1966. Sure, there have been refinements and the new ones are slightly better, but not by much in the grand scheme. A sign of this problem is that they give their answers to your questions in a fraction of a second. That's not because they're amazingly well programmed; it's because the algorithms are still way too simple and based on theories from the sixties.
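For context, the pattern-matching core of an Eliza-style chatbot really is this simple. The following is a minimal sketch in the spirit of Weizenbaum's 1966 program, not a reproduction of it; these particular rules and responses are made up for illustration:

```python
import random
import re

# A few pattern -> response templates (hypothetical, for illustration only)
RULES = [
    (r".*\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r".*\bI am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r".*\bmy (.*)", ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(sentence):
    """Return the first matching template, echoing the captured fragment back."""
    for pattern, templates in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I am tired of waiting for real AI"))
```

There is no model of meaning anywhere in there, just regular expressions and canned templates, which is exactly why a reply takes a fraction of a second.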

If the AI researchers claiming "Oh, but we aren't there yet because we haven't got hardware nearly good enough yet" are right, why aren't we even halfway there, with at least far more clever software than chatbots, working on a reply to a single question for an hour? Sure, that would be impractical, but we don't even have software like that which pushes against the boundaries of our current CPUs.

So at this point, if we made a leap to 2029 right now, all we'd get would be super fast Elizas (I'm restricting my AI talk to "general AI" now, not heuristic antispam algorithms, where the algorithms are very well understood and don't form a hurdle). The million dollar question here is: will we, before 2029, have made breakthroughs in understanding well enough how the human brain reasons, along with constructing the machines (biological or not, as necessary) to approximate its structure and form the foundation on which the software can be built?

I mean, we can talk traditional transistor-based hardware all day and how fast it will be, but it will be near meaningless if we don't have the theories in place.

Re:20 years is too long to predict (2, Insightful)

Jane Q. Public (1010737) | more than 6 years ago | (#22450328)

Cars and toasters are NOT "intelligent"!! Not even to a small degree. Just plain... not.

Yes, they do more things that we have pre-programmed into them. But that is a far cry from "intelligence". In reality, they are no more intelligent than an old player piano, which could do hundreds of thousands of different actions (multiple combinations of the 88 keys, plus 3 pedals), based on simple holes in paper. Well, we have managed to stuff more of those "holes" (instructions) into microchips, and so on, but the machines themselves are just as stupid as they have EVER been, including back in the stone age. No intelligence. At all. Not even a little.

Do not mistake complexity for intelligence. A certain amount of complexity might be necessary for intelligence to exist, but on the other hand, things can be enormously complex without the presence of ANY intelligence. Just look at Government, for example.

Some major assumptions (3, Interesting)

Yartrebo (690383) | more than 6 years ago | (#22450000)

How can we be so sure that advances in computers will continue at such a rapid pace? Computer miniaturization is hitting against fundamental quantum-mechanical limits, and it's crazy to expect 2008-2028 to have progress quite as rapid as 1988-2008.

Short of major breakthroughs on the software end, I don't expect AI to be able to pass a generalized Turing Test anytime soon, and I'm pretty certain the hardware end isn't going to advance enough to brute-force our way through.

Re:Some major assumptions (1)

tiffany98121 (1094419) | more than 6 years ago | (#22450068)

because quantum computing is going to blow away our current rate of increase in computing power.

Re:Some major assumptions (1)

_KiTA_ (241027) | more than 6 years ago | (#22450226)

How can we be so sure that advances in computers will continue at such a rapid pace? Computer miniaturization is hitting against fundamental quantum-mechanical limits, and it's crazy to expect 2008-2028 to have progress quite as rapid as 1988-2008.

Short of major breakthroughs on the software end, I don't expect AI to be able to pass a generalized Turing Test anytime soon, and I'm pretty certain the hardware end isn't going to advance enough to brute-force our way through.
Stuff already passes Turing Tests.

Sort of, anyway. [slashdot.org]

To heck with Artificial Intelligence! (4, Insightful)

RyanFenton (230700) | more than 6 years ago | (#22450008)


Artificial intelligence would be a nice tool to reach towards, or to use to understand ourselves... but rare is the circumstance that demands, or is worth the risks involved with, making a truly intelligent agent.

The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.

Artificial intelligence is a nice goal to reach for, but it is nothing compared to the siren's call of memories being able to survive the traditional end of existence: cellular death.

Ryan Fenton

Re:To heck with Artificial Intelligence! (2, Insightful)

clusterlizard (1136803) | more than 6 years ago | (#22450098)

It would come in pretty handy for space exploration.

Re:To heck with Artificial Intelligence! (1)

Fred Ferrigno (122319) | more than 6 years ago | (#22450370)

The real implication to me, is that it will be possible to have machines capable of running the same 'software' that runs in our own minds. To be able to 'back up' people's states and memories, and all the implications behind that.
Maybe some famous person would agree to do that as a favor to future generations, but I don't see how it would really be useful for average people. We are not the "software" that our brains run, we are our brains. The copy of "you" that runs on a computer for thousands of years won't really be you. You'll still be dead.

Probably false alarm ... again (2, Interesting)

golodh (893453) | more than 6 years ago | (#22450020)

For over 40 years, the field of AI has been *littered* with predictions of the type "We will be able to mimic human levels of XXX" (substitute for XXX any of the following: contextual understanding, reasoning, speech, vision, non-clumsy motoric ability).

So far _not one_ of those claims has come true, with the possible exception of the much-vaunted "robotic snake".

So... I'd say: fewer claims, fewer predictions, and more work. Let me know when you've got anything worthwhile to show.

Not to be outdone by forecasters, I have a forecast of my own to make: before the term is up, it will transpire that all this fanfare and this announcement were only ever meant as a means to attract research grants.

Re:Probably false alarm ... again (1)

lambent (234167) | more than 6 years ago | (#22450140)

I concur, except replace 'probably' with 'definitely'.

We have not accomplished any real advances that bear even a faint similarity to what this prediction purports. This is just passing the buck, and hoping that in the future, someone will solve the problem we have not even made any real progress against, yet.

In the end, this is just idiot blathering. It doesn't help further any effort, and is just so many useless words.

I'll make a prediction, too. By 2029, we will realize we've wasted another 20 years trying to achieve human level AI in computers, but people will continue to make wild claims about what will happen 20 years out from that.

In 2029, they'll say, "by 2049, for sure. This time we really really mean it."

Re:Probably false alarm ... again (1)

TrevorB (57780) | more than 6 years ago | (#22450282)

Agreed. Turing-test-passing AI is way up there beyond flying cars in predictions that won't come true for a very, VERY long time, if ever. If we're going to see an AI in 20 years, it's going to be biological rather than electronic. This is a problem that no amount of extra computing power will solve. The theory to make it work just isn't there.

If you could show that a machine (say like SETI at home) could pass a Turing test, even if it needed a day to make a reply, I'd believe it was possible. It's just not coming, folks.

Re:Probably false alarm ... again (4, Interesting)

timeOday (582209) | more than 6 years ago | (#22450286)

So ... I'd say: less claims, fewer predictions, and more work. Let me know when you've got anything worthwhile to show.
Has it occurred to you that all of us already work, to some extent, at the direction of computers? Think of the tens of thousands of pilots and flight attendants... what city they sleep in, and who they work with, is dictated by a computer which makes computations which cannot fit inside the human mind. An airline could not long survive without automated scheduling.

Next consider the stock market. Many trades are now automated, meaning, computers are deciding which companies have how much money. That ultimately influences where you live and work, and the management culture of the company you work for.

We are already living well above the standard that could be maintained without computers to make decisions for us. Of course, as humans we will always take the credit and say the machines are "just" doing what we told them, but the fact is we could not carry out these computations manually in time for them to be useful.

2029? Skynet? Coincidence? (1)

Rich Acosta (1010447) | more than 6 years ago | (#22450022)

From Wikipedia [wikipedia.org]

Skynet gained access to several autonomous military drones (such as the T-1 in Terminator 3), using them to round up survivors, who were forced to build automatic factories and robots that were better at construction than the military robots. Skynet then killed these human slaves, and using the infrastructure they had been forced to start, rapidly designed newer and better machines until it controlled an extremely advanced empire on Earth by 2029.

This is good news and bad news... (4, Funny)

zappepcs (820751) | more than 6 years ago | (#22450024)

Good news: This could herald a lot of good stuff, increased unemployment, greater reliance on computers, newer divides in the class strata of society, further confusion on what authority is and who controls it, as well as greater largess in the well meaning 'we are here to help' phrase department.

Bad news: After reviewing the latest in the US political scene, getting machines smarter than humans isn't going to take so much as we thought. My toaster almost qualifies now. 'You have to be smarter than the door' insults are no longer funny. Geeks will no longer be lonely. Women will have an entire new group of things to compete with. If you think math is hard now, wait till your microwave tells you that you paid too much for groceries or that you really aren't saving money in a 2 for 1 sale of things you don't need. Married men will now be third smartest things in their own homes, but will never need a doctor (bad news for doctors) since when a man opens his mouth at home to say anything there will now be a wife AND a toaster to tell him what is wrong with him.

oh god, this list goes on and on.

Re:This is good news and bad news... (0)

Anonymous Coward | more than 6 years ago | (#22450190)

'You have to be smarter than the door' insults are no longer funny.

You watch this door, it's about to open again. I can tell by the intolerable air of smugness it suddenly generates.

I'm not getting you down am I?

AI may not get that far (3, Funny)

httpcolonslashslash (874042) | more than 6 years ago | (#22450028)

As soon as they make robots that can have sex like humans...what's the point in inventing anything else? All scientists will be busy "researching" their robots.

Re:AI may not get that far (1)

weighn (578357) | more than 6 years ago | (#22450114)

Artificial Insertion? [google.com]
Piers Anthony had some interesting ideas on this ...

Re:AI may not get that far (2, Insightful)

imasu (1008081) | more than 6 years ago | (#22450178)

Or as Scott Adams put it, "[The holodeck] will be society's last invention."

2029? (2, Insightful)

olrik666 (574545) | more than 6 years ago | (#22450030)



Just in time for AI to help me drive my new fusion-powered flying car!

O.

wrong (4, Insightful)

j0nb0y (107699) | more than 6 years ago | (#22450034)

He obviously hasn't been paying attention to AI developments. The story of AI is largely a story of failure. There have been many dead ends and unfulfilled predictions. This will be another inaccurate prediction.

Computers can't even defeat humans at go, and go is a closed system. We are not twenty years away from a human level of machine intelligence. We may not even be *200 years* away from a human level of machine intelligence. The technology just isn't here yet. It's not even on the horizon. It's nonexistent.

We may break through the barrier someday, and I certainly believe the research is worthwhile, for what we have learned. Right now, however, computers are good in some areas and humans are good in others. We should spend more research dollars trying to find ways for humans and computers to efficiently work together.

Re:wrong (0)

Anonymous Coward | more than 6 years ago | (#22450100)

He's not talking about the software shortcuts that most of us are used to regarding AI we see in real life. He's talking about simulating every aspect of the human brain down to every neuron in software. With the current exponential trends in computing that seemingly keep breaking every barrier that people keep referring to, it only makes sense to extrapolate that out to see what the logical conclusion is. They've already started simulating portions of rat brains in software that behave identically to the real counterpart.

Re:wrong (3, Informative)

Alaria Phrozen (975601) | more than 6 years ago | (#22450316)

Excuse me... who are you? You're saying RAY KURZWEIL hasn't been paying attention to AI developments? And you're modded insightful?

http://en.wikipedia.org/wiki/Ray_kurzweil

"Everybody promises that AI will hit super-human intelligence at 20XX and it hasn't happened yet! It never will!" ... well guess what? It'll be the last invention anybody ever has to make. Great organizations like the Singularity Institute http://en.wikipedia.org/wiki/Singularity_Institute [wikipedia.org] really shouldn't be scraping along on such poor budgets. Seriously, if there's even a 0.001% chance of a friendly technological singularity occurring, isn't it worth investigating?

Re:wrong (0)

Anonymous Coward | more than 6 years ago | (#22450350)

According to wikipedia, Ray Kurzweil is 60.

It's obvious that he is just desperate for the singularity to start before he dies. It is wish fulfillment at its worst, leading to poor predictions.

As he has aged, he has unfortunately become wackier and wackier in his attempts to stave off death*, and his wish for technological AI advances is part of this (as he considers AI advancements necessary for a technological singularity which will give people effective immortality).

*(Heck, I read on wikipedia that he touts something called alkaline water for health benefits. I'm a biochemist with my PhD in oxidative radical chemistry and ageing, and it is obvious from the linked article that he understands almost nothing of biochemistry and chemistry. It's sad when smart people go off the deep end, but it happens to a surprising number of them.)

Re:wrong (1)

Cedric Tsui (890887) | more than 6 years ago | (#22450428)

Hmmm. Interesting.

But when can it be said that a computer is smarter than a human? A computer can do certain things better than a human can, but regardless of how many transistors you have, it will never be able to do everything better than a human.
Just what will a computer need to be able to do before we say: yup, they've surpassed us?

I think 200 years is a pretty far cry, and would say that 20 is closer. Computers have only been around for about 200 years (they had punch-card-controlled looms 200 years ago, didn't they?)

Think about the high performance computing labs these days. I can't predict global weather patterns. Can you?

nonsense (1)

timmarhy (659436) | more than 6 years ago | (#22450036)

we aren't even close to the processing power of the human brain.

i'll make a prediction of my own - this guy is after funding.

Re:nonsense (2, Interesting)

bnenning (58349) | more than 6 years ago | (#22450260)

we aren't even close to the processing power of the human brain.

We aren't that far off. Estimates for the computational power of the human brain are around 10**16 operations per second. Supercomputers today do roughly 10**14, and Moore's Law increases the exponent by 1 every 5 years. Even if we have to simulate the brain's neurons by brute force and the simulation has 99% overhead, we'll be there in 20 years. (Assuming Moore's Law doesn't hit physical limits).
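The parent's arithmetic can be sanity-checked in a few lines (a back-of-the-envelope sketch that simply takes the quoted figures at face value; all the constants are the parent's estimates, not established facts):

```python
import math

brain_ops = 1e16            # estimated human-brain operations per second
supercomputer_ops = 1e14    # rough present-day supercomputer throughput
overhead_factor = 100       # 99% simulation overhead -> need 100x the raw ops
years_per_factor_of_10 = 5  # Moore's Law as stated: exponent +1 every 5 years

required = brain_ops * overhead_factor                   # 1e18 effective ops/s
orders_short = math.log10(required / supercomputer_ops)  # 4 orders of magnitude
years = orders_short * years_per_factor_of_10
print(years)  # -> 20.0
```

Change any of the inputs by an order of magnitude and the answer only shifts by five years, which is why estimates like this cluster around "a couple of decades away".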

Re:nonsense (2, Insightful)

shura57 (727404) | more than 6 years ago | (#22450432)

You can similarly compare the temperature of the human brain and observe that the machines have long since surpassed it. Does that make machines smarter? I don't think so.

The brain is insanely parallel, and neurons are not just digital gates; they're more like computers in themselves. Today's machines are a far cry from the brain in how they are built. But sure, you can compare them by some meaningless parameter and say we're close. How about clock frequency: neurons are 1kHz devices, and modern CPUs are in the GHz range now...

Garbage. (1)

FooAtWFU (699187) | more than 6 years ago | (#22450044)

I have little doubt we already have the components necessary to simulate a human-like brain, one way or another, right now. But that's not enough. You need to know how to put it together, how to set it up to be educated somewhat-like-a-human, how to get it some semblance of human-like sensory input (at least for the vision/hearing centers, if you're interested in either of those things), and then you need to train it for years and years. So 21-years-off is too optimistic, I think, by at least an order of magnitude, and possibly two or even three.

The "intelligent nanobot" bit is complete and utter garbage, though. Especially by 2029. I don't think you can even begin to fit something "intelligent" in a package about the size of a cell. Even if it's theoretically possible, our technology can currently only construct nano-gears one atom at a time.

Don't do it! (3, Insightful)

magarity (164372) | more than 6 years ago | (#22450054)

(Most) people can go out and get more education to advance from a menial job to a more skilled one when it's taken over by a robot, but wtf do we do if the machines are as smart as we are? Who is going to hire people for even the most advanced thinking jobs when a machine that works for electricity 24/7 can do them? This kind of thing will bring on the luddite revolution in a hurry.

Re:Don't do it! (1)

andy314159pi (787550) | more than 6 years ago | (#22450334)

wtf do we do if the machines are as smart as we are? Who is going to hire any people to do even the most advanced thinking jobs when the machine that works for electricity 24/7 can do it?
We can raise the price of electricity and lower the cost of candles.

Re:Don't do it! (2, Insightful)

lee1026 (876806) | more than 6 years ago | (#22450386)

alternatively, we can just lazily sit around and get the computers to do all of the work.

Retarded (1, Insightful)

mosb1000 (710161) | more than 6 years ago | (#22450056)

I think these nonsense predictions are best described as retarded. You can't predict something that is beyond our current technological capability, since it depends on breakthroughs that are impossible to predict. These breakthroughs could come tomorrow, or they could never come at all. I don't know why I'm posting this. Even talking about this fantastic nonsense is a waste of time.

Quite simply BS (1)

wanax (46819) | more than 6 years ago | (#22450072)

Until we figure out how a water buffalo can be an individual at one spatial scale, and part of a herd as a texture at another scale... just in vision... we won't have smart computers.

Fat chance (1)

popmaker (570147) | more than 6 years ago | (#22450082)

For something as inexplicably complex as our brain... which, by the way, we HAVEN'T understood more than a fraction of. For something as mysterious as emotions - a problem behind a philosophical question we don't KNOW if we've come any closer to solving in the last 2000 years... I say FAT F***ing CHANCE!

Either they are stupid (in which case, I have to admit, solving the problem is a tiny bit easier) or this is publicity.

I don't care that there's a while until 2029 hits us; we have no real reason to believe this prediction is useful.

Whatever Could They Mean? (5, Funny)

flyneye (84093) | more than 6 years ago | (#22450088)

" Artificial Intelligence will reach the level of humans"
Buddy, I've been around more than four decades. I've yet to see more than a superficial level of intelligence in humans.
Send your coders back to the drawing board with a loftier goal.

Which human? (0)

Anonymous Coward | more than 6 years ago | (#22450090)

I'm pretty sure computers are already at the level of intelligence of many prominent humans.

The End of Intelligent Design (5, Interesting)

denoir (960304) | more than 6 years ago | (#22450094)

It is not too much of an overstatement to say that the field of AI has not significantly progressed since the 1980's. The advancements have been largely superficial, with better and more efficient algorithms being created but without any major insights, much less a road map for the future. While methods that originated as AI research are more common in real-world applications, the research and development of new concepts has ground to a halt - not that it was ever a question of smooth, continuous progress.

It might seem like the lack of AI development is a temporary problem and altogether a peripheral issue. It is however neither - it is a fundamental problem and it affects all software development.

Early in the history of computing, software and hardware development progressed at a similar pace. Today there is a giant and growing gap between the rate of hardware improvements and software improvements. As most people involved in the study of software engineering are aware, software development is in a deep crisis.

The problem can be summarized in one word: complexity. The approach to building software has largely been based on traditional engineering principles and approaches. Traditional engineering projects never reached the level of complexity that software projects have. As it turns out, humans are not very good at handling and predicting complex systems.

A good example of the problems facing software developers is Microsoft's new operating system, Windows Vista. It took half a decade to build and cost nearly 10 billion dollars. At two orders of magnitude higher cost than its previous incarnation, it featured relatively minor improvements - almost every radical new feature originally planned (such as a new file system) was abandoned. The reason is that the complexity of the code base had become unmanageable. Adequate testing and quality assurance proved impossible, and the development cycle became painfully slow. Not even Microsoft, with its virtually unlimited resources, could handle it.

At this point, it is important to note that this remains an unsolved problem. It would not have been solved by a better-structured development process or directly by better computer hardware. The number of free variables in such a system is simply too great to be handled manually. A structured process and standardized information transfer protocols won't do much good either. Complexity is not just a quantitative problem; at a certain level you get emergent phenomena in the system.

Sadly, artificial intelligence research, which is supposed to be the vanguard of software development, is facing the same problems. Although complexity is not (yet) the primary problem there, manual design has proved very inefficient. While there are clever ideas that move the field forward on occasion, there is nothing to match the relentless progress of computer hardware. There exists no systematic recipe for progress.

Software engineering is intelligent design, and AI is no exception. The fundamental idea persists that it takes a clever mind to produce a good design. The view that it takes a very intelligent thing to design a less intelligent thing is deeply entrenched on every level. This clearly pre-Darwinian view of design isn't based on some form of dogma, but on a pragmatism and common sense that aren't challenged where they should be. While intelligent design was a good approach while software was trivial enough to be manageable, it should have become blindingly obvious that it was untenable in the long run. There are approaches that operate at the meta level - neural networks, genetic algorithms, etc. - but they are thoroughly insufficient. All these algorithms are still the results of intelligent design.

So what Darwinian lessons should we have learned?

We have learned that a simple, dumb optimization algorithm can produce very clever designs. The important insight is that intelligence can be traded for time. In a short interval an intelligent human can produce a better design, but given enough time, an optimization algorithm (it doesn't even have to be a good one) will produce superior results. In computer terms time translates to processing power - something that computers have in abundance and that increases exponentially.

We can't sustain humans writing software any more than we can sustain humans going over all the bank transactions in the world. We need computers in both cases. What is needed is software developing better software.

The good news is that biology has shown us that the beginnings can be humble. A molecule that could make crude copies of itself in the primordial soup kicked it off. Once the resources for making new copies ran out, copying and subsequently survival became conditional. Among those crude copies there was variation and once the resources became scarce varieties of the molecules that were taking their building blocks from other self-replicating molecules rather than from the soup had an advantage. Other variations included primitive molecular walls that acted as a defense against other self-replicators. Evolution through natural selection was well under way.

In a similar fashion an artificial digital evolution can be created. The goals will of course not be quite the same - we don't want just any evolved thing so our measure of fitness won't just be survival. We already have it in the weak form of genetic algorithms, but we need to take it to a new level where such algorithms not only produce solutions, but where they can recursively create better algorithms. Each new algorithm would be used to create a better one. As natural evolution has shown us, while not necessarily trivial, the first iteration can be simple. In the natural case, it had to be.
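The "intelligence traded for time" point can be illustrated with a deliberately dumb mutate-and-keep optimizer converging on an arbitrary target string (a toy sketch: the target, alphabet, and mutation rate are all made up for illustration, and the fitness function is handed to the algorithm rather than evolved):

```python
import random

random.seed(42)  # deterministic for illustration
TARGET = "software writing software"  # arbitrary toy goal

def fitness(candidate):
    # dumb score: how many characters already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # blind variation: occasionally replace a character at random
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in candidate)

# start from pure noise and keep whichever variant scores no worse
best = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in TARGET)
generations = 0
while fitness(best) < len(TARGET):
    generations += 1
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best)         # the loop only exits once best equals TARGET
print(generations)  # intelligence traded for (simulated) time
```

The loop contains no cleverness at all; given more processing power it simply gets more generations, which is exactly the trade the parent describes.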

Mod parent UP!!! (1)

mcrbids (148650) | more than 6 years ago | (#22450198)

Man, oh man, I wish I had some mod points right about now!

This has to be one of the most insightful posts I've read in a long time - this subject could easily be expanded into a book and I'd buy the book!

Re:Mod parent UP!!! (1)

Jugalator (259273) | more than 6 years ago | (#22450298)

Yeah, I hadn't really thought about it that way, and it seems quite right to me as a developer too. We no longer have the revolutions in software we had early on. Windows Vista is scarily similar to Windows 95, but Windows 3.1 wasn't similar to MS-DOS. And it's not just a matter of looking at the wrong timeframes - the gap is 11 years in both cases.

The sacred brain and other myths (5, Interesting)

denoir (960304) | more than 6 years ago | (#22450332)

This is a sort of continuation of the parent post.

The comedian Emo Philips once remarked that "I used to think my brain was the most important organ in my body until I realized what was telling me this."

We have tendency to use human intelligence as a benchmark and as the ultimate example of intelligence. There is a mystery surrounding consciousness and many people, including prominent philosophers such as Roger Penrose, ardently try to keep it that way.

Given, however, what we actually know about the brain and its evolution through biological research, there is essentially no justification for attributing mystical properties to our data-processing wetware. Steadily, with the increased capabilities of brain scanning, we have been developing functional models describing many parts of the brain. For other parts that still need more investigation, we do have a picture, even if a rough one.

The sacred consciousness has not been untouched by this research. Although we are far from a final understanding, we have a fairly good idea, backed by solid empirical evidence, that consciousness is a post-processing effect rather than the first cause of decisions. The degree of desperation can be seen in attempts to explain away the delay between conscious response and the activation of other parts of the brain. Penrose, for instance, suggests that yes, there is an average 500 ms delay, but that it is compensated by quantum effects that are time-symmetric - that the brain actually sees into the future, which is then delayed to create a real-time decision process. While this is rejected as absurd by a majority of neuroscientists and physicists, it is a good example of how passionately some people feel about the role of the brain. It is, however, painfully clear that just as we were forced to abandon an Earth-centered universe, we need to abandon the myth of the special place of human consciousness. The important point here is that once we rid ourselves of the self-imposed veil of mystery around human intelligence, we can have a sober view of what artificial intelligence could be. The brain developed through an evolutionary optimization process, and while it gained a lot of benefits, it also took the full blow of the limitations and problems of that process and its context.

Evolution through natural selection is far from the best optimization method imaginable. One major problem with it is that it is a so-called "greedy" algorithm - it has no look-ahead or planning capabilities. Every improvement, every payoff, needs to be immediate. This creates systems that carry a lot of historical baggage - an improvement isn't made as a stand-alone feature but as a continuation of the previous state. It is not a coincidence that a brain cell is a cell like any other - nucleus and all. Nor is it a cell because that is the optimal structure for information processing. It was what could be done by modifying the existing wetware. It is not hard to imagine how that structure could be improved upon if it were not limited by the biological building blocks available to the genetic machinery.

Another point worth making is that our brains are not optimized for the modern kinds of information processing humans engage in - such as writing software. Humans have changed little in the last 50,000 years in terms of intellectual capacity, but our societies have changed greatly. Our technological progress is a side effect of capabilities we evolved because they increased survivability when we roamed the plains of Africa in small family hunter-gatherer groups. To assume the resulting information processing system (the brain) would be the optimal solution for anything else is not justifiable.

There has been, since the 1950's, ongoing research to create biologically inspired computer algorithms and methods. Some of it has been very successful, with simplified models that actually did do something useful (artificial neural networks, for instance). Progress has however been agonizingly slow because of the uphill battle of top-down reverse engineering of a bottom-up built system. While we have continuously and rapidly gained knowledge of the inner workings of the brain, our attempts at transforming that information into useful algorithms have been less spectacular.
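Those "simplified models that actually did do something useful" can be strikingly small - for instance a single perceptron, the 1950's-era neural model, learning logical OR (a toy illustration, not a claim about any particular research system; the learning rate and epoch count are arbitrary):

```python
# A single perceptron learning logical OR: weighted inputs, a hard
# threshold, and a dumb error-driven update rule -- nothing more.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the four cases are plenty
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out          # -1, 0, or +1
        w[0] += lr * err * x1       # nudge weights toward less error
        w[1] += lr * err * x2
        bias += lr * err

predictions = [1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # -> [0, 1, 1, 1]
```

The update rule knows nothing about logic; it is just error-driven weight nudging - which is both why such simplified models worked at all and why progress beyond them has been so slow.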

The study of the brain can and will certainly yield additional insights into how to build new algorithms, but we shouldn't expect any miracles, and we should certainly not have human intelligence as a goal. There is absolutely no reason to end our ambitions there, or even to see biological intelligence as a role model for the artificial kind.

Evolution through natural selection has shown us that a dumb optimization process can result in intelligence, but that does not mean that copying it verbatim is the best solution.

The primary problem with today's approach to software engineering in general, and AI research specifically, is that it slows down progress. Thirty years ago that might have been an inconvenience, but today it is of critical importance. The world now greatly depends on computers and, more than anything, software. If there is one thing we can be certain about, it is that software will play an increasingly important role in just about any imaginable field. A problem with software development is not self-contained; it affects everything else profoundly.

The intelligent design approach to software development is failing and cannot produce what is demanded of it. And it is not a question of progress for the sake of progress. We have, for instance, sequenced the human genome and have a massive amount of data that we are at a loss as to how to handle. Yes, existing methods help find local solutions, but a global, meaningful, and useful understanding eludes us. We have the data but lack software intelligent enough to process it in a meaningful way - and we certainly can't do it with human intelligence alone. Innumerable diseases could be cured and prevented in an extremely short time given the right analysis capabilities.

As conventional software development through intelligent design is already unsustainable, the responsibility falls on AI research. And that field can only be sustainably successful when its mainstream understands that it is ultimately software that should recursively build increasingly better software.

Re:The End of Intelligent Design (1)

scruffy (29773) | more than 6 years ago | (#22450412)

The parent is a great post, but I would substitute the 90's for the 80's because that is when machine learning and probabilistic reasoning got themselves mostly straightened out.

But this decade, can anyone provide any significant breakthrough in AI? It seems that any real results are because of faster and more processors and faster and more memory.

Here is my challenge. Can anyone name an important algorithm or representation from this decade?

intelligence is overrated (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22450112)

It's been my observation that what passes for so-called intelligence is highly overrated.
Haven't you noticed how self-described smart people are generally the least functional folks (e.g., lack EI or street smarts, or are just really dumb)?

I for one don't look forward to computers that act like Rain Man...

I don't think we (as a scientific collective) even know what intelligence is....

I swear to $Deity (0)

Anonymous Coward | more than 6 years ago | (#22450116)

If you guys start the A.I. robot revolution before I lose my virginity, I'm totally haunting your asses when I die and you become slaves to the machines!

Kurzweil has no credibility (1, Troll)

nguy (1207026) | more than 6 years ago | (#22450144)

The guy talks and sells a lot, but he has contributed almost nothing to Artificial Intelligence. Have a look at his publications:

http://scholar.google.com/scholar?as_sauthors=r-kurzweil&as_subj=eng [google.com]

Re:Kurzweil has no credibility (1)

NewbieProgrammerMan (558327) | more than 6 years ago | (#22450208)

I have to say I don't know enough about AI or CS to know what to make of that list. I hear this guy frequently billed as some uber-genius AI scientist...is that just a bunch of self-promoting-nonsense? Can anybody that actually works in AI comment on him?

Re:Kurzweil has no credibility (1)

nguy (1207026) | more than 6 years ago | (#22450264)

I hear this guy frequently billed as some uber-genius AI scientist...is that just a bunch of self-promoting-nonsense?

Yes.

Kurzweil has founded a bunch of moderately successful companies that have sold software vaguely related to AI.

I have to say I don't know enough about AI or CS to know what to make of that list

You don't really have to know that much; just compare what he has done to some other AI people, by searching on Scholar for Peter Norvig, Patrick Winston, Rodney Brooks, Tom Mitchell, or Terry Winograd.

Sounds Familiar? (1)

zdude255 (1013257) | more than 6 years ago | (#22450146)

Didn't they predict the same thing 21 years ago?

Don't think so (1)

DreadPiratePizz (803402) | more than 6 years ago | (#22450148)

Predictions like this have been made in the past, and not even come close. This one is no different. The bottom line is that humans process some information in a non-representational way, while computers must operate representationally. So even if the computational theory of mind is true, a microchip can't mimic it. Hubert Dreyfus has written a great deal on this topic, and provides extremely compelling arguments as to why we'll never have human-type AI. Of course, AI can do a lot of "smart" things and be extremely sophisticated, but it will never pass an unrestricted Turing test.

Re:Don't think so (4, Insightful)

bnenning (58349) | more than 6 years ago | (#22450348)

Predictions like this have been made in the past, and not even come close. This one is no different.

The difference is that in 20 years we may have sufficiently powerful hardware that the software can be "dumb", that is, just simulating the entire physical brain.

The bottom line is that humans process some information in a non-representational way, while computers must operate representationally.

What prevents a computer from emulating this "non-representational" processing? Or is the human brain not subject to the laws of physics?

Predictions are useless in this case (3, Insightful)

The One and Only (691315) | more than 6 years ago | (#22450150)

It's one thing to predict when a building project will be finished or when we'll reach a certain level of raw processing power because these things proceed by predictable means. But strong AI requires us to make theoretical advances. Theoretical advances don't proceed like a building project--someone has to have a clever idea, fully develop and understand it himself and convince others of it. And it won't occur to someone all at once, so we'll need incremental advances, all of which will happen unpredictably.

Luckily for all of us Kurzweil is Stone Cold Crazy (2, Funny)

liquiddark (719647) | more than 6 years ago | (#22450152)

If you read a Kurzweil book, it's as if he understands hope and has no concept of problems. The man is so good at glossing over difficulties he should patent his methods and join the magazine industry.

Re:Luckily for all of us Kurzweil is Stone Cold Cr (0)

Anonymous Coward | more than 6 years ago | (#22450276)

If you read a Kurzweil book, it's as if he understands hope and has no concept of problems. The man is so good at glossing over difficulties he should patent his methods and join the magazine industry.


Indeed, I wonder why he doesn't have a job at SCO...

Where is the proof of possibility (1)

iamacat (583406) | more than 6 years ago | (#22450166)

The human brain is non-deterministic and works very differently from computers. We experience self-awareness with its numerous sensations, which is certainly related to the brain's electrical activity but is nevertheless not fundamentally explained by it. Who is to say that this 2029 computer will actually have a consciousness, or that its consciousness will be similar to ours? On the mechanical level, will it match the human senses of touch and smell, or is the "baby" computer supposed to develop by looking at the world with a single webcam?

I say it's a baseless claim until these questions are addressed. It may be meaningful to say that a computer of a given era will be able to perform a complex but deterministic task which is currently the domain of humans - such as, say, driving a car.

But do we really need it? (1)

3seas (184403) | more than 6 years ago | (#22450186)

As someone once wrote "Artificial Intelligence - nothing is naturally that stupid."

But on another note, what is Artificial Intelligence anyway, but the by-product of automating information (static, active and dynamic) well enough to create the illusion of intelligence?
And what is involved in automating information but simply applying what we already do in creating and dealing with abstractions [abstractionphysics.net], but through a hard mineral-based computer instead of the living biological tissue known as the brain?

Then there is another perspective, with the amount of artificially intelligent people we have running around, do we really need or want machines to emulate them?

Too bad he likes to screw the taxpayer. (0)

Anonymous Coward | more than 6 years ago | (#22450244)

Someone who charges this much for educational software is more concerned with helping his wallet than helping students: Kurzweil 3000 for Windows Professional Color, "Windows-based reading, writing and learning software for struggling students" - $1,495.00.

How pathetic (1, Offtopic)

JulianConrad (1223926) | more than 6 years ago | (#22450258)

It's pretty clear now that the rate of progress has leveled off and fallen well short of the flying cars, space colonization, nanotech assemblers and friendly AI fantasy-future. Physicist Jonathan Huebner has gathered empirical evidence [uri.edu] (PDF) showing that we're pretty much fucked for new, practical technological ideas already, and that includes AI. I'd respect Kurzweil more if he'd stop making an ass of himself with his sci-fi stuff, go back to his lab and work on something useful.

Not a chance (1)

Progman3K (515744) | more than 6 years ago | (#22450302)

We'll certainly have machines that will appear to think and act human, but self-aware?
Nope, I don't buy it.

It's like building a tower to reach the moon: you are able to double the height of the tower every year for the first n years, so based on that rate of growth, you could calculate that we would be on the moon soon. The only problem is that there are implicit limits to how high the tower can get before it is too heavy or too unstable to continue standing.

Everyone claiming we'll have true AI by 2029 is making the same types of mistaken assumptions.

I think human level AI is possible... (0)

Anonymous Coward | more than 6 years ago | (#22450304)

I think human level AI is possible, but not in twenty years, and *not* using binary logic. Binary logic is indeed a part of human intelligence, but only a small part. In contrast, binary logic is the *only* type of logic computers are capable of doing. Yes, they are very good at it, which allows programmers to make computers do many things that seem beyond the scope of basic binary logic, but that's all they can do.

When real AI is achieved, it won't be on a binary computer; it'll be on something that hasn't been invented yet.

Predicted by AI (1)

halcyon1234 (834388) | more than 6 years ago | (#22450324)

The grain of salt here is that the date was predicted by a 2008 AI which, as we all know, is not anywhere near as smart as a 2009 AI.

But will it (1)

EEPROMS (889169) | more than 6 years ago | (#22450336)

bother talking to us? An artificial intelligence system could do trillions of calculations in the time it takes a human to ask "are you happy?"

Only 18 Original Thinkers? (0)

Anonymous Coward | more than 6 years ago | (#22450344)

According to Mr. Boyers, there are only 18 influential thinkers, and Mr. Kurzweil is one of them. One should not simply state as fact such extreme and controversial claims as those, without offering either a reference or an argument in support of them. Please make your case, Mr. Boyers.

That is totally going to (2, Funny)

Lewrker (749844) | more than 6 years ago | (#22450374)

spoil 2029 - the year of Linux on the desktop.

Hello? FDA anyone? (2, Insightful)

Orleron (835910) | more than 6 years ago | (#22450380)

I'm no expert on AI, so for all I know the technology could reach human intelligence by 2029. But nanobots that crawl through your brain? That I can comment on. Bone Morphogenetic Protein (BMP) was discovered by Urist and Reddi in the 1970's, and it took 30 years just to make that product, a simple growth factor, go from bench top to human clinical product. You're telling me that nanobots, a medical device never before seen by the FDA, can be approved, ready, and in use in humans by then? Let me set the record straight: even if artificial intelligence reached human level TODAY, there would be no nanobots crawling through our brains by 2029... maybe by 2039 or 2049. Possibly. So whatever year AI reaches human intelligence level, add 30 to 40 years onto that and you'll have your year for a medical product of that magnitude. Remember, the FDA does not care what science and engineering can do, only that they can do it safely and effectively, which is a lot more difficult to show than a simple experiment proving a concept.

AI Insanity (0)

Anonymous Coward | more than 6 years ago | (#22450384)

Before hooking up an AI at human level or beyond, I really hope they make sure it has human-level senses. An AI with no body sensation (sense of self), none of the five major senses, and no way to learn of its environment is going to have an extremely unhappy time of it. It would likely be even worse if it's been preloaded with enough information to reason in a human-ish way.

What about superhuman hybrid A.I.? (2, Interesting)

robotsrule (805458) | more than 6 years ago | (#22450390)

Whenever I see stories like this and the usual negative rebuttals that follow, I wonder if I am the only person who has read Asimov, Clarke, Crichton, Roddenberry, Heinlein and many others. I am starting to believe it is because we feel we have "dealt" with the bogeyman of "truly aware" A.I., now that Hollywood has confronted it handily via The Terminator and its ilk. In the same way that it was almost comforting to embrace the dark specter of biological terrorism as a pleasant relief from the closer, more real danger of nuclear destruction, focusing on the dawn of A.I. is a relief from the true technological tsunami heading our way.

In the midst of all this talk of pure A.I. is the real, steady progress being made in hooking mammalian brains to computers. So far it is confined to the safe yet icky domain of direct control over robots and other advanced prosthetics, but it is the door to a scenario bigger and more powerful than the "birth of A.I.," to reference The Matrix. What people fail to understand is that we will make huge progress in this area, much faster than in solely silicon A.I. Why? Because we don't have to understand how the mind works to reap powerful benefits from hybrid A.I., as we do with pure A.I. Neurons by their very nature analyze and adapt to patterns and signals; they just need to be connected and protected.

The most disruptive, mind-numbing change heading our way is when human brains can connect with each other over a digital conduit like the Internet. What happens when I can expand my consciousness to maintain far more than the average capacity of 4 to 7 active symbols in my mind, by harnessing the brain capacity of others on a shared peer-to-peer neuronal network? What powerful meta-consciousness will form when your mind can directly alter a visualization held in real time by another, group dreaming as it were? Or perhaps 10 minds, or a thousand? When we unplug, if we ever do, will we feel as if we woke from a greater, more powerful and majestic dream that evaporates as soon as we disconnect, because our minds, by themselves and in comparison, are too tiny to hold the more complex patterns a mind cloud can handle? Perhaps like a butterfly who was dreaming he was a man, now awake and relegated back to simple thoughts of procreation and feeding, to paraphrase Zen?

In closing, what problems now intractable to any single human, due to their complexity and scope, will fall astonishingly quickly to the power of a million minds focused like a laser on their solution? Please don't take the laser analogy lightly. Right now all of us, as any computer programmer knows all too well, are recomputing and resolving billions of thought problems that are complete duplicates of each other. What happens when all that duplication is virtually eliminated and our minds in unison each take one small slice of a much larger problem and tear it to pieces? Heaven or hell, you decide, but coming a lot sooner than any of us think.
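The duplication the parent describes has a familiar analogue in programming: memoization. A minimal sketch in Python, using the classic naive-Fibonacci example as a stand-in for any duplicated "thought problem," shows how a shared cache lets each subproblem be solved exactly once no matter how many callers ask for it:

```python
from functools import lru_cache

# Without sharing: every caller ("mind") recomputes the same subproblems.
calls_without_cache = 0

def fib_plain(n):
    global calls_without_cache
    calls_without_cache += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

# With a shared cache: each distinct subproblem is computed exactly once,
# and every later caller reuses the stored answer.
@lru_cache(maxsize=None)
def fib_shared(n):
    return n if n < 2 else fib_shared(n - 1) + fib_shared(n - 2)

fib_plain(20)
fib_shared(20)
print(calls_without_cache)             # 21891 calls, nearly all duplicates
print(fib_shared.cache_info().misses)  # only 21 distinct subproblems solved
```

The three-orders-of-magnitude gap between 21891 calls and 21 distinct subproblems is the kind of waste the comment imagines eliminating across minds rather than across function calls.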

Technological Singularity (1)

icedcool (446975) | more than 6 years ago | (#22450414)

According to Vinge [wikipedia.org] it'll happen before 2030.

We'll see.

Tag: Bullshit (1)

TheMiddleRoad (1153113) | more than 6 years ago | (#22450422)

I wish I could add the tag "Bullshit!" Call me a disciple of Searle, but brains make minds, and anything else is a wild guess. We don't have the algorithms. I doubt we will for a long, long, long time. We don't have the hardware. We won't have the hardware. We have the barest understanding of how the brain works.

Speaking of dumb predictions (1, Interesting)

JulianConrad (1223926) | more than 6 years ago | (#22450430)

Back in 1978 Robert Anton Wilson predicted that we'd be "immortal" by now [futurehi.net] . Boy, did he get a serious reality-fucking about a year ago.