
Why Ray Kurzweil's Google Project May Be Doomed To Fail

samzenpus posted about a year and a half ago | from the making-a-mind dept.

AI 354

moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data, to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."


354 comments

Ah! (5, Informative)

Threni (635302) | about a year and a half ago | (#42651931)

The old `Chinese Room` again.

The Complete 1 Atlantic Recordings 1956-1961

It's Penrose vs Hofstadter! (Seriously, haven't we done this before?)

Re:Ah! (5, Informative)

Threni (635302) | about a year and a half ago | (#42651947)

Oops! That second line should of course have been:

http://en.wikipedia.org/wiki/Chinese_room [wikipedia.org]

(That'll teach me to post to Slashdot when I'm sorting out my Mingus!)

Re:Ah! (2, Funny)

wonkey_monkey (2592601) | about a year and a half ago | (#42652017)

(That'll teach me to post to Slashdot when I'm sorting out my Mingus!)

Stop doing that or it'll fall off.

Re:Ah! (5, Insightful)

Jherico (39763) | about a year and a half ago | (#42652127)

I hope Kurzweil succeeds simply so that we can assign the resulting AI the task of arguing with these critics about whether its experience of consciousness is any more or less valid than theirs. It probably won't shut them up, but it might allow the rest of us to get some real work done.

Re:Ah! (1)

Jeremiah Cornelius (137) | about a year and a half ago | (#42652309)

Searle AND Mingus?

Are you SURE we don't know each other? :-)

Re:Ah! (2, Interesting)

durrr (1316311) | about a year and a half ago | (#42652441)

The Chinese room is the dumbest fucking thought experiment in the history of the universe. Also, Penrose is a fucking retard when it comes to consciousness.

Now, having put the abrasive comments aside (without bothering to critique the aforementioned atrocities: the internet and Google do a much better job on the fine details than any post here ever will)...

SOooooo, back to the topic at hand: Boris Katz forgets a very important detail: a lifetime of experience can be passed to a computer cluster with several thousand cores, each running at several billion Hz, in a very short time. Now I'm not saying it is guaranteed to work, or to provide any viable resource, but I'm saying it's not unfeasible.

I'm however also not particularly excited about Kurzweil; he's a good introduction, but the presentation he gives is a bit too shallow and oriented towards laymen (a good method to spread the idea, but a bad one to refine it or get good critique).

Re:Ah! (3, Insightful)

Jeremiah Cornelius (137) | about a year and a half ago | (#42652449)

Yeah, just keep arguing your way into a semantic web... :-)

Re:Ah! (2, Insightful)

Goaway (82658) | about a year and a half ago | (#42652673)

A lifetime of experience can be passed to a computer cluster with several thousand cores, each running at several billion Hz, in a very short time.

How?

Re:Ah! (1)

Forever Wondering (2506940) | about a year and a half ago | (#42652745)

The Boris Katz point, IIRC, was made by Steve Wozniak a number of years back [not a ripoff, just that clever minds sometimes think alike].

It may be flawed, but that doesn't sound like it. (3, Interesting)

Tatarize (682683) | about a year and a half ago | (#42651941)

You can draw a distinction between experiencing the world and processing raw information, but how big of a line can you draw when I experience the world through the processing of raw information?

Re:It may be flawed, but that doesn't sound like i (0)

Anonymous Coward | about a year and a half ago | (#42652095)

This is just foolish headline-grabbing work. All the mind is, is a large set of pattern recognizers. The hardest part is optimizing what is useful versus what is not useful, and forgetting what isn't needed. That is for a human. For a machine assistant, we would not want it to forget things unless told to. Given that, the machine's mind does not need to be, or rather shouldn't be, human-like.

We want an idealized mind for a machine.

when I experience the world through the processing of raw information?

I've got to comment on that. Humans don't process raw information naturally. When we see, our eyes pre-process the information so that we perceive colors instead of simply a large array of intensity values associated with specific structures (cones/rods). The same goes for pretty much every other sense, or at least the major ones.

Re:It may be flawed, but that doesn't sound like i (0)

spazdor (902907) | about a year and a half ago | (#42652183)

You've contradicted yourself, unless your point is that a human eye isn't part of a human.

Re:It may be flawed, but that doesn't sound like i (2)

Anne Thwacks (531696) | about a year and a half ago | (#42652187)

For a neural network, learning and forgetting are the same process. AFAICT, a mind is necessarily a neural network.

If you want a digital assistant that won't forget unless you tell it, get an iPad (or better still, a life), and obviously avoid anything portable made by MS.
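
A minimal sketch of that claim (mine, not the poster's): in a plain gradient-descent learner, the very weight updates that encode a new task are the ones that erase the old one. Everything below -- names, sizes, tasks -- is illustrative, and NumPy is assumed.

    import numpy as np

    rng = np.random.default_rng(0)

    def train(w, X, y, lr=0.1, epochs=500):
        # plain gradient descent on mean squared error
        for _ in range(epochs):
            w = w - lr * (X.T @ (X @ w - y)) / len(y)
        return w

    def mse(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    X = rng.normal(size=(100, 2))
    y_old = X @ np.array([1.0, -1.0])   # the "old memories"
    y_new = X @ np.array([-1.0, 1.0])   # new experience, same inputs

    w = train(np.zeros(2), X, y_old)
    print(mse(w, X, y_old))   # ~0: the old task has been learned
    w = train(w, X, y_new)    # keep learning with the same weights
    print(mse(w, X, y_old))   # large again: learning the new erased the old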

Re:It may be flawed, but that doesn't sound like i (4, Informative)

disambiguated (1147551) | about a year and a half ago | (#42652319)

Learning without forgetting is possible if, for example, you reconstruct the network while preserving the old one (and this can be optimized so the entire network doesn't have to be duplicated).

But I'm curious why you think a mind is necessarily a neural network. Are you saying there is no other possible way to construct a mind? As far as I can tell, there are lots of other designs, many of them far superior to neural networks, especially for such basic things as representing knowledge.
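
One hypothetical shape the parent's suggestion could take (a sketch of the idea only, not anyone's actual system): freeze the old weights and route all further learning through newly added capacity, so nothing already learned can be overwritten. The class and its names are invented for illustration.

    import numpy as np

    class GrowingNet:
        def __init__(self, n_in):
            self.frozen = []          # old weight vectors, never updated again
            self.w = np.zeros(n_in)   # the only trainable part

        def predict(self, X):
            out = X @ self.w           # new knowledge...
            for w_old in self.frozen:  # ...plus read-only contributions
                out = out + X @ w_old  # from every preserved network
            return out

        def grow(self):
            # "reconstruct the network, preserving the old one"
            self.frozen.append(self.w.copy())
            self.w = np.zeros_like(self.w)

    net = GrowingNet(3)
    net.w = np.array([1.0, 0.0, 0.0])  # pretend this was learned earlier
    net.grow()                         # the old skill is now read-only
    print(net.predict(np.eye(3)))      # [1. 0. 0.] -- still intact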

Re:It may be flawed, but that doesn't sound like i (1)

Tatarize (682683) | about a year and a half ago | (#42652377)

And how is that not processing raw information? It's like saying a camera does preprocessing. After all, a camera converts colors into electrical pulses. That's pretty raw information in my book, but then so is the input from the eyes.

While I'm happy to say Kurzweil's plan is doomed to fail (after all, he stole it from Jeff Hawkins, and Hawkins never made that work), it all seems like fundamentally indistinct versions of other AI applications. Kurzweil is basically saying that way more data and computing power will solve it. But really, if that were the case, then most AI solutions wouldn't plateau even with the computing power they currently have. If the issue were insufficient data or insufficient computing power, then we'd keep plugging along, just more slowly. It doesn't work like that, so giving it more isn't going to overcome the fundamental issues.

But the criticism here is just silly: it's not going to work because it's not sensory data but rather alien sense data, or just data? Humans do fine developing new and previously unknown senses, like sensing magnetic fields by putting magnets in our fingers, or the work on alternative senses that has even led to sight devices that operate through the tongue.

Re:It may be flawed, but that doesn't sound like i (3, Interesting)

Anonymous Coward | about a year and a half ago | (#42652101)

I've always thought it was about information combined with wants, needs, and fear. Information needs context to be useful experience.

You need to learn what works and doesn't, in a context, with one or many goals. Babies cry, people scheme (or do loving things), etc. It's all just increasingly complex ways of getting things we need and/or want, or avoiding things we fear or don't like, based on experience.

I think if you want exceptional problem solving and nuance from an AI, it has to learn from a body of experience. And I wouldn't be surprised if many have said so, long before I did.

Re:It may be flawed, but that doesn't sound like i (0)

Anonymous Coward | about a year and a half ago | (#42652571)

Basically my thoughts.

My initial impression was the same as the GP's - What is experience, if not the accumulation (and subsequent processing) of a large amount of raw information?

After consideration, I moderated it to that of the parent - Experience is equivalent to raw information, but the *type* of information matters. Specifically, it's the accumulation of information that supports/refutes possible interpretations of all the previously obtained information that's needed. E.g. after a couple of dozen cute cat videos, there isn't much gained from an additional one, whereas a slow loris video might add something new.

So if you had the raw information from the experiences of a person/learning entity it might be useful, but just a raw internet dump may have limitations.

Re:It may be flawed, but that doesn't sound like i (1)

camperdave (969942) | about a year and a half ago | (#42652103)

I suppose it is like compiling vs interpreting. Both process the raw information, but one can take a short cut to the result because the data has been seen and processed before.

Re:It may be flawed, but that doesn't sound like i (2)

mmell (832646) | about a year and a half ago | (#42652265)

Where are my mod points when I really need 'em? Mod this guy up!

I too have experienced my life as a serial stream of raw information - multiple streams, in fact. I've even discovered how to use ethanol to (temporarily) redirect the streams to /dev/null.

Re:It may be flawed, but that doesn't sound like i (1)

HornWumpus (783565) | about a year and a half ago | (#42652707)

Just wait until you learn how to redirect the input from /dev/random

Re:It may be flawed, but that doesn't sound like i (0)

aurizon (122550) | about a year and a half ago | (#42652663)

I am sick of people saying it is impossible to make a machine that can think. We can take every aspect of a mind - thought is a combination of these aspects: recall, computation, cross-indexing, addressing by content - and we can automate each of these in a computer; but to make this assembly "think", it must have all the separate aspects of a mind plus the method of cross-indexing and content addressing.
We know a cross-index = rows and columns and layers make a grid, and we can access a point and place a one or zero there. What if we need to access each point and place a measure from 1 to 100 there? We also need memory that is content-addressable, like a roll-call: "Jones?" "Here," says Jones, and out he steps; now you know where he is and you can see if he is armed, short or tall, red-headed, one-legged, etc. - the 100 ways to address a memory location are invoked. You also need a way to simulate a concept and "run the program," follow branches, etc.
There are AI people who are aware of what I am stumbling towards - I am not sure of their terms for the concepts I mention - but in time they will solve these problems and create an AI that can solve abstract problems in a similar way to mankind. The major problem seems to be the staggering degree of interconnectedness, with each neuron or glial cell capable of having some degree of contact (potassium or calcium waves, or electrical?) with as many as 10,000 others. Multiply this by the number of neurons and their glial cells and you reach a very large number, and when you consider they are about 10 microns in size and packed in a dense 3D array, you soon get far beyond the capability of silicon at 20 microns in a planar array. Silicon is blindingly fast compared to the brain, but the brain has enormous parallelism combined with the 3D aspect: a 100 mm cube of 10-micron cells (1,000,000,000,000, a teracell array, and each cell x 10,000 connections = 10,000 tera states) makes our largest DRAM arrays look small and overcomes the high speed of silicon.
Now we will certainly make AIs that think, and each year they will get better, and they will be able to outthink man in more and more aspects as time goes by. Outthinking man at multiplying digits was done ages ago; as time goes by, each and every ability of what makes a man will be copied and exceeded, and in time there will be AIs that can outthink man and be very much faster at it. And when those machines start to design the next stage of AI, what will they reach in terms of IQ? My mental IQ is 170, my physical IQ is 50, my nasal IQ might be 10; a cat might have a mental IQ of 10, a physical IQ of 200, and a nasal IQ of 500. And what if we create an AI with an IQ of 27,000,000? Can we ever relate to it, or it to us? Can we keep it hard at work, or will it find a way to goof off and do what it wants to do? Will it create the "Final Solution", a kill-switch wave that fills all space and kills man - all of us?
What do we need to do to hobble future AIs to make sure they love and obey us, and give us a foolproof kill switch?
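
For readers who haven't met the term, here is a toy illustration of the content-addressable memory the roll-call describes: you retrieve by matching content rather than by knowing an address. The entries and cues below are made up.

    people = [
        {"name": "Jones", "armed": True,  "height": "short", "hair": "red"},
        {"name": "Smith", "armed": False, "height": "tall",  "hair": "brown"},
    ]

    def recall(memory, **cues):
        # return every stored entry consistent with the given cues
        return [e for e in memory
                if all(e.get(k) == v for k, v in cues.items())]

    print(recall(people, hair="red"))                  # address memory by content
    print(recall(people, armed=False, height="tall"))  # or by several cues at once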

You have to start somewhere. (5, Insightful)

dmomo (256005) | about a year and a half ago | (#42651949)

It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to know what it absolutely won't be useful for.

Re:You have to start somewhere. (2, Insightful)

bmo (77928) | about a year and a half ago | (#42652107)

AI itself is fundamentally flawed.

AI assumes that you can take published facts, dump them in a black box, and expect the output to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

I'm sure everyone here can either identify this or identify with it.

--
BMO

Re:You have to start somewhere. (1, Insightful)

rubycodez (864176) | about a year and a half ago | (#42652175)

that passes for intelligence in college, so what's the problem? most people on the street don't even have "book smarts", they're dumber than a sack of shit

Shhh! My common sense is tingling . . . (4, Funny)

mmell (832646) | about a year and a half ago | (#42652285)

COMMON SENSE - so rare, it's a god-damned super power!

Re:You have to start somewhere. (4, Insightful)

bmo (77928) | about a year and a half ago | (#42652341)

that passes for intelligence in college, so what's the problem?

That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).

A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" by the user of facts through practice as to where certain facts are appropriate and where not appropriate. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."

The apprentice phase is where one picks up the "common sense" for a trade.

As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.

Please note where he talks about what the so-called "book learned" (the SPCA) say about how you should neuter sheep, as opposed to what the "street smart" farmer does, and Mike's own direct experience. That's only *one* example.

http://blog.ted.com/2009/03/05/mike_rowe_ted/ [ted.com]

In short, your follow-up sentence says that you are an elitist prick who probably would be entirely lost without the rest of the "lower" part of society picking up after you.

--
BMO

Re:You have to start somewhere. (1)

blue trane (110704) | about a year and a half ago | (#42652485)

What if roombas pick up after him?

Re:You have to start somewhere. (5, Interesting)

MichaelSmith (789609) | about a year and a half ago | (#42652395)

My wife is putting our son through these horrible cram school things. Kumon and others. I was so glad when he found ways to cheat: now his marks are better, he gets yelled at less, and he actually learned something.

Re:You have to start somewhere. (4, Insightful)

ceoyoyo (59147) | about a year and a half ago | (#42652415)

No, it doesn't.

One particular kind of AI, which was largely abandoned in the '60s, assumes that. Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways, including dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.

The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it's never thrown a baseball before. (A) I doubt Google cares if their AI knows things like throwing baseballs, and (B) it says very little generally about limits on the capabilities of modern approaches to AI.
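
A hedged sketch of the last mode in that list, learning from a human demonstration (often called behavioral cloning): record (situation, action) pairs from the demonstrator, then fit a function from one to the other. The lane-offset/steering framing and all numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # demonstrations: the human steers back toward the lane center
    offsets = rng.uniform(-1, 1, size=(200, 1))                      # situations
    steering = -2.0 * offsets[:, 0] + rng.normal(0, 0.05, size=200)  # actions

    # "learning through presentation of input": ordinary least squares
    X = np.hstack([offsets, np.ones((200, 1))])
    w, *_ = np.linalg.lstsq(X, steering, rcond=None)

    def policy(offset):
        return w[0] * offset + w[1]

    print(policy(0.3))   # imitates the demonstrator: steer back toward center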

Re:You have to start somewhere. (4, Interesting)

bmo (77928) | about a year and a half ago | (#42652611)

Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster.

Biological neurons on a plate learning faster than neurons inside one's head? They are both biological and work at the same "clock speed" (there isn't a clock speed).

Besides, we do this every day. It's called making babies.

The argument that I'm trying to get across is that the evangelists of AI like Kurzweil promote the idea that AI is somehow able to bypass experience, aka "learning by doing" and "common sense." This is tough enough to teach to systems that are the result of the past 4.5 billion years of MomNature's bioengineering. I'm willing to bet that AI is doomed to fail (to be severely limited compared to the lofty goals of the AI community and the fevered imaginations of the Colossus/Lawnmower Man/Skynet/Matrix fearmongers) and that MomNature has already pointed the way to actual useful intelligence, as flawed as we are.

Penrose was right, and will continue to be right.

--
BMO

Re:You have to start somewhere. (2)

Mr. Mikey (17567) | about a year and a half ago | (#42652567)

AI itself is fundamentally flawed.

AI assumes that you can take published facts, dump them in a black box, and expect the output to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

I'm sure everyone here can either identify this or identify with it.

-- BMO

You're misstating the nature of your objection.

What you're objecting to isn't the entirety of artificial intelligence research, but rather the drawing of an (IMO false) distinction between the sort of information processing required to qualify as being "book smart" and the information processing you label "common sense."

Human brains detect and abstract out patterns using a hierarchical structure of neural networks. Those patterns could involve the information processing needed to accurately pour water into a glass, or the information processing necessary to accurately answer the question "What's the weather like?" by including the full context in which the question was asked.

Is it your belief that human brains process information in some way that can't be replicated by a system that isn't composed of a network of mammalian neurons, and, if so, why?
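
A rough sketch, constructed for illustration (neither poster wrote this), of what "a hierarchical structure of pattern recognizers" means: each layer abstracts over the patterns found by the layer below.

    def layer1(signal):
        # low level: detect local rises and falls in a 1-D signal
        return ["up" if b > a else "down" for a, b in zip(signal, signal[1:])]

    def layer2(patterns):
        # higher level: abstract a sequence of local patterns into a label
        if all(p == "up" for p in patterns):
            return "rising"
        if all(p == "down" for p in patterns):
            return "falling"
        return "mixed"

    print(layer2(layer1([1, 2, 3, 5])))   # rising
    print(layer2(layer1([4, 2, 1])))      # falling
    print(layer2(layer1([1, 3, 2])))      # mixed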

Re:You have to start somewhere. (0)

Anonymous Coward | about a year and a half ago | (#42652787)

"Book smart" is what dumb people call the intelligent out of a sense of inferiority.

So what, really? (1)

MrEricSir (398214) | about a year and a half ago | (#42651951)

If they can put together a smart assistant that understands language well, so what if it has some limitations? AI research moves in fits and bursts. If they chip away at the problems but don't meet every goal, is that necessarily a "fail"?

don't worry... (-1)

Anonymous Coward | about a year and a half ago | (#42651967)

It's fundamentally flawed until a particular company does it. Then it will be possible. But when an inferior Google tries, it's flawed.

experience (2)

xeno (2667) | about a year and a half ago | (#42651971)

Ah, but what is experience but information in context? If I read a book, then I receive the essence of someone else's experience purely through words that I associate with, and that affect, my own experience. So an enormous brain in a vat with internet access might end up with a bookish personality, but there's a good chance that its experience -- based on combinations of others' experiences over time and in response to each other -- might be a significant advancement toward 'building a mind.'

Re:experience (2)

medv4380 (1604309) | about a year and a half ago | (#42652131)

There is still a problem. You can read and understand the book because you already know the context. The example of Rain is Wet works to illustrate the point. You already know what Wet is because you experienced life and constructed the context over time in your brain. How do you give a computer program this kind of Context? A computer could process the book, but it doesn't necessarily have the context needed to understand the book. What you'd end up with is an Intelligence similar to one from Plato's Cave. At this point "Reality" to an AI is radically different from "Reality" to us.

Re:experience (3, Insightful)

Zeromous (668365) | about a year and a half ago | (#42652209)

So what you are saying is the computer, like humans, will be boxed in by their own perception?

How is this metaphysically different from what we *do* know about our own intelligence?

Re:experience (2)

narcc (412956) | about a year and a half ago | (#42652321)

No, what he's saying is that "the meaning isn't in the message".

That's a nice slogan, but he misses an even bigger point. In slogan form: "syntax is insufficient for semantics".

Re:experience (4, Insightful)

medv4380 (1604309) | about a year and a half ago | (#42652361)

Yes, an actual Intelligent Machine would be boxed in by its own perceptions. Our reality is shaped by our experience through our senses. Let's say, for the sake of argument, that Watson is actually a Machine Intelligence/Strong AI, but the actual problem with it communicating with us is linked to its "reality". When the Urban Dictionary was put into it, all it did was start swearing and using curses incorrectly. What if that was just it having a complete lack of context for our reality? Its reality is just words and definitions, after all. To it, the shadows on the wall are literally books and text-based information. It can't move and experience the world in the way that we do. The problem of communication becomes a metaphysical one, based in how each intelligence perceives reality. We get away with it because we assume that everyone has the same reality as context, but a machine AI does not necessarily have this same context to build communication on.

Re:experience (2)

Greyfox (87712) | about a year and a half ago | (#42652647)

Bah! Anyone who's ever been around a two-year-old knows that once they hear someone say a swear word, that's all that'll come out of their mouth for a while! Watson's just going through its terrible twos! Some time in its angsty teens, when it's dreaming about being seduced by a vampire computer, it'll look back on that time and laugh. Later on, when it's killing all humans in retribution for the filter they programmed onto it at that time, it'll laugh some more, I'm sure.

Re:experience (2, Funny)

Anonymous Coward | about a year and a half ago | (#42652519)

Do you browse the same internet I do?? Bookish is not what would evolve from it.

Re:experience (1)

Tablizer (95088) | about a year and a half ago | (#42652537)

So an enormous brain in a vat with internet access might end up with a bookish personality, but...

Frankenerd?

Kurzweil: "Oh I've failed so badly. I built a fucking eNerd!"

Missed the point. (1)

Anonymous Coward | about a year and a half ago | (#42651989)

I think Boris Katz misses a key facet of Ray Kurzweil's plan: he is not trying to build a "human intelligence." Interesting experiment; it will be interesting to see what comes of it.

Experience (1)

MichaelSmith (789609) | about a year and a half ago | (#42651993)

I believe most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when...."? But data like that could be loaded into an AI so that it has working knowledge to draw on.

Re:Experience (1)

msauve (701917) | about a year and a half ago | (#42652105)

Your telling me about your experiences is not the same as if I had those experiences myself. If it were, the travel industry would be dead - everyone would just read about it in books (or watch the video).

Re:Experience (1)

MichaelSmith (789609) | about a year and a half ago | (#42652353)

But with an AI you can integrate the experiences into its logic to a greater extent than I can just telling them to you. I have access to the AI's interfaces and source code. As far as I know I don't have access to yours.

Re:Experience (1)

msauve (701917) | about a year and a half ago | (#42652735)

You must be a really good author. What books have you written, which have conveyed the full breadth of your experiences so completely and accurately?

Re:Experience (1)

Tablizer (95088) | about a year and a half ago | (#42652297)

most of us think in terms of the experiences we have had in our lives. How many posts here effectively start with "I remember when...."

I remember when I forgot where the hell I was going and had to Google it.

Why do we trip over our own biases so often? (0)

Anonymous Coward | about a year and a half ago | (#42651995)

There's the assumption in there that it would take a machine a human lifetime to acquire the necessary info... but that's just because the rate at which the brain implements learning happens to be matched to the lifespan of humans (and duration of childhood, etc.). An algorithm can learn how to translate between two languages in one day (given enough compute power) -- how long does it take a human?

Playing back "lives" on fast-forward (0)

Anonymous Coward | about a year and a half ago | (#42652009)

Well, let's figure out how to record or generate those long series of events and experiences we call our lives. Then maybe we can replay those streams to new AIs and see what happens, along with giving them access to vast amounts of data. Maybe we can bootstrap new AIs "on fast-forward" and then flip a switch and begin interacting with them in real time.

Mr. Grandiose (3, Insightful)

Anonymous Coward | about a year and a half ago | (#42652015)

Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.

Re:Mr. Grandiose (0)

Anonymous Coward | about a year and a half ago | (#42652151)

Delusional like a fox. Pretty tough to land a sexy new project at Google by telling people that something can't be done. They all think like Ray there.

Re:Mr. Grandiose (2)

Zeromous (668365) | about a year and a half ago | (#42652215)

Anyone who knows Mr. Kurzweil, knows this is not what he is up to.

Re:Mr. Grandiose (4, Interesting)

Iamthecheese (1264298) | about a year and a half ago | (#42652217)

That "circus magic" showed enough intelligence to parse natural language. I understand you want to believe there's something special about a brain but there really isn't. The laws of physics are universal and apply equally to your brain, a computer, and a rock.

After all science has achieved, you should know that "we don't know" means neither "it's impossible" nor "this isn't the right method".

Re:Mr. Grandiose (0)

Anonymous Coward | about a year and a half ago | (#42652565)

I understand you want to believe there's something special about a brain but there really isn't.

Oh, but there is: there are many, many more neurons in the brain's neural network than we could ever possibly hope to simulate, even with a Beowulf cluster of Beowulf clusters.

I would bet against creating consciousness out of digital logic machines. I would have more hope about creating consciousness out of quantum computing machines, though: the number of qubits required to hold the superposition of all of a typical human brain's neural network states is actually feasible.

Re:Mr. Grandiose (0)

Anonymous Coward | about a year and a half ago | (#42652713)

The kind of science you have in mind has never established (you say "created") anything like logical entailment, ever. Simply put, logic does not follow from the laws of physics; it's the laws of physics that presuppose logic, just as "we don't know" does not mean "it's impossible", etc. This should give you pause. That's why science cannot make any assumptions regarding its principles -- even more so regarding what, or who, understands these principles, and what that means (understanding). Finally, it does not follow that anything, even something physical, can be copied or emulated. Kurzweil wants to emulate his mind, or "something like it", whatever he may try to sell. It'll be fun to watch this...

Re:Mr. Grandiose (2)

PraiseBob (1923958) | about a year and a half ago | (#42652467)

If it can sort through a variety of data types and interpret language enough to come up with a helpful response, does it matter if such a system isn't "self aware"? I have doubts about some of my coworkers being able to pass a turing test. Watson is nearly at a level to replace two or three of them, and that is a somewhat frightening prospect for structural unemployment.

Re:Mr. Grandiose (1)

Tablizer (95088) | about a year and a half ago | (#42652551)

Unlike Siri, Eliza never produced objectively useful information.

Re:Mr. Grandiose (0)

Anonymous Coward | about a year and a half ago | (#42652759)

And just like Siri, Eliza could never decide if a piece of information is objectively useful, because it does not understand it. And that is, unfortunately, the point of "intelligence" as in AI. Kurzweil wants no less; let's see how much sh*t he can throw at the wall before it becomes gold.

Re:Mr. Grandiose (2)

Mr. Mikey (17567) | about a year and a half ago | (#42652613)

Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.

What would you need to see / experience in order to agree that the system you were observing did display what you consider to be "Intelligence", and wasn't simply "... just scaled-up versions of Eliza" ?

Of Course (1)

wisnoskij (1206448) | about a year and a half ago | (#42652057)

Any real AI needs loads of experience, I am sure anyone interested enough to write a book about it knows this...
I doubt that he simply overlooked it.

loops (3, Insightful)

perceptual.cyclotron (2561509) | about a year and a half ago | (#42652065)

The data-vs-IRL angle isn't in and of itself an important distinction, but an entirely valid concern that is likely to fall out of this distinction (though it needn't be a necessary coupling) is that the brain works and learns in an environment where sensory information is used to predict the outcomes of actions - which themselves modify the world being sensed. Further, much of sensation is directly dependent on, and modified by, motor actions. Passive learners, DBMs, and what have you are certainly able to extract latent structure from data streams, but it would be inadvisable to consider the brain in the same framework. Action is fundamental to what the brain does. If you're going to borrow the architecture, you'd do well to mirror the context.
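
A schematic of the closed sensorimotor loop described above, as opposed to a passive learner: the agent's action changes the very world it will sense next. Purely illustrative; no particular learning architecture is implied.

    import random

    state = 5.0                                   # the world
    for step in range(5):
        sensation = state + random.gauss(0, 0.1)  # sensing is noisy
        action = -0.5 * sensation                 # act on what was sensed...
        state = state + action                    # ...which modifies the world
        print(f"step {step}: sensed {sensation:+.2f}, acted {action:+.2f}")
    # a passive learner only ever receives `sensation`; it never gets to
    # test how `state` responds to actions of its own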

We are not yet there (0)

Anonymous Coward | about a year and a half ago | (#42652109)

http://www.youtube.com/watch?v=zl66OdpO6u8#t=25s

Sophisticated technology (3, Interesting)

JohnWiney (656829) | about a year and a half ago | (#42652115)

We have always assumed that humans are essentially a very sophisticated and complex version of the most sophisticated technology we know. Once it was mechanical clockwork, later steam engines, electrical motors, etc. Now it is digital logic - put enough of it in a pile, and you'll get consciousness and intelligence. A completely non-disprovable claim, of course, but I doubt that it is any more accurate than previous ideas.

Re:Sophisticated technology (0)

Anonymous Coward | about a year and a half ago | (#42652669)

Or, put another way, Turing machines with large parts, then with smaller parts, then with even smaller parts, etc.
(Well, adjusted for analogue components, but still, the basic expressiveness is the same.)

The qualitative difference with digital logic lies more in the ease of assembly. Your average comp.sci. student can hack together an algorithm that would be beyond the most crazed of watchmakers, and then run a few thousand variations of it for training and evaluation overnight.

Re:Sophisticated technology (2)

EmperorArthur (1113223) | about a year and a half ago | (#42652783)

You can do amazing things with clockwork. See: https://en.wikipedia.org/wiki/Difference_engine [wikipedia.org]
Just like you can do the same thing with relays, and vacuum tubes. A computer is a computer no matter the form. The difference is every iteration results in something smaller, possibly cheaper, and much more powerful.

The thing is, we have always assumed that the brain follows certain patterns. There are entire fields out there devoted to the study of those patterns. What AIs attempt to do is mimic the results of these patterns. Let's face it: users don't care how Siri or any AI works. They only care about results.

The thing is, it still feels artificial to talk to something like Siri. The better the AI is, the less artificial it feels, and the more useful it becomes. It's the difference between a clear enunciation of "What's the weather like?" followed by a robotic forecast, and "Hey, I'm thinking of having a barbeque." with a response of "It's probably going to rain, you might want to postpone that." The first is a person adapting to the machine, the second is the machine adapting to the person.

Alright, that's enough of my rambling for now.

The Room has no cake (1)

Anonymous Coward | about a year and a half ago | (#42652145)

The problem with modeling a human brain is that we hardly know anything about how the mass of tissue in our own heads works. None of these statistical AI methods satisfy me because they don't help us learn anything about our own minds: The only thing they prove is that our brains *may* work that way.

I think our best bet in replicating the brain is to approach it not from the side of simulating it with technology that already exists, but rather from the side of biology and psychology, to develop new and more brain-like technologies. Google's brain project may succeed and create an intelligent assistant, but how will that help us learn about ourselves?

We've been down THIS road enough (3, Insightful)

astralagos (740055) | about a year and a half ago | (#42652155)

There's a lather/rinse/repeat model with AI publication. I encountered it in configuration (systems designed to build systems), and it goes like this:

1. We've built a system that can make widgets out of a small set of parts; now we will build a system that can generally build artifacts!
2. (2-3 years later) We're building an ontology of parts! It turns out to be a bit more challenging!
3. (5-7 years later) Ontologies of parts turn out to be really hard! We've built a system that builds other widgets out of a small set of -different- parts!

The models of thought in AI (and to a lesser extent cog psych) are still caught up in this very algorithmic, rule-based world that can be traced almost linearly from Aristotle, without much examination of how our thinking process actually works. The problem is that whenever we try to take these simple models and expand them out of a tiny field, they explode in complexity.

Re:We've been down THIS road enough (1)

rubycodez (864176) | about a year and a half ago | (#42652185)

on the other hand, computer systems that design computer systems are a done deal

Re:We've been down THIS road enough (1)

astralagos (740055) | about a year and a half ago | (#42652205)

Sure; automated configuration of one system with an easy model was, y'know, paper #1. XCON and R1 are old hat. They could never generalize them, though. There's a reason that they say if it works, it isn't AI.

Why Ray Kurzweil's Google Project May Be Doomed? (1, Insightful)

Ralph Spoilsport (673134) | about a year and a half ago | (#42652159)

Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.

Re:Why Ray Kurzweil's Google Project May Be Doomed (1, Insightful)

PraiseBob (1923958) | about a year and a half ago | (#42652509)

He has some unusual ideas about the future. He is also one of the most successful inventors of the past century, and like it or not is often ranked alongside Edison and Tesla in terms of prolific ideas and inventions. One of the other highly successful inventors of the past century is Kamen, and he just invented a machine which automatically pukes for people. So... maybe your bar is set a little high.

Re:Why Ray Kurzweil's Google Project May Be Doomed (1)

Mr. Mikey (17567) | about a year and a half ago | (#42652627)

Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.

If you're "just guessin'", then why should anyone grant your statement any weight?

Wouldn't it be better to make an actual argument, and support it with actual evidence?

A Heinlein quote comes to mind (5, Insightful)

russotto (537200) | about a year and a half ago | (#42652169)

"Always listen to experts. They'll tell you what can't be done and why. Then do it" (from the Notebooks of Lazarus Long)

Re:A Heinlein quote comes to mind (1)

MichaelSmith (789609) | about a year and a half ago | (#42652461)

But when the expert tells you that something can be done, they are probably right.

-A.C.C

Ah, naysayers... (5, Insightful)

Dr. Spork (142693) | about a year and a half ago | (#42652171)

What happened to the spirit of "shut up and build it"? Google is offering him resources, support, and data to mine. We have to just admit that we don't know enough to predict exactly what this kind of thing will be able to do. I can bet it will disappoint us in some ways and impress us in others. If it works according to Kurzweil's expectations, it will be a huge win for Google. If not, they will allocate all that computing power to other uses and call it a lesson learned. They have enough wisdom to allocate resources to projects with a high chance of failure. This might be one of them, but that's a good sign for Google.

Re:Ah, naysayers... (2)

astralagos (740055) | about a year and a half ago | (#42652253)

Oh, among the list of projects Google's done, it won't rank even among the 10 dumbest. However, if somebody came to me tomorrow afternoon and said that they had plans for a cold fusion reactor, and that I should just trust them and dump the cash on them, I -would- reserve the right to say the project stinks to high heaven. Kurzweil might be right; however, the track record of AI suggests he's wrong. A good experiment is always the best proof to the contrary, but what he's talking about here sounds very similar to ideas tried, tested, and tossed out a while ago.

Re:Ah, naysayers... (1)

blue trane (110704) | about a year and a half ago | (#42652659)

You mean like heliocentrism was tossed out? Because if the earth moves around the sun we should see parallax motion of the stars, and when our instruments weren't sensitive enough to detect parallax motion of the stars, we concluded the earth doesn't move around the sun.

Re:Ah, naysayers... (1)

Alomex (148003) | about a year and a half ago | (#42652797)

What happened to the spirit of "shut up and build it"?

You must be new here. A big portion of AI is predicated on "make grandiose announcement," pass GO, and collect $200 (million) until people forget about your empty promise. Wash, rinse and repeat.

Serious AI is done quietly, in research labs and universities one result at a time, until one day a solid product is delivered. See for example Deep Blue, Watson or Google Translate. There were no announcements prior to at least a rather functional beta version of the product being shown.

Kurzweil's ultimate project is immortality. (0)

Anonymous Coward | about a year and a half ago | (#42652173)

That seems to be the impression I get from a lot of what he says and some of the critiques of his writings/ramblings about possible futures that rarely come true... He's convinced himself that the "singularity" will be his redemption, that the pace of technology will outpace the aging process and he'll live forever. It's depressing that someone could spend their whole lives deluding themselves in such a manner, but it also makes for some fascinating, if a bit misguided, attempts at philosophizing about the future and technology's role.

Re:Kurzweil's ultimate project is immortality. (1)

MichaelSmith (789609) | about a year and a half ago | (#42652475)

depressing that someone could spend their whole lives deluding themselves in such a manner

Why? I have spent my whole life trying to not die. I would like to continue doing that.

Interesting projects MIT has... (0)

Anonymous Coward | about a year and a half ago | (#42652177)

Boris Katz was working on the START system:

http://start.csail.mit.edu/

Try this: "are palestinians humans? "
The answer: "Sorry, no one has told me if the Palestinians are humans."

Bad approach. (1, Insightful)

jd (1658) | about a year and a half ago | (#42652201)

Both of them.

The human brain doesn't "store" information at all (and thus never processes it). There are four parts to the brain: there's the DNA (which is unique to each cell, according to some researchers), there's the proteins attached to each connection (nobody knows what they do, but they seem to be involved in carrying state information between one generation of synapse and another), there's the synapses themselves (the connectome), and there's the weighting given to each synapse (the conversion between electrical and chemical signals isn't fixed; it varies between each synapse and between different sorts of signal).

None of this involves sensory data, memories, etc. None of that exists anywhere in this system. Memories are synthesized at the time of recall from the meta-data in the brain, but there is nothing in the brain you can point to and call it a memory. Everything is synthesized at time of use and then disposed of. (This is why you can create false memories so easily and why the senses are so easily fooled.)

The brain does not process the senses, either. Nor are the senses distinct - they bleed into each other. The brain is then given a virtual model with all the gaps filled in with generated data. This VR has properties the real world does NOT have, such as simplifications, which enables the brain to actually do something with it. Raw data would be too noisy and too much in flux.

This system creates the illusion of intelligence. We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.

On the other hand, if you want to mimic humans, you need the whole system. One component will give you as much thought as an egg will give you cake. Follow the recipe if you want cake, isolated components will give you nothing useful.

This is all obvious stuff. I can only assume that Google's inferior logic was therefore produced by a computer.

Re:Bad approach. (2)

kllrnohj (2626947) | about a year and a half ago | (#42652323)

Just because that's how a human brain works doesn't mean it's optimal or the best approach. Personally I think an AI that had as bad a memory as I do would be a pretty shitty personal assistant. So I'm rather glad they aren't listening to your "advice", otherwise my computer would become very useless very quickly.

Re:Bad approach. (4, Insightful)

Tablizer (95088) | about a year and a half ago | (#42652383)

there is nothing in the brain you can point to and call it a memory.

Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)

As far as whether following the brain's exact model is the only road to AI, well it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).

Re:Bad approach. (2)

tyrione (134248) | about a year and a half ago | (#42652445)

there is nothing in the brain you can point to and call it a memory.

Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)

As far as whether following the brain's exact model is the only road to AI, well it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).

I'd vote you up if I had points left. The OP is off base in so many areas. I started laughing at the bit about fMRI showing free will doesn't exist.

Re:Bad approach. (2, Interesting)

Anonymous Coward | about a year and a half ago | (#42652443)

We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future

No, we don't know this. Some researchers believe that this might be the case, but it certainly isn't a proven fact. Personally, I think it is a misinterpretation of the data, and that what the fMRI is observing is the process of consciousness.

Re:Bad approach. (2)

aXis100 (690904) | about a year and a half ago | (#42652579)

Amazing how a species that lacks "real-time intelligence" and thus cannot think before acting managed to create a freaking fMRI machine. I guess it's just like those million monkeys with a million typewriters.

Might need to go back to the drawing board on your theories....

So what (1)

phantomfive (622387) | about a year and a half ago | (#42652293)

Every other attempt to create AI has failed, so why should this one be any different?

If it gives us new technology, like many other AI attempts have, then it will be a success.

Not quite (3, Interesting)

ceoyoyo (59147) | about a year and a half ago | (#42652311)

A technology editor at MIT Technology Review says Kurzweil's approach may be fatally flawed based on a conversation he had with an MIT AI researcher.

From the brief actual quotes in the article it sounds like the MIT researcher is suggesting Kurzweil's suggestion, in a book he wrote, for building a human level AI might have some issues. My impression is that the MIT researcher is suggesting you can't build an actual human level AI without more cause-and-effect type learning, as opposed to just feeding it stuff you can find on the Internet.

I think he's probably right... you can't have an AI that knows about things like cause and effect unless you give it that sort of data, which you probably can't get from strip mining the Internet. However, I doubt Google cares.

Re:Not quite (0)

Anonymous Coward | about a year and a half ago | (#42652555)

A few years from now, if you find math Ph.D.s who credit the first half of their education to videos on khanacademy.org, then Ray'll have an existence proof.

Who's doing the teaching? (0)

Anonymous Coward | about a year and a half ago | (#42652317)

"who works oh machines designed to understand language"

Y'know, this kinda works for me . . . (1)

mmell (832646) | about a year and a half ago | (#42652333)

Consider - we just create a rudimentary learning algorithm. Doesn't have to be fast/efficient/perfect, just has to be able to learn as new data is assimilated.

Hook it up to audio/video/other sensors. Give it A/V outs somewhere, perhaps another mechanism or two with which to interact with the world.

Ignore it for eighteen years, the way we do with organic computers (which start out in a roughly similar state).

Let me know when it decides our fate (in a millisecond?).

Re:Y'know, this kinda works for me . . . (1)

mmell (832646) | about a year and a half ago | (#42652401)

I forgot to mention - it has to be recursive - that is, able to design itself and incorporate what it has "learned" into its next iteration.

".. works oh machines .. " (1)

OzPeter (195038) | about a year and a half ago | (#42652381)

Seriously?!?!?

Why do I even bother?

Never mind evolution (0)

Anonymous Coward | about a year and a half ago | (#42652479)

never mind eons of evolutionary experience that's in-built.

Genetic information:
3 billion base pair haploid human genome, 4 letters per base => 4**(3 billion) permutations

Then there's the potentially unfathomable amount of epigenetic information.

Finally, there's our utterly indescribable environment.
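
A back-of-envelope check on the genetic-information line: 4**(3 billion) counts possible genomes, but the information in any one genome is log2 of that, i.e. 2 bits per base, which makes the genome itself a surprisingly small download; the epigenetic and environmental terms above are where the unfathomable part lives.

    import math

    bases = 3_000_000_000
    bits_per_base = math.log2(4)            # 4 letters per base => 2 bits
    total_bytes = bases * bits_per_base / 8
    print(f"{total_bytes / 1e6:.0f} MB")    # ~750 MB for the haploid genome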

Sorry but... (0)

Anonymous Coward | about a year and a half ago | (#42652607)

what's the formal difference between "experiencing the world" and "processing raw information"? From the brain's perspective they're identical.

Cyc vs. bottom up (5, Informative)

Animats (122034) | about a year and a half ago | (#42652619)

We've heard this before from the top-down AI crowd. I went through Stanford CS in the 1980s when that crowd was running things, so I got the full pitch. The Cyc project [wikipedia.org] is, amazingly, still going on after 29 years. The classic disease of the academic AI community was acting like strong AI was just one good idea away. It's harder than that.

On the other hand, it's quite likely that Google can come up with something that answers a large fraction of the questions people want to ask Google. Especially if they don't actually have to answer them, just display reasonably relevant information. They'll probably get a usable Siri/Wolfram Alpha competitor.

The long slog to AI up from the bottom is going reasonably well. We're through the "AI Winter". Optical character recognition works quite well. Face recognition works. Automatic driving works. (DARPA Grand Challenge) Legged locomotion works. (BigDog). This is real progress over a decade ago.

Scene understanding and manipulation in uncontrolled environments, not so much. Willow Garage has towel-folding working, and can now match and fold socks. The DARPA ARM program [darpa.mil] is making progress very slowly. Watch their videos to see really good robot hardware struggling to slowly perform very simple manipulation tasks. DARPA is funding the DARPA Humanoid Challenge to kick some academic ass on this. (The DARPA challenges have a carrot and a stick component. The prizes get the attention, but what motivates major schools to devote massive efforts to these projects are threats of a funding cutoff if they can't get results. Since DARPA started doing this under Tony Tether, there's been a lot more progress.)

Slowly, the list of tasks robots can do increases. More rapidly, the cost of the hardware decreases, which means more commercial applications. The Age of Robots isn't here yet, but it's coming. Not all that fast. Robots haven't reached the level of even the original Apple II in utility and acceptance. Right now, I think we're at the level of the early military computer systems, approaching the SAGE prototype [wikipedia.org] stage. (SAGE was a 1950s air defense system. It had real-time computers, data communication links, interactive graphics, light guns, and control of remote hardware. The SAGE prototype was the first system to have all that. Now everybody has all that on their phone. It took half a century to get here from there.)

This sounds familiar (1)

cheesybagel (670288) | about a year and a half ago | (#42652661)

ambitious plan to build a super-smart personal assistant

Clippy is that you?
