
Jeff Hawkins' Cortex Sim Platform Available

kdawson posted more than 7 years ago | from the build-a-brain-at-home dept.

Science 126

UnreasonableMan writes "Jeff Hawkins is best known for founding Palm Computing and Handspring, but for the last eighteen months he's been working on his third company, Numenta. In his 2005 book, On Intelligence, Hawkins laid out a theoretical framework describing how the neocortex processes sensory inputs and provides outputs back to the body. Numenta's goal is to build a software model of the human brain capable of face recognition, object identification, driving, and other tasks currently best undertaken by humans. For an overview see Hawkins' 2005 presentation at UC Berkeley. It includes a demonstration of an early version of the software that can recognize handwritten letters and distinguish between stick figure dogs and cats. White papers are available at Numenta's website. Numenta wisely decided to build a community of developers rather than trying to make everything proprietary. Yesterday they released the first version of their free development platform and the source code for their algorithms to anyone who wants to download it."


Future Plans (2, Funny)

Anonymous Coward | more than 7 years ago | (#18257980)

Someone needs to put this Cortex Simulator in an 8-legged, hydraulic-actuated, 10 ton spider-machine. If you think that's a crazy idea, you suck.

Re:Future Plans (2, Funny)

Anonymous Coward | more than 7 years ago | (#18258302)

Apparently someone thinks it's a crazy idea. That bastard.

Right... (5, Insightful)

Bugpowda (671725) | more than 7 years ago | (#18258010)

I'm still a bit confused as to how he is so confident that this [numenta.com] is how the neocortex works, given that this is still one of the 23 unsolved problems in systems neuroscience [amazon.com]. But hey, he made a lot of money off Palm, that gives him way more street cred than people who have been working on this problem for their whole lives.

Re:Right... (2, Informative)

not-admin (943926) | more than 7 years ago | (#18258128)

That book was published over a year ago, and a lot can and has changed in that time.

Plus, he's sure because he's proposing a solution to the 'unsolved problem.'

Re: Not one year, seven or eight years (3, Informative)

stephanruby (542433) | more than 7 years ago | (#18258750)

That book was published over a year ago, and a lot can and has changed in that time.

Actually, its content was produced seven or eight years ago.

Its publishing date was "December 2005". But publishers will lie about the publication date of a book if it allows them to sell more books. And in this case, I wouldn't be surprised if the book came out hot off the presses in December 2004 with a postdate of "December 2005".

Furthermore, this book was based on the scientific proceedings of a conference which occurred six years before the book was finally edited (or finally published). I'm actually not sure of the year of the scientific conference itself, because the information supplied to sell the book doesn't give the actual year.

Re:Right... (2, Interesting)

bratgitarre (862529) | more than 7 years ago | (#18258154)

Exactly. He gave a presentation at our university and my impression was that he was quite full of it. I'm not saying he doesn't have a neat algorithm, but his claims were quite, uhm, let's say, "ambitious".

Re:Right... (3, Insightful)

fyngyrz (762201) | more than 7 years ago | (#18258582)

his claims were quite, uhm, let's say, "ambitious".

That is a wonderful thing, though. First of all, claims can be tested. They'll either live up to the description, or they won't. If they don't, another path not to go down in a particular manner has been identified, and that is useful. OTOH, if they are verified, then we may have a key to a form of cognition. Whether it is our kind or not is really not as important as the fact that it is some kind.

Aside from that, I found some very interesting things in his descriptions of the HTM. For instance, I found the following precise description of enabling religious behavior: First, he describes how HTMs handle specific, non-overlapping domains (and of course this doesn't mean that another HTM can't relate those to each other.) One might handle financial markets, another speech, another cars. Then he says "After initial training, an HTM can continue to learn or not" Emphasis mine. So you can set up an HTM in a learning situation where you limit the input to descriptions consisting of sensory data of any arbitrarily limited set of patterns you like, get it to see the world represented by those patterns as you wish, and then disable learning for that particular HTM. Other HTMs can continue to learn, but that one is "frozen." Sounds like the perfect recipe for a priest or supplicant to me. Does that not sound like the very core definition of "unshakable faith"?
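As an aside, the learning-on/learning-off behavior being discussed here is easy to sketch in code. The toy below is purely illustrative (a nearest-pattern memorizer with a learning switch) and has nothing to do with Numenta's actual implementation:

```python
# Toy illustration of an online learner with a learning switch.
# This is NOT Numenta's HTM -- just a nearest-pattern memorizer
# that can be "frozen" after initial training.

def hamming(a, b):
    """Count positions where two equal-length bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

class FrozenableMemorizer:
    def __init__(self):
        self.patterns = []      # stored training patterns
        self.learning = True    # the on/off switch being discussed

    def observe(self, pattern):
        # While learning is enabled, novel patterns are absorbed.
        if self.learning and pattern not in self.patterns:
            self.patterns.append(pattern)

    def recognize(self, pattern):
        # Always answers with the closest *known* pattern -- once frozen,
        # new inputs can only be seen through the old categories.
        return min(self.patterns, key=lambda p: hamming(p, pattern))

m = FrozenableMemorizer()
m.observe((1, 1, 0, 0))
m.observe((0, 0, 1, 1))
m.learning = False          # "freeze" the node
m.observe((1, 0, 1, 0))     # ignored: no longer learned
print(m.recognize((1, 0, 1, 0)))  # -> (1, 1, 0, 0)
```

Once frozen, the memorizer forces every novel input into one of its previously learned categories, which is the behavior being compared to "unshakable faith" above.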

For all the doubt being thrown this fellow's way, you know, eventually someone will come up with something like this and it will be a working model of such a system. It's a tough problem, very abstract and requiring a lot of insight, but as with all problems discovered to date where we can actually get our hands on the system under study, there is no indication that any part of it exists in any way outside the sphere of nature and the natural rules we already know - and we know a lot of basic rules.

Kudos to him for sinking his teeth into the problem, for coming up with results that can be tested, and for letting them loose into the world for such testing. If he's wrong, he's helping. If he's right - he's going to be mentioned in the same breath with a lot of very important people for a very, very long time to come.

Re:Right... (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18259394)

Aside from that, I found some very interesting things in his descriptions of the HTM. For instance, I found the following precise description of enabling religious behavior: [standard description of training a classification algorithm on data] Sounds like the perfect recipe for a priest or supplicant to me. Does that not sound like the very core definition of "unshakable faith"?

No, it sounds more like you should share whatever it is you've been smoking...

What you've described applies equally well to, say, Fisher's Linear Discriminant. [ucsc.edu] You optimize the algorithm on a set of data points, and then you can apply it to some other data if you like. If you think that's an adequate model for human reasoning and consciousness, then maybe you are the one with the strange beliefs...
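For reference, Fisher's Linear Discriminant really is just "optimize on one set of points, then apply unchanged to others." A minimal numpy sketch of the generic textbook method (not any particular library's API):

```python
import numpy as np

# Minimal Fisher's Linear Discriminant sketch: fit a projection on one
# set of points, then apply it, frozen, to new data.

def fit_fld(X0, X1):
    """Return projection vector w and decision threshold for two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0   # midpoint of projected class means
    return w, threshold

def predict(w, threshold, X):
    return (X @ w > threshold).astype(int)

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.3, size=(50, 2))   # class 0 cluster
X1 = rng.normal([3, 3], 0.3, size=(50, 2))   # class 1 cluster
w, t = fit_fld(X0, X1)
# Apply the frozen discriminant to fresh points:
print(predict(w, t, np.array([[0.1, -0.2], [2.9, 3.2]])))  # -> [0 1]
```

Train, freeze, apply: exactly the same shape of procedure, with no religion anywhere in sight.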

Re:Right... (1)

fyngyrz (762201) | more than 7 years ago | (#18259442)

Oh, I do have strange "beliefs", if you'd measure them, as most would, by comparing them to the majority outlook. In fact, I try not to have any at all, preferring a confidence-based outlook derived from consensual evidence. So my beliefs... yes, strange or non-existent. You're certainly spot-on about that. :-) The rest, not so much. But you are certainly welcome to your opinion; there's no rule that I know of that says you have to be correct in order to speak out.

Re:Right... (3, Interesting)

Christianson (1036710) | more than 7 years ago | (#18260082)

Caveat: I am a neuroscientist. I am not familiar with the works of Mr. Hawkins.

That is a wonderful thing, though. First of all, claims can be tested. They'll either live up to the description, or they won't.

Most "grand-scale theories of brain operation", in fact, fail to make claims that can be tested, at least not in the foreseeable future. They predict the large-scale algorithms by which the brain operates. They do not make any claims as to the behavior of any individual neurons, and this is the data we have to work with. Moreover, these theories generally fail to provide any explanation for existing data, such as the diversity in neuronal phenotypes, the connectivity architecture, functional segregation, the wealth of neurotransmitters, laminar structure and why the details of this structure vary across the neocortex, differences in histochemical labelling, and so on and so forth. In short, these theories tend to be computer science, and not neuroscience. They might represent major progress in the question, "how do we make a machine that can solve a difficult computational problem?" but they have very little significance in answering the question, "what are the principles that underlie neural performance?"

Aside from that, I found some very interesting things in his descriptions of the HTM. For instance, I found the following precise description of enabling religious behavior: First, he describes how HTMs handle specific, non-overlapping domains (and of course this doesn't mean that another HTM can't relate those to each other.) One might handle financial markets, another speech, another cars. Then he says "After initial training, an HTM can continue to learn or not" Emphasis mine. So you can set up an HTM in a learning situation where you limit the input to descriptions consisting of sensory data of any arbitrarily limited set of patterns you like, get it to see the world represented by those patterns as you wish, and then disable learning for that particular HTM. Other HTMs can continue to learn, but that one is "frozen." Sounds like the perfect recipe for a priest or supplicant to me. Does that not sound like the very core definition of "unshakable faith"?

Not really. The ability to stop learning is a crucial element of learning. In the existing computational literature, this is related to the problem of overfitting: there comes a point where additional learning is dominated by attempts to explain noise in the data, and can actually lead to degraded performance. A classic example of "frozen learning" in living animals is the male zebra finch, who learns only one song in his life, which remains unchanged past adolescence. Anecdotally, you can also think of human accents in speech; most people never lose the accents they develop in childhood, no matter how hard they might try. Of course, both examples illustrate that it is not strictly a matter of stopping learning; learning can in fact still occur, but much more slowly and only under much more extreme conditions.
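The overfitting point is easy to see numerically. A quick numpy sketch with toy data (the degrees and noise level are illustrative, nothing more): fit noisy samples of a line with polynomials of increasing degree, and past some point the extra "learning" just models the noise.

```python
import numpy as np

# Overfitting sketch: fit noisy samples of a straight line with
# polynomials of increasing degree. A high-degree fit chases the noise
# and does *worse* on held-out points than the simple linear fit.

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 12)
x_test = np.linspace(0.04, 0.96, 12)        # held-out evaluation points
true = lambda x: 2 * x + 1
y_train = true(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy samples
y_test = true(x_test)                        # noise-free targets

def mse(deg):
    """Held-out mean squared error of a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, deg)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"degree 1 test error: {mse(1):.4f}")
print(f"degree 9 test error: {mse(9):.4f}")  # worse: it learned the noise
```

Stopping the learning earlier (here, at a lower degree) is what keeps the model honest, which is the computational analogue of the point above.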

From the perspective of neuroscience (and in fact, from the unsupervised learning perspective in general), given that there are lots of models that can learn and stop learning, the much more relevant question is: how can the system switch between these two states?

For all the doubt being thrown this fellow's way, you know, eventually someone will come up with something like this and it will be a working model of such a system. It's a tough problem, very abstract and requiring a lot of insight, but as with all problems discovered to date where we can actually get our hands on the system under study, there is no indication that any part of it exists in any way outside the sphere of nature and the natural rules we already know - and we know a lot of basic rules.

Is the problem a supernatural one? Of course not. It is a very tough problem. The issue is not, at this point, a lack of theory. There is an immense computational literature. The problem is getting data. We don't know a lot of the basic rules of system neuroscience. We're not yet entirely sure there are any, and the whole brain might be an inconsistently jury-rigged hack.

The community of experimental neuroscientists tends to look on work on "large scale theories of the brain" in much the same way as physicists regard philosophers who use quantum mechanical principles to explore the meaning of reality: it might be very interesting for their field, but it's of little use to us. These computational philosophers, I think, skirt very close to the Platonic fallacy, and do a great disservice to the much larger community of theoretical neuroscientists who work in concert with the data to try and extract the principles that underlie the brain that actually exists.

Re:Right... (1)

lukesl (555535) | more than 7 years ago | (#18263794)

The community of experimental neuroscientists tends to look on work on "large scale theories of the brain" in much the same way as physicists regard philosophers who use quantum mechanical principles to explore the meaning of reality: it might be very interesting for their field, but it's of little use to us.

I'm an experimental neuroscientist, and I think that's a little harsh. I think people who work at the level that Hawkins does (not necessarily him in particular) can provide things that are useful and interesting to us. It's a lot like Hodgkin and Huxley--if there hadn't been work to show that action potentials occurred, they wouldn't have figured out what ion channels were doing. And the action potential is a lot less complicated than ion channels are, considering that the structures have just recently been solved, and rigorous biophysical models of channel function still have not been formulated. Similarly, I think what individual cortical columns or whatever are doing is going to turn out to be much less complicated than the actual neurophysiological substrate underlying it. So I think it's important to try to figure out what brains are doing, not just how they do it. Also, trying to figure out how something works is a lot easier if we have an idea of what it's supposed to do.

Re:Right... (0)

Anonymous Coward | more than 7 years ago | (#18260112)

You're right! Not only does this explain religion, it explains all the times I fail to convince someone of something! Their brains are obviously hard-wired!

captcha: reproach. How appropriate.

Re:Right... (1)

Huggs (864763) | more than 7 years ago | (#18260836)

Sounds like the perfect recipe for a priest or supplicant to me. Does that not sound like the very core definition of "unshakable faith"?

If by "unshakable faith" you mean the ability to do the right thing 100% of the time without ever thinking to do the wrong thing, then I believe you're correct...

However, if by that term you mean the ability to choose to do the right thing, despite strong opposition from the surrounding forces of culture and peer pressure, and against the very nature that corrupts the spirit we have, and to choose to follow, given the same circumstances, a supreme being... I'm going to have to say you're just a little off.

"Faith" from an artificial being is simply action, which on the surface may look good, but when comes down to intent is nothing more than doing the only things it knows to do.

Despite my disagreements with this in the religious realm, however, I can see something like this happen among certain groups who no longer have, among themselves, a desire for self sacrifice. What better solution than to make an artificial being capable of doing their task for them... it's rather abominable, I say.

Re:Right... (1)

toad3k (882007) | more than 7 years ago | (#18262882)

The problem is that you are assuming there is some difference between the knowledge you hold about science and the knowledge a religious person knows about religion.

The same principle that prevents him from converting to another religion (faith) is the same idea that prevents you from suddenly believing that the sun revolves around the earth.

Given that you cannot at any given moment detect the true center of the solar system, you realize you are and always have been taking it as an article of faith that what your teachers/professors/textbooks told you is the truth while the religious person is taking it as an article of faith that what his teachers/priests/tomes told him is the truth.

When it comes down to it, about 99.9% of all the academic knowledge you hold is based on faith, which is why people can hold such a wide variety of beliefs about its true nature all at the same time.

Re:Right... (2, Insightful)

ougouferay (981599) | more than 7 years ago | (#18258824)

Numenta's goal is to build a software model of the human brain capable of face recognition, object identification, driving, and other tasks currently best undertaken by humans.


Surely we have plenty of humans available to do tasks 'currently best undertaken by humans' :)

Seriously though... while it might be useful to develop AI systems in this area as timesaving devices, the examples given above aren't really in that category - IMO AI research could be better applied to tasks humans can't achieve so easily (and maybe provide an insight into why that is the case) - I guess I just don't buy into the whole 'we can make something just like a human - but that isn't one' view of AI.

Re:Right... (1)

jtnw (781334) | more than 7 years ago | (#18259182)

Seriously though... while it might be useful to develop AI systems in this area as timesaving devices, the examples given above aren't really in that category - IMO AI research could be better applied to tasks humans can't achieve so easily (and maybe provide an insight into why that is the case) - I guess I just don't buy into the whole 'we can make something just like a human - but that isn't one' view of AI.

If you WTFV (Watched The F*cking Video), Hawkins explains that what's useful is not putting this algorithm in human situations, but feeding it data from specialized inputs. For example, a weather predictor is fed wind velocity/direction, perhaps temperature, and whatever sensors could be useful; over time, the algorithm will be able to recognize patterns and make predictions.

Now this is nothing a human can't do... If your job was to watch visualizations of global weather patterns, in a year's time you would certainly be able to recognize seasons, hurricanes, etc...
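That "feed it a stream, let it find patterns, then predict" loop can be sketched with something far dumber than an HTM - a first-order successor-frequency model (entirely illustrative, not Numenta's algorithm):

```python
from collections import Counter, defaultdict

# Sketch of "feed it a stream, let it learn patterns, then predict":
# a first-order model that counts which observation follows which.
# Vastly simpler than an HTM, but the same train-then-predict loop.

class SequencePredictor:
    def __init__(self):
        self.follows = defaultdict(Counter)

    def train(self, stream):
        for prev, nxt in zip(stream, stream[1:]):
            self.follows[prev][nxt] += 1

    def predict(self, current):
        # Most frequently observed successor of the current observation.
        return self.follows[current].most_common(1)[0][0]

# Toy "seasons" stream: the model recovers the cycle.
p = SequencePredictor()
p.train("spring summer autumn winter ".split() * 10)
print(p.predict("summer"))   # -> autumn
```

Swap the season symbols for discretized sensor readings and you get the (very rough) flavor of the weather-predictor example.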

I do, however, agree that we have plenty of humans available to do tasks 'currently best undertaken by humans' though. ;)

jtnw

Re:Right... (1)

bcrowell (177657) | more than 7 years ago | (#18258782)

I thought On Intelligence was a very interesting book, but by the time I was done reading, I wasn't convinced by some of the strong opinions he expressed. It's an interesting insight to think of the human brain as essentially a pattern recognition device, and it's interesting to know that information flows in both directions, e.g., not just from the eyes to the brain but backward from the brain toward the eyes. OTOH, he has this abiding faith that the neocortex is made out of modules that are all interchangeable, and for the life of me, I never understood why he was so convinced of this assertion. Any comments from people with expertise in this area?

Re:Right... (2, Informative)

fyngyrz (762201) | more than 7 years ago | (#18259142)

Any comments from people with expertise in this area?

Yes; his reasoning is laid out in the beginning of this document. [numenta.com] The thinking seems quite reasonable to me, as far as it goes. AI is my area of research.

Re:Right... (0)

Anonymous Coward | more than 7 years ago | (#18260228)

I can't understand how anyone can consider it an insight whatsoever.
That the mind is a pattern recognition device (among other things), is practically common sense.
I'd never heard of this person until reading this article, but so far everything I'm seeing and reading about it makes me think "redundant" and not much else.

Re:Right... (0)

Anonymous Coward | more than 7 years ago | (#18258900)

If we modelled systems as they occur in nature, why don't aeroplanes have flapping wings?

Re:Right... (1)

fyngyrz (762201) | more than 7 years ago | (#18259156)

Google "Ornithopter".

Re:Right... (5, Interesting)

Walt Dismal (534799) | more than 7 years ago | (#18259398)

I've been working for some time on technology with a hierarchical NN architecture like Hawkins' HTM, but mine in part involves SIPO FIFOs with attached neural networks, and the outputs of the NNs go to the next layer's SIPO+NNs, and so on up the chain. It's intended to extract meaning from symbol flow over time - like speech primitives into language. Hawkins embeds temporal symbol handling in each HTM layer in a different way. Both of us are trying to emulate some of the processing the neocortex does, but I am less concerned with matching the brain closely and more concerned with outperforming the limitations of the brain. I believe there are classes of problems his architecture will solve, but it can't handle others. There's lots of room for people to explore what his technology can do, and I expect it will work well for some things.
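Something like the layered SIPO-FIFO scheme the parent describes can be sketched in a few lines (all names and the two-symbol windows here are made up for illustration; a dictionary lookup stands in for each layer's neural network):

```python
from collections import deque

# Sketch of a layered SIPO-FIFO + classifier scheme: each layer shifts
# symbols into a fixed-length window and, when it recognizes the window,
# emits a higher-level symbol to the next layer up the chain.

class Layer:
    def __init__(self, width, codebook):
        self.fifo = deque(maxlen=width)   # serial-in, parallel-out window
        self.codebook = codebook          # window contents -> output symbol

    def push(self, symbol):
        self.fifo.append(symbol)
        return self.codebook.get(tuple(self.fifo))  # None until recognized

# Layer 1 turns character pairs into "words"; layer 2 turns word pairs
# into a higher-level symbol -- meaning extracted from symbol flow over time.
l1 = Layer(2, {("h", "i"): "hi", ("y", "o"): "yo"})
l2 = Layer(2, {("hi", "yo"): "greeting"})

out = []
for ch in "hiyo":
    word = l1.push(ch)
    if word is not None:
        phrase = l2.push(word)
        if phrase is not None:
            out.append(phrase)
print(out)   # -> ['greeting']
```

The real architecture would of course learn its codebooks rather than hard-code them, but the shift-register-feeding-a-classifier-per-layer structure is the part being described.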

Life's work (2, Funny)

CarpetShark (865376) | more than 7 years ago | (#18259522)

But hey, he made a lot of money off Palm, that gives him way more street cred than people who have been working on this problem for their whole lives.


Some people spend their entire adult lives trying to overcome alcohol addiction, or trying not to beat their spouse. To others, it comes naturally.

Why (1)

Timesprout (579035) | more than 7 years ago | (#18258018)

Numenta wisely decided to build a community of developers rather than trying to make everything proprietary.
It's been my experience that the most brilliant people have a fiduciary target at some point, and it's oft quoted here that the best are those who love it and do it for the pleasure, rewards aside. Recent studies regarding funding of the kernel would bear out my point. Personally I feel a core of dedicated staff with external input will yield the best results (à la Firefox), but this is not open per se.

Word usage (I'm a grammar cop). (0)

Anonymous Coward | more than 7 years ago | (#18261740)

"Fiduciary" on whose behalf? Please, look the word up. It's commonly misused, so don't feel bad.

I think you meant "fiscal" or simply "financial".

If you can't take the time to look it up, remember this: "fiscal" has to do with money, while "fiduciary" has to do with acting on someone's behalf or in their trust.

The reason someone has a "fiduciary" responsibility to shareholders is that the company officers work in the interests of the shareholders. At least they are supposed to -- that's the point of fiduciary responsibility -- it's so important to actually be responsible to people that there should be trustworthy people involved.

Other examples of fiduciary arrangements include trust funds managers, real estate brokers, attorneys at law, executors of estates, auto mechanics, computer security analysts, and police. These people work in the trust of their clients (or the public) for the supposed benefit of their clients (or the public).

This will cause problems (1, Insightful)

TubeSteak (669689) | more than 7 years ago | (#18258020)

I read that and thought "a new, more advanced algorithm for breaking CAPTCHAs"

Re:This will cause problems (2, Interesting)

Anonymous Coward | more than 7 years ago | (#18258496)

Heh, is this some new Slashdot joke I've missed? Somebody proposes a quantum leap in artificial intelligence and you're worried that it'll be able to crack CAPTCHAs? CAPTCHAs are not all that important, you know, and most of the ones in use can already be easily broken with a targeted script (the Slashdot one is definitely an example of this).

Re:This will cause problems (0, Redundant)

Maian (887886) | more than 7 years ago | (#18258510)

It could also lead to "a new, more advanced algorithm for blocking spam" too. Intelligence goes both ways :)

Re:This will cause problems (0)

Anonymous Coward | more than 7 years ago | (#18258910)

I read that and thought "a new, more advanced algorithm for breaking CAPTCHAs"

More advanced than copying the CAPTCHA to your own site with the heading "Free pr0n, just enter the letter sequence below"?

Re:This will cause problems (2, Interesting)

cheater512 (783349) | more than 7 years ago | (#18259338)

I've broken many captchas using small PHP scripts to de-mangle the image (GD) and standard free open OCR software.
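For the curious, the two-step pipeline (de-mangle the image, then OCR it) looks roughly like this in Python - with toy 5x5 glyphs and a speckle filter standing in for GD and real OCR software:

```python
# Toy sketch of a CAPTCHA-breaking pipeline:
# 1) de-mangle: strip isolated noise pixels, 2) OCR: nearest template.
# Glyphs here are illustrative 5x5 bitmaps, not a real CAPTCHA.

TEMPLATES = {
    "T": ["#####", "..#..", "..#..", "..#..", "..#.."],
    "L": ["#....", "#....", "#....", "#....", "#####"],
}

def to_bits(rows):
    return [[1 if c == "#" else 0 for c in row] for row in rows]

def denoise(img):
    """Clear set pixels with no set 4-neighbour (isolated speckle noise)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                neighbours = sum(img[y + dy][x + dx]
                                 for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                 if 0 <= y + dy < h and 0 <= x + dx < w)
                if neighbours == 0:
                    out[y][x] = 0
    return out

def ocr(img):
    """Return the template letter with the fewest differing pixels."""
    def diff(letter):
        return sum(a != b for ra, rb in zip(img, to_bits(TEMPLATES[letter]))
                   for a, b in zip(ra, rb))
    return min(TEMPLATES, key=diff)

# A "T" with two speckles of mangling noise added:
noisy_T = to_bits(["#####", "..#..", "..#.#", "..#..", "#.#.."])
print(ocr(denoise(noisy_T)))   # -> T
```

Real CAPTCHA mangling (warping, overlapping lines) takes more cleverness than a speckle filter, but the structure of the attack is the same.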

Re:This will cause problems (1)

Slashamatic (553801) | more than 7 years ago | (#18260832)

Maybe a better method for cracking image attachment spam. What is good for one is also good for the other.

it has to be said (1, Redundant)

6 (22657) | more than 7 years ago | (#18258054)

I for one welcome our open source neocortical robot overlords...

Re:it has to be said (1)

lilomar (1072448) | more than 7 years ago | (#18258172)

Anyone ever read The Singularity is Near by Ray Kurzweil? Scary stuff, and surprisingly plausible. He argues very well, and I always admire that in a sensationalist. (If you are wondering what this has to do with the discussion, this neocortex thing is right on schedule from his timeline.)

Books (1)

Garrett Fox (970174) | more than 7 years ago | (#18260734)

I flipped through that (several times!) and found it interesting, but then I ended up buying Douglas Hofstadter's Fluid Concepts and Creative Analogies instead. It's a follow-up to the philosophical Gödel, Escher, Bach, in which his research group tries to model creativity using computers. His general technique is different from neural network modeling. Steven Pinker's How the Mind Works is also interesting, and well-written.

I hang out, via e-mail, with people involved in the Loebner Prize Contest, so I have kind of a skewed view of AI. The people there focus on the "chatterbot" approach, descendants of ELIZA, and some of them actually think that's a good model of intelligence. (They're wrong.) I'd like to see some kind of open-source AI project, but what I know of the existing ones is that each backer has their own fixed theory of the mind, and the confusion of programming languages and other details make it hard to coalesce around any one idea.

High-Quality Video Link (5, Informative)

overeduc8ed (799654) | more than 7 years ago | (#18258102)

High-quality versions of Jeff Hawkins' talk at UC Berkeley are available here [archive.org].

Enter the Matrix (2, Funny)

mastershake_phd (1050150) | more than 7 years ago | (#18258108)

NEOcortex - Begin the Matrix jokes/analogies now...

Re:Enter the Matrix (1)

The Orange Mage (1057436) | more than 7 years ago | (#18258230)

Okay, let's start by adding Kung Fu to the list of "tasks best undertaken by humans."

IQ Captchas (0)

Anonymous Coward | more than 7 years ago | (#18258134)

"Yesterday they released the first version of their free development platform and the source code for their algorithms to anyone who wants to download it."

And can understand it.

Re:IQ Captchas (1)

ACE209 (1067276) | more than 7 years ago | (#18259894)

You can't read code fluently? What are you doing on slashdot? ;)

drawing recognition (1)

Speare (84249) | more than 7 years ago | (#18258150)

It includes a demonstration of an early version of the software that can recognize handwritten letters and distinguish between stick figure dogs and cats.

Yeah, but can it distinguish the invention of PalmOS Graffiti from the invention of PARC Unistroke? That would have been handy...

Software you can really get into... (1)

RyanFenton (230700) | more than 7 years ago | (#18258152)

This sounds REALLY cool. Even if all it amounts to is a set of computationally-expensive toys, it's still the basis for being able to boil down the essentials and costs of self-learning systems. That, and perhaps a stepping stone to the Hitchhiker's Guide-like "real people personalities".

Then, of course, there's always the dream of eventually being able to really 'get into the code' and debug it from the inside, leading to the Soviet joke where "the code debugs you."

Ryan Fenton

Barrier to entry (1)

Lord_Dweomer (648696) | more than 7 years ago | (#18258180)

As a geek who lacks the advanced education in this field...it is unfortunate that the barrier to entry for people looking to contribute is so high. I wish there were a way us "laymen" could assist even though we might not have the technical knowledge to do so. Can anybody suggest some methods by which I might help? What about some good "entry level" reading material on the subject?

Re:Barrier to entry (5, Insightful)

RyanFenton (230700) | more than 7 years ago | (#18258240)

Don't be so afraid of complexity - Slashdotters make fun of themselves for diving into things uneducated (not reading the articles, not RTFM), but really, the only way to cope with a landscape as informationally complex as computing is to sometimes just go in unprepared, be willing to make mistakes, and ask stupid questions.

Not so much dare to be stupid, but rather the Socratic: don't be afraid of exposing your own ignorance - don't lose your opportunity to learn merely because you're embarrassed by people thinking you dumb while you take your first few steps in a new landscape.

But do take notes and research the small topics you are uncertain of after your first adventure into to the topic. Perhaps you'll need to learn a bit about XML/XSL, perhaps you'll need to find out the anatomy of a nerve cell to understand some explanations. If nothing else though - get into it because it is a fun adventure and a lot of cool stuff to learn.

Ryan Fenton

Re:Barrier to entry (1)

stephanruby (542433) | more than 7 years ago | (#18259608)

"Not so much dare to be stupid, but rather the Socratic: don't be afraid of exposing your own ignorance - don't lose your opportunity to learn merely because you're embarrassed by people thinking you dumb while you take your first few steps in a new landscape."

I agree completely.

The same goes for those of us who may already hold some kind of expertise in one area. Every time we explore a new area, we must allow ourselves to start from scratch over and over again. I'm often reminded of what George Leonard said in his book "Mastery": in martial arts, the true master is the black belt who's willing to forgo his previous honors and wear a white belt every time he's learning a new art.

Re:Barrier to entry (4, Informative)

Wagoo (260866) | more than 7 years ago | (#18258336)

Before this was implemented in code, Hawkins published a book called "On Intelligence". You could do worse than starting by reading through that.

He's also done some lectures available on Google Video [google.com] .

Re:Barrier to entry (2, Interesting)

wrook (134116) | more than 7 years ago | (#18258396)

I haven't really looked into what they are doing specifically, so I can't really comment on that stuff. But I've done some work in neural networks (actually 15 years ago -- I'm sure the state of the art has completely passed me by ;-) ).

If you are interested in the field of AI with neural-like computing, your best bet is to learn a huge amount of math. Really, you can't understand anything without knowing at least second-year linear algebra. That's if you just want to basically understand what's going on. If you actually want to contribute, you're going to need a math degree.

This might sound like I'm discouraging you. I'm not. I just want you to understand what you're up against. You can definitely do some toy problems with neural network packages out there. You don't really need to understand what you are doing. But if you actually want to contribute, you don't really start here. You've got to do your basics and get your math chops up.

As for what you should read... Get some basic undergrad linear algebra books. A google search gave me this link:

http://joshua.smcvt.edu/linearalgebra/ [smcvt.edu]

Which looks like it will pretty much give you everything you need for the basics of neural networks (without the neural network part ;-) ). You're also going to need some basic calculus. A quick search didn't show me any introductory free books online, but that doesn't mean they don't exist. However, calculus is used for everything, which means you should be able to find used books almost for free...

Once you've got that down (and maybe you already do if you've got a math or CS degree), you can start reading some basic neural network material. The wikipedia entry for perceptron:

http://en.wikipedia.org/wiki/Perceptron [wikipedia.org]

seems pretty good and should give you a start on how neural networks (very, very simple ones) work. It gets surprisingly complicated from there :-) There are some decent introductory texts (aimed at grad students) on the subject, but I'm afraid I'm well out of the loop now.
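As a taste of how simple the starting point is, here's a minimal single perceptron learning the AND function (a sketch only; the learning rate, epoch count, and training data are arbitrary illustrative choices):

```python
# Minimal perceptron learning the AND function.
# Learning rate, epochs, and data are illustrative choices.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold unit: fire if the weighted sum exceeds zero.
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so a single perceptron can learn it
# (unlike XOR, which needs more than one layer).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    print(x, pred)
```

Swap the data for XOR and it never converges, which is exactly where the "surprisingly complicated from there" part begins.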

Things can get pretty harried math-wise once you start getting into learning algorithms. That's because you are basically trying to do a minimization with *a lot* of variables. It's not surprising that most of the innovative algorithms actually come from physics (well, in my day anyway... probably things have changed...), since physicists need to do the same kind of minimization when modeling things. This is where you get into the scary math with multivariate calculus and such... Way out of my league (I suck at math...)
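That minimization is, in its simplest form, just gradient descent: follow the negative gradient downhill. A toy two-variable sketch (the function and step size here are purely illustrative; real networks descend over thousands of weights):

```python
# Toy gradient descent on f(x, y) = (x - 3)^2 + (y + 1)^2,
# whose unique minimum sits at (3, -1). Step size and iteration
# count are illustrative; they are not tuned for anything real.

def grad(x, y):
    # Partial derivatives of f with respect to x and y.
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0
step = 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x -= step * gx
    y -= step * gy

print(round(x, 3), round(y, 3))  # converges near (3.0, -1.0)
```

With two variables this is trivial; with millions, and a non-convex surface full of local minima, you get the whole zoo of tricks the learning-algorithms literature is about.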

Of course there are other forms of AI. But if you are trying to model the way the brain works, you need lots of math...

Multivariate linear algebra: try chemometrics (1)

fritsd (924429) | more than 7 years ago | (#18263902)

If you need help with multivariate (non-)linear algebra, I can strongly recommend trying out books from chemistry and psychology (or more specifically, chemometrics and psychometrics) because these are basically somewhat "dumbed-down" descriptions of the most common algorithms to study large datasets.
I say "dumbed-down" in the nicest possible sense, in that they focus on solving practical problems in industry and laboratory, as opposed to rigorous statistical proofs as to why these algorithms work.
Well, it helped me, at least (many years ago). YMMV. Frits.

Re:Barrier to entry (0)

Anonymous Coward | more than 7 years ago | (#18258606)

I suggest reading up on Bayesian networks, since HTMs are based on a special variation of them. Also read up on Python programming, because the platform uses some of that.

There's also a manual for using this at Numenta's website: www.numenta.com

I'm with you here. I want to contribute, but it's a bit over my head right now.

Re:Barrier to entry (1)

Garrett Fox (970174) | more than 7 years ago | (#18260852)

In the AI field, nobody has actually proven they know what they're doing yet, so you can't be too far behind!

As an amateur programmer who's dabbled with AI and game design, some things I found helpful and interesting were:

-Steven Pinker's How the Mind Works and The Language Instinct (readable and entertaining)
-Douglas Hofstadter's Godel, Escher, Bach (brilliant but baffling, touching on a lot of topics, some of which are worth skipping over; not the best book to pick up lightly)
-Hofstadter's Fluid Concepts and Creative Analogies (more AI-focused, describing how you might model creativity on a computer)
-Chapman's Vision, Instruction and Action (obscure book by an MIT student who built an AI to play a Gauntlet-like game)

-Playing with the Python [python.org] programming language, which is free, multi-platform, easy to learn and use, and has a good game library (Pygame) and a developer community. Look for the free online book "Dive Into Python" as one guide, or just start playing. Why use a super-efficient macho language at this point when the current limitation on AI isn't raw speed?
-Looking up a few of the famous real AIs and thinking about their limitations: ELIZA, ALICE, Cyc, and SHRDLU for instance. A version of Hofstadter's "Metacat" is available online along with the others, I think. Also look into real robots like Stanley, RoboSapien, Kismet, Cog, and Qrio, which may change your perception of what a robot can be like.
-Writing fiction! How do you think an AI should work? How would it deal with real-world problems? Reading SF is good food for thought too; what do you think of Asimov's Laws?
-Joining the Robitron [yahoo.com] discussion list for talk about AI by people associated with the Loebner Prize Contest, though often from a perspective I disagree with.

Let me know if you do get into this! Even if you don't build anything yourself, that material will help tell you what's been going on in the field and some of the ongoing debates.

System Requirements (2, Insightful)

cmacb (547347) | more than 7 years ago | (#18258258)

How unusual to see software that will run on OS X or Linux, but there is no Windows version. Shape of things to come I hope.

Re:System Requirements (1)

Urusai (865560) | more than 7 years ago | (#18258416)

Not so unusual in academia; they still run UNIX not because it's the latest fad but because they haven't upgraded their expertise from the '70s.

Re:System Requirements (0)

Anonymous Coward | more than 7 years ago | (#18258654)

More to the point, it works well and does what they need. Neither can be said of Windows.

Download the source (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18258324)

NuPIC Algorithms Source License

You must be logged in to download this software.
I hate how many sites do this. I'm not going to give you any valid information and I'll probably never return to your site. Making me create an account, which I will only use once, is very annoying and totally pointless. Just let me download the source code without all the extra bullshit.

Re:Download the source (0)

Anonymous Coward | more than 7 years ago | (#18258408)

You need to check out http://www.fakenamegenerator.com/ [fakenamegenerator.com] .

Here is an example (all information is fictional):
Emily F. Garza
1587 McDowell Street
Nashville, TN 37238

Email Address: Emily.F.Garza@spambob.com [spambob.com]

Phone: 931-299-3591
Mother's maiden name: Mennier
Birthday: August 12, 1950

of course, you might want to use a disposable e-mail like pookmail.com so you can retrieve any e-mail they might send.

Confidentiality agreement a killer (5, Informative)

else58 (529671) | more than 7 years ago | (#18258412)

The download license looked fine until the Confidentiality paragraph. Does it really say that anything I learn from Numenta is confidential property of Numenta?

Confidentiality. 1. Protection of Confidential Information. You agree that all code, inventions, algorithms, business concepts, workflow, ideas, and all other business, technical and financial information, including but not limited to the HTM Algorithms, HTM Algorithms Source Code, and HTM Technology, that you obtain or learn from Numenta in connection with this Agreement are the confidential property of Numenta (Confidential Information). Except as authorized herein, you will hold in confidence and not use, except as permitted or required in the Agreement, or disclose any Confidential Information and you will similarly bind your employees in writing. You will not be obligated under this Section 6 with respect to information that you can document: (i) is or has become readily publicly available without restriction through no fault of you or your employees or agents; or (ii) is received without restriction from a third party lawfully in possession of such information and lawfully empowered to disclose such information; or (iii) was rightfully in your possession without restriction prior to its disclosure by Numenta; or (iv) was independently developed by your employees or consultants without access to such Confidential Information.

Re:Confidentiality agreement a killer (1)

dr.badass (25287) | more than 7 years ago | (#18259946)

Does it really say that anything I learn from Numenta is confidential property of Numenta?

No. That line refers to anything you get from the company. Note that it doesn't say "and" in front of "anything you obtain..." -- it's referring to the same "HTM Algorithms, HTM Algorithms Source Code, etc." described before. It's definitely not referring to anything you learn by using it.

It's pretty easy to misread, I admit.

Re:Confidentiality agreement a killer (1)

psmears (629712) | more than 7 years ago | (#18260612)

I respectfully disagree:

The agreement starts like this:

[...] You agree that all [code, ideas, ...] and all other [information] including but not limited to [HTM stuff], that you obtain or learn from Numenta in connection with this Agreement are the confidential property of Numenta (Confidential Information).

I can't read that any other way than agreeing that anything you learn from them is their confidential property (regardless of whether the information is in the public domain, is patented/copyrighted by someone else, etc). That said, IANAL but I would guess that such an overly-broad clause in a contract would be very hard to enforce...

Re:Confidentiality agreement a killer (1)

dr.badass (25287) | more than 7 years ago | (#18261084)

(regardless of whether the information is in the public domain, is patented/copyrighted by someone else, etc).

They aren't providing either of these under that Agreement.

Re:Confidentiality agreement a killer (1)

Someone (12196) | more than 7 years ago | (#18262008)

It says that everything

that you obtain or learn from Numenta
is their property, where Numenta is the corporation, not the technology. It would seem not to cover derived discoveries: i.e., I assume that if you discover the secret of the universe from a physics HTM you would still own it, but the algorithms and the ideas in their whitepapers are theirs.

This still, like all such contracts, contaminates you with their IP: restricting what you can later work on, and even risking independent work suddenly becoming their property. If you work in even a slightly related field (neuroscience, AI) you might not want to risk it.

On the flip side, this is fairly common legalese for a small technology company, and the plain-spoken description of their policy in the blog seems fairly reasonable, if not binding. Consult a lawyer, pay a lot of money, and you probably still won't be sure exactly how it applies.

Starting companies to be heard? (2, Interesting)

Stochastism (1040102) | more than 7 years ago | (#18258482)

Having read the Hierarchical Temporal Memory (HTM) white papers, and knowing something of the area prior to that, it looks like Jeff Hawkins and his company have taken a lot of ideas and algorithms that already exist and hacked them together to implement his neocortex ideas... there are bits and pieces of graphical models, time-recurrent neural nets, Boltzmann machines, etc. It does some cool stuff, but nothing that AI and machine learning people haven't been doing for years. The difference is that Jeff has taken the entrepreneurial approach to AI. Instead of publishing and allowing the academic community (the original open source movement!) to peer review and contribute, he's formed a company to announce his ideas to the world -- ready or not. This isn't necessarily bad, but the proof of his ideas will be scaling them up to start solving some useful problems. Bring on the face recognition that isn't fooled by dark sunglasses and a false mustache!

Re:Starting companies to be heard? (1)

BuGaBoooo (1072662) | more than 7 years ago | (#18258798)

http://www.numenta.com/for-developers/education/HTM_Comparison.pdf [numenta.com]

"
The purpose of this document is to compare HTMs with several existing technologies for modeling
data. HTMs use a unique combination of the following ideas:

* A hierarchy in space and time to share and transfer learning
* Slowness of time, which, combined with the hierarchy, enables efficient learning of intermediate levels of the hierarchy
* Learning of causes by using time continuity and actions
* Models of attention and specific memories
* A probabilistic model specified in terms of relations between a hierarchy of causes
* Belief Propagation in the hierarchy to use temporal and spatial context for inference

Many of these ideas existed before HTMs and have been part of some of the models we describe below. The power of HTM comes from a unique synthesis of these ideas. "

Re:Starting companies to be heard? (1)

Gearoid_Murphy (976819) | more than 7 years ago | (#18260446)

I completely agree. While it's unconstructive to sit back and simply criticise the man, it's hard to give him a great deal of credit for re-dressing functionality that's long established. Fair play for giving it a shot, but I think this system will suffer from complexity when they start to scale up, particularly in their method for representing sequences of events, which seemed very sketchy in the white papers. Fair enough: if the sequence a -> b -> c exists, then the occurrence of a implies the occurrence of c. However, say these sequences exist: a -> b -> c, a -> d -> e, a -> r -> f. Then prediction becomes inaccurate unless some sort of contextual support is built into the system, and I haven't seen any mechanism to support that in the documentation. Believe me, I looked.
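The ambiguity described above is easy to see with a plain first-order model (a generic sketch, not Numenta's actual mechanism): once several stored sequences share the symbol a, the current symbol alone no longer determines the successor, while a longer history does:

```python
# First-order prediction: the successor is chosen from the current
# symbol only. With sequences a->b->c, a->d->e, a->r->f, the symbol
# 'a' alone has three possible successors; a two-symbol history
# (one form of "contextual support") resolves the ambiguity.

from collections import defaultdict

sequences = [["a", "b", "c"], ["a", "d", "e"], ["a", "r", "f"]]

first_order = defaultdict(set)   # symbol        -> possible successors
second_order = defaultdict(set)  # (sym1, sym2)  -> possible successors
for seq in sequences:
    for prev, cur in zip(seq, seq[1:]):
        first_order[prev].add(cur)
    for s1, s2, s3 in zip(seq, seq[1:], seq[2:]):
        second_order[(s1, s2)].add(s3)

print(sorted(first_order["a"]))    # ambiguous: ['b', 'd', 'r']
print(second_order[("a", "b")])    # unambiguous: {'c'}
```

Whether HTM's hierarchy supplies enough of that history in practice is exactly the open question.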

Re:Starting companies to be heard? (1)

MSBob (307239) | more than 7 years ago | (#18262080)

So perhaps everyone else has been dancing close to an answer but Hawkins' model (if it indeed encompasses all those varieties) may be much closer to the ultimate strong AI than anything that came before it?

Before anyone else says it (2, Insightful)

rhythmx (744978) | more than 7 years ago | (#18258548)

Someone needs to immediately train this to catch /. dupes and/or run Linux.

Hmm.... (2, Interesting)

ThePopeLayton (868042) | more than 7 years ago | (#18258646)

As a current student in neuroscience I would love to see this happen; however, there are a few major problems.

1) All the research into cortical circuitry is done in non-humans. There are definite similarities between our cortex and that of a rat, but there are also drastic differences; if there weren't, then rats would be able to talk, think, and reason like we do. (Yes, lots of research is being done in non-human primates, but this work is EXTREMELY expensive, and even non-human primates have different cortical circuitry than we do.)

(Not only are the cortices of different species drastically different, scientists often choose regions of cortex that have no correlate in humans. Many neuroscientists are studying the barrel cortex, a region of cortex that specifically integrates the signals from a rodent's whiskers. Humans don't have whiskers, and we also don't have a barrel cortex. Anything learned about the circuitry of the barrel cortex will not necessarily carry over to human cortex.)

2) Intra-population circuitry research examines very small subsets of the neurons that make up a bigger population. When studying neurons in the visual cortex, for example, the best anyone can do is look at the firing of about 150 neurons. When you consider that there are over 10,000,000,000 (ten BILLION) neurons making up the human brain, a small set of 150 neurons is almost nothing. We don't have sufficient technology to examine what each neuron in a specific population is doing.

3) Inter-population circuitry research only looks at which populations are connected to each other. Yes, we know what types of neurons project from one area of the brain to the next; however, this only gives a very rough schematic of the circuitry. The circuitry of both the cerebellum and the hippocampus has been described beautifully (both have been known for well over 50 years). However, once we know this circuitry, it sheds no light on how the circuitry actually accomplishes its task.

4) Failure to integrate both intra- and inter-population circuitry. I have yet to read a paper that does a good job of integrating these two approaches. Most neuroscientists pick one emphasis and stick with it. In order to understand exactly what the cortex is doing you must integrate all levels of research into your studies.

5) Study of the cortex alone is insufficient. The cortex projects to many regions of the brain whose functions are still unknown. The connections to these brain regions might not appear necessary, but if they really weren't necessary, why are they there? Back in the day, people who had really bad seizures would undergo what is called a "corpus callosotomy", the cutting of the fibers that connect the two hemispheres of the brain. At first the procedure was called a success. However, after further investigation it turned out that the people on whom this operation was performed had drastic problems. (For example, if a person was holding an object in their left hand (the sensory fibers project from the left hand to the right side of the brain) and they weren't allowed to see the object, then when the examiner asked what they were holding, they would respond that there was nothing in their hand.) This example only illustrates that upon initial examination many regions of the brain can appear to have no function, since lesioning them has no obvious adverse effects; that is what many people thought about the corpus callosum, but upon further examination it proved untrue. Before we can understand how the cortex fully functions, we must understand how the entire brain works with it.

Sorry to be a nay sayer but I have serious doubts whenever someone claims to have figured out how the cortex works.

Re:Hmm.... (0)

Anonymous Coward | more than 7 years ago | (#18259054)

it seems all very good to be visionary and to try new things, but as the saying goes, "you have to learn to walk before learning to fly" so it would seem to me.

i deal with a software system that collects maintenance data for equipment on oil platforms and attempts to provide a future schedule based upon this data.

i haven't yet heard of a package that does this job superbly - they all work, but not perfectly.

i think something like this should be gotten right before attempting something as complex as what jeff is doing.

Re:Hmm.... (1)

Capmaster (843277) | more than 7 years ago | (#18259074)

(Not only are the cortices of different species drastically different, scientists often chose regions of cortex that have no correlation in humans. Many neuroscientists are studying the Barrel Cortex. It is a region of cortex that is specifically designed to integrate the signals from the whiskers of a Rodent. Humans don't have whiskers and we also don't have Barrel Cortex. Anything learned about the circuitry of the Barrel Cortex will not necessarily correlate to human cortex.)
Correct me if I'm wrong, but isn't one of the main points of Hawkins' theory that all parts of the neocortex perform the same algorithm, no matter what the input, be it eyes, ears, or even whiskers? I'm not a neuroscientist and I'm definitely no expert on this, but I did read the book and I seem to remember that being a prominent point. Also, I was under the impression (from the book) that the only thing making animals less intelligent than humans was the size of the cortex. Again, please correct my misconceptions.

Re:Hmm.... (1)

fyngyrz (762201) | more than 7 years ago | (#18259228)

I think that's a very good, and very accurate summary. And I am an expert, or at least as much so as anyone in the field is, these days. :)

Re:Hmm.... (1)

HuguesT (84078) | more than 7 years ago | (#18259332)

If that were the case then elephants and whales would be much more intelligent than we are. There is no indication of any large animal being so smart.

Re:Hmm.... (1)

fyngyrz (762201) | more than 7 years ago | (#18259406)

If that were the case then elephants and whales would be much more intelligent than we are.

That's an entirely invalid simplification. There are large variations in structure, sensory input, etc., between species. Any one of these could set back - or set sideways, more interestingly - performance. For instance, bats process sounds into direction one heck of a lot better than we do. Cats and raptors, to name but two, process balance and visual information into far more athletic capability than we do. Some humans process information differently (Darwin, Einstein, Fischer, Hawkins, Newton, Mozart, Musashi, Rodin, Sagan, Sartre, Tammet), and they've got, or had, very similar brain structures to yours. This is a very delicate, highly variable area of function, and throwing sweeping generalizations about size, specialized regions, and foggy ideas like "intelligence" about in a cross-species manner as if they were definitive of performance simply clouds the issues at hand.

The proof will be in the results. Pay attention to those. Not people's opinions.

Re:Hmm.... (1)

vhogemann (797994) | more than 7 years ago | (#18260280)

This sounds like we have a huge general purpose CPU, while the animals have a tiny one and several special purpose DSPs...

Hmm hmm (1)

tgv (254536) | more than 7 years ago | (#18259776)

If he said that, he's very wrong. The cortex consists of dozens of areas with different cyto-architectonic (that means cellular structural) properties; see http://spot.colorado.edu/~dubin/talks/brodmann/brodmann.html [colorado.edu] for a nice map. Brodmann counted 46 of them, and modern views distinguish sub-areas in most of them. E.g., BA44 (Brodmann's Area 44) is considered to be involved in language processing (amongst other things), but is usually divided into 44a, b and c (there are different ways of naming these, too; e.g. the pars opercularis, or Broca's Area, for BA44).

So, the cortex contains many different areas with different physical properties and these are commonly tied to specific functions: e.g. language processing involves a few areas, and motoric processing involves a few different areas, and these never overlap. Consequently, any model that wants to approach the cortex at neuronal level should account for this.

And the cortex of a whale is much larger than that of humans...

Re:Hmm.... (1)

ThePopeLayton (868042) | more than 7 years ago | (#18261684)

Hawkins' theory that all parts of the neocortex perform the same algorithm
Well, that depends on how you define neocortex: some scientists call all cortex neocortex, while others refer only to the frontal lobe in humans as neocortex. If you look at the circuitry of all the cortical areas, then no, they don't use the same algorithm, and if Hawkins claims this then he is wrong. I haven't studied the frontal lobe that much, so I can't say much about the circuitry there, but I would be extremely surprised if the human-specific cortex were as simple as one basic algorithm.

the only thing making animals less intelligent than humans was the size of the cortex
Actually, this is a topic for debate. The animal with the greatest cortex area to brain mass is the dolphin; their cortex literally dwarfs ours. Yes, dolphins are extremely intelligent animals, but they lack many cognitive abilities that humans possess.

Re:Hmm.... (1)

tgv (254536) | more than 7 years ago | (#18263070)

Actually, dolphins were found to be rather dumb in a recent study. Search for "Manger" (the author) and "dolphins"...

Re:Hmm.... (2, Insightful)

fyngyrz (762201) | more than 7 years ago | (#18259208)

In order to understand exactly what the cortex is doing you must integrate all levels of research into your studies.

As a current student in neuroscience, you should know better than to make such a sweeping and inaccurate presumption. There are many paths to working models and working theories, and very few of them include "integrating all levels of research" or anything remotely similar. It is entirely possible to code up (for example) a brand new, highly functional sorting method without either knowing all the other methods, the theory underneath them, or even the theory underneath your own. It is possible to move your arm without knowing a thing about partial differential equations, yet you can't really model it easily without them using traditional approaches. So get down off that high horse. I think the thin air has addled your thinking.

Re:Hmm.... (1)

Christianson (1036710) | more than 7 years ago | (#18260200)

(Not only are the cortices of different species drastically different, scientists often chose regions of cortex that have no correlation in humans. Many neuroscientists are studying the Barrel Cortex. It is a region of cortex that is specifically designed to integrate the signals from the whiskers of a Rodent. Humans don't have whiskers and we also don't have Barrel Cortex. Anything learned about the circuitry of the Barrel Cortex will not necessarily correlate to human cortex.)

"Barrel cortex" is a descriptive name given to the primary somatosensory cortex of rodents. Humans do have primary somatosensory cortex, and it follows the same general principles of organization in the two species. The "barrels" that correspond to the whiskers are over-representations that correspond to the primary sensory modality of rats and mice. While humans don't have whiskers, and don't have barrels corresponding to their fingers (the functional homologue of the role of the whiskers), you can make an interesting case that many of the features that define barrel cortex are replicated in primate visual cortex -- the dominant sensory modality for those species. So, while humans don't have barrel cortex, we do have a "primary sensory modality cortex," and by studying as many "primary sensory modality cortices" as we can, we can hope to understand principles of organization.

The circuitry of both the cerebellum and the hippocampus has been described beautifully (both have been known for well over 50 years). However, once we know this circuitry, it sheds no light on how the circuitry actually accomplishes its task.

This is true, but only in the sense that in neither of these areas is the "task" well-understood. When progress is made in understanding what role a nucleus serves, then the knowledge of the architecture generally provides huge insights into how that role is accomplished.

Failure to integrate both intra and inter population circuitry. I have yet to read a paper that does a good job of integrating these two studies. Most neuroscientist pick one emphasis and stick with it. In order to understand exactly what the cortex is doing you must integrate all levels of research into your studies.

I'm not entirely sure what you mean by this. Assuming that you mean that people do not both record the spiking patterns of neurons and determine how those neurons are connected, this is because this is impossible in any system with a neocortex. A given neuron might connect to at most hundreds of other neurons; this means that the odds of selecting two neurons that are directly connected are infinitesimal. It is possible in some invertebrates to do this sort of study, but they do not have a neocortex.

Study of the cortex is insufficient.

Your general point is well made, but the corpus callosum is not a brain area; it's just a fibre tract. The complications arise from sub-cortical nuclei such as the basal ganglia and the cerebellum, that connect to essentially the entirety of the brain, or the thalamus, a near-obligatory relay in getting information from the receptors to the neocortex. There is a vast amount of auditory processing in the cochlear and superior olivary nuclei and inferior colliculus before auditory information even reaches the neocortex; there are visual-receptor-to-motor-output loops that bypass the cortex entirely.

Re:Hmm.... (1)

lukesl (555535) | more than 7 years ago | (#18264282)

All the research into cortical circuitry is done in non-humans. There are definite similarities between our cortex and that of a rat, but there are also drastic differences, if there weren't then rats would be able to talk, think, and reason like we do.

First, the different cognitive abilities of rats could be due either to smaller size or different connectivity of cortical modules that were absolutely identical. I'm not saying either of these is the case, but your argument isn't exactly solid. Second, it's true, their cortex is different, but the really interesting thing about cortex is that its general layout is largely conserved even when its functional tasks are diverse. That implies that cortex (or the thalamocortical circuit) is something that evolution stumbled onto that is modular, either in the sense that it is performing the same computation (i.e. has the same dynamics) in different brain regions, or that relatively small changes to the cytoarchitecture lead to very different dynamics that underlie different computations in different brain regions or organisms. I don't know the answer to that, and I don't think you or anyone else does either...and I think this is a case where rat cortex needs to be studied in order to determine whether or not it was worth studying. Personally, I think it is.

2) Intra-population Circuitry research examines very small subsets of neurons that make up a bigger populations. When studying neurons in the visual cortex for example the best anyone can do is look at the firing of about 150 neurons. When you consider that there are over 10,000,000,000 (BILLION) neurons that make up the human brain a small set of 150 neurons is almost nothing. We don't have sufficient technology to examine what each neuron in a specific population is doing.

Once again, your logic isn't quite solid. The retina is part of the brain--if you could record from 150 retinal neurons, what difference does it make how many cells there are in cortex? I work on developing new technologies for high density multisite recording, and my best guess is that 100 years from now, there will be a new level of abstraction somewhere between action potentials and fMRI, where we will describe local circuit activity in terms of attractor dynamics (something like what Walter Freeman calls the "mesoscopic" level). It's possible that even by recording 150 neurons at a time, we can piece together a sufficient description of local circuit activity so that we will not necessarily need to record the action potential of every single neuron in a given brain region to know what it's doing. This also partially addresses your issue #4.

Back in the day people who had really bad seizures would have what is called a "Corpus Callosomy"

That's what I was taught in college too, but I think most people who had their corpus callosum cut did it because of pituitary tumors. They used to go in through the top, cutting the CC to get into the sella turcica. Now they use some sort of LeFort procedure, where they actually break your face open (!) and go in through there, which is apparently less traumatic.

Sorry to be a nay sayer but I have serious doubts whenever someone claims to have figured out how the cortex works.

So do I, but to be somewhat fair to Hawkins, his primary claim is that he knows what the cortex does, not how it does it. Of course, even if what he's saying is 100% correct, it still explains only a subset of what the brain (or cortex) does.

Old news, long since debunked (-1)

Anonymous Coward | more than 7 years ago | (#18258762)

This is old old old old old news, and debunked long ago:
http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html [skeptic.com]

Why does the media pass off bunk like this as news?
Answer here:
http://www.csmonitor.com/2007/0305/p09s01-coop.html [csmonitor.com]

Re:Old news, long since debunked (1)

fyngyrz (762201) | more than 7 years ago | (#18259284)

That's not a "debunking"; that's a closed-minded opinion-fest. Reminds me of Papert's and Minsky's huge rants about what neural nets couldn't do, exemplified by the (incorrect) claim that they couldn't even be made to do an XOR. They published, ran off at the mouth like college kids with their first exposure to ideas orthogonal to their thinking, and then were proved soundly wrong by the facts.

Some advice for the closed-minded: judge this fellow's work by his actual results, not by what other people think his results may turn out to be. He's published the code, and those of us who are working in this area are very interested. That still doesn't mean we'll use his work the way he will, or that we'll get the same results. Just be a little patient and just a little less judgmental. Or not; after all, even Minsky and Papert couldn't change the facts. They turned out to be well educated, highly opinionated, deeply respected fuckups. You want to join them? Jump to conclusions. Nature's got a place for you, too. :-)

Re:Old news, long since debunked (1)

perkr (626584) | more than 7 years ago | (#18260684)

"Perceptrons" by Minsky and Papert was correct regarding perceptrons' limited computational expressiveness. Following that, they incorrectly conjectured that their negative result would hold for three or more perceptron layers, which later turned out not to be true. So cut them some slack; they gave the research community significant, useful results, and probably acted with their best judgment.

almost... (3, Interesting)

penguinbroker (1000903) | more than 7 years ago | (#18258830)

This would be great if computational power were dirt cheap. People smarter than you or I have already thought about this.

http://en.wikipedia.org/wiki/Baum-Welch_algorithm [wikipedia.org] http://en.wikipedia.org/wiki/Viterbi_algorithm [wikipedia.org]

The first is an algorithm that uses the forward-backward procedure "to find the unknown parameters of a hidden Markov model." The second is a related dynamic-programming algorithm for finding the most likely sequence of hidden states (for reference).

I work in computational linguistics, and the time an algorithm takes to run and the amount of memory it requires are serious limitations. That's why ad-hoc systems are so common.
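For reference, the Viterbi recursion is short enough to sketch directly. The toy HMM below is the standard textbook weather example with made-up parameters, not anything from Numenta:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # V[t][s]: probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("Rainy", "Sunny")
obs = ("walk", "shop", "clean")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(obs, states, start_p, trans_p, emit_p))
# -> ['Sunny', 'Rainy', 'Rainy']
```

Viterbi itself is cheap (linear in sequence length); the expense the parent is talking about comes from Baum-Welch, which wraps the same forward-backward quantities in an EM loop over the whole training corpus.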

Re:almost... (1)

rm999 (775449) | more than 7 years ago | (#18259334)

I believe he is working on hardware solutions to that. One of the things he emphasizes in his book "On Intelligence" is that machines need to have more memory to do what AI wants to do.

Re:almost... (1)

Dan D. (10998) | more than 7 years ago | (#18264462)

People smarter than you or I have already thought about this.

So what?

Infringes on Thaler's Neural Network (NN)Patents? (1)

littlewink (996298) | more than 7 years ago | (#18259082)

Hawkins' solutions likely overlap with Dr. Stephen Thaler's patents [imagination-engines.com] for neural networks (NN). In particular, Thaler's algorithms inject noise into a proprietary NN system (actually two or more NNs conjoined) to generate novel patterns (that is, to _discover_ new patterns). For example, Thaler trained and used such a NN system to generate thousands of possible musical riffs, which he has now copyrighted. Thaler is in business and making money [imagination-engines.com].
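For what it's worth, the core noise-injection idea is easy to caricature: perturb the parameters of a system that reproduces stored patterns, and read off the perturbed outputs as candidate novel patterns. This is a generic sketch of that idea only; the data and names are made up, and it is nothing like Thaler's patented architecture:

```python
import random

random.seed(0)  # reproducible illustration

def perturb(values, sigma):
    """Add Gaussian noise to each element of a stored pattern."""
    return [v + random.gauss(0.0, sigma) for v in values]

# A stored "riff" as MIDI note numbers; perturbing and re-quantizing
# it yields structured variations rather than pure noise.
riff = [60, 62, 64, 65, 67]
novel = [round(v) for v in perturb(riff, 1.5)]
print(novel)
```

In the real systems the noise goes into the connection weights of a trained network, and a second network filters the outputs; the toy above only shows why noise on learned structure produces "near misses" rather than garbage.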

Re:Infringes on Thaler's Neural Network (NN)Patent (1)

fyngyrz (762201) | more than 7 years ago | (#18259242)

And yet again, we see the potential of the patent system to retard progress instead of stimulating it; to favor cashing in over invention; to stifle, crush, and force back progress, however far removed from the original inventor that progress may be. The PTO is a hive of scum and villainy.

Abolish it. It is out of hand.

If you think that's impressive, just wait. (1)

quixoticsycophant (729112) | more than 7 years ago | (#18259254)

A truly sentient software program is mere child's play compared to the awesome potential of this guy. As the hybrid clone of Stephen Hawking and Richard Dawkins, Jeff Hawkins is destined to become one of the leading minds of the 21st century.

Re:If you think that's impressive, just wait. (1)

Riktov (632) | more than 7 years ago | (#18259482)

If he'd just change his first name to "Stephard" it would be perfect...

Old Code (1)

rm999 (775449) | more than 7 years ago | (#18259290)

I did not RTFA; who has time for that anymore? ;) But regardless, in true Slashdot fashion, here is my opinion:

I played around with some of his publicly available code a few months ago. It was pretty impressive on a toy problem (recognizing a small set of characters) but very, very slow to train (on the order of hours or days to learn that simple problem).

But on the other hand, I can't think of any sort of technology that could do better than it (I am into machine learning and AI). Also, slow training is not a big deal if it can compute fast once trained; the human brain took millions of years to evolve.

From what I could tell, his technology is (was?) a glorified Bayes net with time forced into the model. To train the net to recognize images, he just moved an image around a bunch and had the net brute-force learn all the possible patterns. Training this could get tedious, to say the least. In theory it sounds like he's on to something, but it comes off as a pretty simple modification to an old algorithm. I found his book entertaining, and insightful at times, but not revolutionary. He reasons out some really cool ideas, but it reads more as philosophy than science at times.
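The brute-force translation training described above can be caricatured in a few lines. This is a deliberately crude sketch of the general idea (memorize every shifted copy of each template, then recognize by exact lookup), not Numenta's actual algorithm:

```python
def shifts(img):
    """All translations of a small binary image (tuple of row-tuples);
    pixels shifted off the edge are dropped, blank frames skipped."""
    h, w = len(img), len(img[0])
    for dy in range(-h + 1, h):
        for dx in range(-w + 1, w):
            out = [[0] * w for _ in range(h)]
            for y in range(h):
                for x in range(w):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = img[y][x]
            shifted = tuple(tuple(r) for r in out)
            if any(any(r) for r in shifted):
                yield shifted

def train(templates):
    """'Train' by memorizing every shifted copy of each labeled template."""
    memory = {}
    for label, img in templates.items():
        for s in shifts(img):
            memory[s] = label
    return memory

def recognize(memory, img):
    return memory.get(tuple(tuple(r) for r in img), "unknown")

templates = {"T": ((0, 1, 0),
                   (1, 1, 1),
                   (0, 0, 0))}
memory = train(templates)
print(recognize(memory, ((0, 0, 0),
                         (0, 1, 0),
                         (1, 1, 1))))  # a shifted copy of "T"
```

The point of the caricature: exact memorization of transformed copies blows up combinatorially, which is exactly why training the real thing on anything bigger than toy images gets slow.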

Cortex Sim == Bullsh*t (5, Interesting)

Anonymous Coward | more than 7 years ago | (#18259310)

Hawkins is a rich guy, and no one feels like telling him that his stuff is crap. He had a few smart people working for him at some point, but when they told him his ideas were half-baked and not new, he just fired their asses.

Here is what many people in machine learning and computer vision think about Hawkins's stuff:
- It's way, way behind what other people in vision and machine learning are doing. Several teams have biologically inspired vision systems that can ACTUALLY LEARN TO RECOGNIZE 3D OBJECTS. Hawkins merely has a small hack that can recognize stick figures in 8x8-pixel binary images. Neural net people were doing much more impressive stuff 15 years ago.
- Hawkins's ideas on how the brain learns are not new at all. Many scientists in machine learning, computer vision, and computational neuroscience have had general ideas similar to the ones described in Hawkins's book for a very long time. But scientists rarely talk about philosophical ideas without actual scientific evidence to support them. So instead of writing a popular book of half-baked conceptual ideas, they build theories and algorithms, they build models, and they apply them to real data to see how they work. Then they write a scientific paper about the results, but they rarely talk about the philosophy behind them.

It's not unusual for someone to come up with an idea they think is brand new and will revolutionize the world. Then they try to turn those conceptual ideas into real science and practical technology, and quickly realize that it's very hard (the things they thought of as mere details often turn out to be huge conceptual obstacles). Then they realize that many people had the same ideas before but ran into the same problems when trying to reduce them to practice (which is why you never heard about those ideas). These people eventually scale back their ambitions and start working on ideas that are considerably less revolutionary, but considerably more likely to result in research grants, scientific publications, VC funding, or revenue.

Most people go through that "naive" phase (thinking they will revolutionize science) while they are grad students. A few of them become successful scientists. A tiny number actually manage to revolutionize science or create new trends. Hawkins quit grad school and never had a chance to go through that phase. Now that he is rich and famous, the only way he will understand the limits of his idea is by wasting lots of money (since he obviously doesn't care about such things as "peer review"). In fact, many reputable AI scientists have made wild claims about the future success of their latest ideas (Newell and Simon with the General Problem Solver, Rosenblatt with the Perceptron, Papert, who thought in the '60s that vision would be solved over a summer, Minsky with his "Society of Mind", etc.).

No scientist will tell Hawkins all this, because it would serve no purpose (other than pissing him off). And there is a tiny (but non-zero) probability that his stuff will actually advance the field.

    - Anonymous Scientist

Re:Cortex Sim == Bullsh*t (0)

Anonymous Coward | more than 7 years ago | (#18260844)

Existing research doesn't LEARN TO recognize; it just RECOGNIZES. That's a somewhat different thing: the first is more useless, the second more useful but stagnant.

I read the book and tried the software (1)

MarkWatson (189759) | more than 7 years ago | (#18259416)

After participating in the neural network hype of the 1980s (I spent a year on a DARPA committee for NN tools and was the original author of the SAIC ANsim NN software product), I found Hawkins's book to be light technically, but I really enjoyed reading it.

His work might have been inspired by Kohonen's classic Springer-Verlag book "Self-Organization and Associative Memory".

I downloaded their software last night but have had time for little more than building and running two examples. When I get 20 hours to really kick the tires, I will blog about it on my AI blog: http://artificial-intelligence-theory.blogspot.com/ [blogspot.com]

I am hoping that NTA will really simulate the temporal memory and spatial invariance that the neocortex apparently has.

A little off topic, but I love the way they package the NTA software: most of the low-level code is C++, which builds into shared libraries that are loaded and used from a Python wrapper. Neat stuff. The free license is only for non-commercial use, BTW.
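That compiled-core-plus-Python-wrapper split is easy to demo with nothing but the standard library's ctypes module. The snippet below is a generic illustration of the pattern (it loads the running process, which links libc on POSIX, instead of Numenta's library; it is not their actual binding code):

```python
import ctypes

# Load a shared object and call into it from Python: the same shape
# as a C++ core driven by a thin Python wrapper. CDLL(None) returns
# a handle to the current process on POSIX systems, so libc symbols
# such as abs() are visible.
core = ctypes.CDLL(None)
core.abs.argtypes = [ctypes.c_int]  # declare the C signature
core.abs.restype = ctypes.c_int
print(core.abs(-42))  # -> 42
```

For a real C++ core you would compile with `extern "C"` entry points (or use a generator like SWIG or Boost.Python, as many projects of that era did) so the symbols are loadable by name.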

Unfinished business (1)

HW_Hack (1031622) | more than 7 years ago | (#18259578)

He may have founded Palm - and I do encourage him to push forward with Numenta - but I'm still trying to get my Palm to sync on 2 different computers ..............

Old stuff? (1)

sneez (687780) | more than 7 years ago | (#18259618)

I'm no expert on this topic, but this doesn't sound very new or revolutionary to me. It looks very similar to well-known models of neural networks. Haven't concepts like backpropagation and pattern recognition been around for ages?

Re:Old stuff? (1)

Zarf (5735) | more than 7 years ago | (#18263076)

I'm no expert on this topic, but this doesn't sound very new or revolutionary to me. It looks very similar to well-known models of neural networks. Haven't concepts like backpropagation and pattern recognition been around for ages?

Well, when I read this:

Numenta's goal is to build a software model of the human brain capable of face recognition, object identification, driving, and other tasks currently best undertaken by humans.

I thought: Hey! That's been my goal my whole career. Nobody pays me for that stuff, though! The difference between you and me knowing this type of stuff and someone like Jeff Hawkins declaring he is working on it is that Jeff Hawkins has the money, time, and intelligence to actually pull it off. The problem for people like you and me is that we want to do those things but can't actually spend the cash to make them happen, because we still worry about making the rent/mortgage and have to work on silly little programs to make money.

Future (0)

Anonymous Coward | more than 7 years ago | (#18260164)

>Numenta's goal is to build a software model of the human brain capable of face recognition, object identification, driving

Great, they're going to invent the Johnny Cab

Jeff Hawkins Q&A (0)

Anonymous Coward | more than 7 years ago | (#18260464)

Here's a Q&A with Jeff Hawkins [cioinsight.com] from May 2006 that asks him to better explain how his theory works and how it can be applied.

Of 2 Minds on Computerized Minds (1)

CurtissWith2esses (1072822) | more than 7 years ago | (#18263112)

Not too long ago, two of IT's top original thinkers and innovators, Jeff Hawkins and Ray Kurzweil, appeared at an MIT emerging-tech conference to discuss artificial intelligence. Both see computing mirroring the functions of the human brain. But they disagree on how fast scientists and engineers will develop technologies that exhibit the most complex cerebral traits of humans: self-awareness, emotion, and even a sense of one's own mortality.

Because of technology's exponential growth, Kurzweil sees emotion-laden, self-aware machines being developed by mid-century.

Hawkins takes a more limited view of technology patterned after the human brain than Kurzweil's prognostications, saying such artificial beings will take centuries, not decades, to create; the brain is just too complex to replicate that quickly. In this video, Hawkins says robots that run amok [youtube.com] will remain science fiction for a very long time. In a recent magazine interview, Hawkins discusses his theories on building an intelligent machine [cioinsight.com].

Here's a series of podcasts of Kurzweil's vision [informationweek.com] of a computer that reasons and shows emotion.

Amateur scientist (0)

int2 (1049180) | more than 7 years ago | (#18263130)

High-grade scientists use equations to express ideas; weak minds use a ton of words.