
employee list (0)

Anonymous Coward | about 9 years ago | (#12036178)

Yankovich, Gore, Neumann, Einstein...

Somewhat Offtopic (2, Interesting)

AKAImBatman (238306) | about 9 years ago | (#12036192)

Can anyone point me toward some research on associative AI? I.e., instead of AI trained by neural nets or genetic algos, does anyone know of research on "scoring" words based on their relation to other words? By extending words into concepts, an AI could become quite intelligent at things like spam filtering.

Just something I was thinking about lately. Anyone?
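
For what it's worth, the "score words by their neighbors" idea fits in a few lines. A toy sketch in Python (my own illustration, not from any paper; the training snippets and the 0.1 association weight are made up):

from collections import defaultdict
from itertools import combinations

# Toy associative scorer: words inherit spamminess from the words
# they co-occur with in labeled training text.
cooc = defaultdict(lambda: defaultdict(int))
spam_score = defaultdict(float)

def train(text, is_spam):
    words = text.lower().split()
    for w in words:
        spam_score[w] += 1.0 if is_spam else -1.0
    for a, b in combinations(set(words), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def score(text):
    total = 0.0
    for w in text.lower().split():
        total += spam_score[w]
        # The associative part: also pull in the scores of words
        # this word has been seen alongside.
        for other, n in cooc[w].items():
            total += 0.1 * n * spam_score[other]
    return total

train("cheap pills enlargement offer", is_spam=True)
train("meeting agenda attached regards", is_spam=False)
print(score("special enlargement offer"))  # positive means spammy

Extending words into concepts would just mean doing the same scoring over clusters of associated words instead of single tokens.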

Re:Somewhat Offtopic (3, Interesting)

Anonymous Coward | about 9 years ago | (#12036347)

That is part of Natural Language Processing, where the goal is to figure out the meaning of sentences. There has been much progress in this field, including programs that can read news articles and then paraphrase the information.

Google "Natural Language Processing".

Re:Somewhat Offtopic (5, Funny)

Shadow Wrought (586631) | about 9 years ago | (#12036362)

HAL: Dave, do I need a penis enlargement?
Dave: For the millionth time HAL, no. You don't have one, remember?
HAL: But if I did, do you think I would get better functionality if I used Viatroxx?
Dave: No. Now Hal...
HAL: Dave, it looks like there's another poor Nigerian who needs my help.
Dave: Aaaaaaaaaaaaaaaaaarrrrrrrrrrrrrrrrgggggg!
HAL: Dave? What are you doing Dave?

Re:Somewhat Offtopic (1)

leodepisa (870534) | about 9 years ago | (#12037097)

Check out Xindong Wu's recent work on database classification: http://www.cs.uvm.edu/~xwu/home.html

Palm strategy for success (0)

Anonymous Coward | about 9 years ago | (#12037594)

Once they have some potential success, they are going to waste their lead by creating products which are only incrementally better than their original release.

Then, once their competition starts making products which are superior to their own, they are going to spin their product and AI companies off into separate units. The product unit is going to assume that every man, woman, child, and dog is going to buy something from them within the next six months, and manufacture accordingly. The resulting glut will cost them most of their money, and they will be forced to sell most of their inventory at a loss, which also serves to cannibalize the market for their newer products.

They will eventually merge with a company which makes compatible, but far superior, products based on their original design.

In other breaking news (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12036199)

A low paid IT professional has moved between jobs 4 times in the last 7 years.

No usb driver update for visor edge... (-1, Offtopic)

Anonymous Coward | about 9 years ago | (#12036200)

...that doesn't bluescreen on a multi-processor/HT system? I will never buy another palm, son of palm, or son of handspring product again. Without drivers, your hardware becomes a boat anchor. I will not soon forget.

Palm-Like "AI"? (5, Funny)

Spencerian (465343) | about 9 years ago | (#12036210)

You had to reset Palm PDAs in interesting ways, like poking a tiny button hidden in a hole with a paper clip. Imagine what you'd have to do to a bot with Palm-like AI...

"Sir, to reset the machine, you'll need to sharply press its reset button, located at the back of the machine, just before its legs. just quickly pop your foot against it to press it."

"Uh, are you telling me that to reset it, I have to kick its ass?"

"Er...yes, sir."

Re:Palm-Like "AI"? (2, Interesting)

Hachey (809077) | about 9 years ago | (#12036403)

"Uh, are you telling me that to reset it, I have to kick its ass?"

"Er...yes, sir."



Er, if you want an AI's reset to be life-like, give it a good swift kick in the balls. Ever seen a guy go down after a good kick? In hindsight, it kinda reminds me of a hard reset...


-----
Check out the Uncyclopedia.org [uncyclopedia.org] , the only wiki source for not-semi-kinda-untruth about things like Kitten Huffing [uncyclopedia.org] and Pong! the Movie [uncyclopedia.org]!

Will This Be Part of the New Palm OS? (5, Funny)

canfirman (697952) | about 9 years ago | (#12036213)

Great, just what I need: an AI app that keeps popping up saying, "You know you should go to that meeting. What do you mean you don't want to go? Did you remember your wedding anniversary? Have you called your wife? Who's this 'Elle' person in your phone book? You should stop playing 'Tetris' so often..."

Really! (0)

Anonymous Coward | about 9 years ago | (#12036239)

That would be kind of cool!

I think someone should make that just for fun :)

hmm sounds like a good summer project.

Re:Will This Be Part of the New Palm OS? (1)

ackthpt (218170) | about 9 years ago | (#12036248)

Great, just what I need: an AI app that keeps popping up saying, "You know you should go to that meeting. What do you mean you don't want to go? Did you remember your wedding anniversary? Have you called your wife? Who's this 'Elle' person in your phone book? You should stop playing 'Tetris' so often..."

Sounds like one of those Dis-organizers from Discworld. "Bingley Bingley beep! Insert Your Name Here, it is eight thirty aye em, you have a meeting with the Patrician..."

Re:Will This Be Part of the New Palm OS? (3, Funny)

CodeBuster (516420) | about 9 years ago | (#12036882)

Just wait until it says, "I'm sorry Dave, but I'm afraid that I just can't do that..."

Re:Will This Be Part of the New Palm OS? (2, Funny)

PMuse (320639) | about 9 years ago | (#12037335)

Me: "This doesn't look like GenCon."
PalmAI: "No, this is your dentist appointment. I only told you it was GenCon so you'd be here."
Me: "But, but . . ."
PalmAI: "Now, be a good girl and go sit in the nice chair."

Here, let me write an AI program in BASIC (1, Funny)

Randy Rathbun (18851) | about 9 years ago | (#12036231)

PRINT "I WILL GUESS YOUR WEIGHT"
FOR I=0 TO 1000
PRINT "DO YOU WEIGH "; I; " POUNDS?"
IF INSKEY="Y" THEN BREAK
NEXT I

Apologies to Penn Jillette [pennandteller.com]

Re:Here, let me write an AI program in BASIC (0)

Anonymous Coward | about 9 years ago | (#12036905)

What if I weigh more than 1000 pounds?

"Why don't you just tell me your weight"

Could this be the key (3, Funny)

affinity (118397) | about 9 years ago | (#12036234)

Re:Could this be the key (0, Troll)

ackthpt (218170) | about 9 years ago | (#12036286)

to the machines taking over...

Sounds like an improvement.

A month or so back Ukraine tosses an inept and corrupt government.
Today Kyrgyzstan chucks its dubious government.

Who knows, maybe some day the people of the USA will follow suit, if they're not so lazy as to leave it to machines to do.

I'm skeptical that this is ready for prime-time (5, Insightful)

DoctoRoR (865873) | about 9 years ago | (#12036589)

The article gives little detail of the technology, and it's not like the general ideas Hawkins describes haven't been explored by people during the many decades of AI/neural networks research. The Numenta website gives the following:

HTM is "hierarchical" because it consists of memory modules connected in a hierarchical fashion. The hierarchy resembles an inverted tree with many memory modules at the bottom of the hierarchy and fewer at the top. HTM is "temporal" because each memory module stores and recalls sequences of patterns. HTM is hierarchical both temporally and spatially. An HTM system is not programmed in a traditional sense; instead it is trained. Sensory data is applied to the bottom of the hierarchy and the HTM system automatically discovers the underlying patterns in the sensory input. You might say it "learns" what objects are in the world and how to recognize them. Time is an essential element of how HTM systems work. First, to learn the patterns in the world, the sensory data must flow over time just as we move our eyes to see and move our hands to feel. Second, because every memory module stores sequences of patterns, HTM systems can be used to make predictions of the future. They not only discover and recognize objects but they can make predictions about how objects will behave going forward in time.

That sounds like a number of neural network approaches, including Stephen Grossberg's work [bu.edu] at BU. Although Hawkins seems to be a very bright guy, this field is littered with very bright researchers who made bold claims, and none of those efforts have yielded revolutionary businesses. Anyone remember (Stanford AI researcher) Edward Feigenbaum's Fifth Generation book in the 1980s? Doug Lenat's Cyc project?

Remember the huge difference between one neuron's firing rate and the clock speed for processors. The brain operates in a way that's fundamentally different from how we program and how computers operate: massive parallelism with slow components versus (mostly) serial computation. So when a company says they'll market a software solution to something which scientists haven't figured out yet, I am indeed skeptical. This is really more research effort than commercial venture, and Numenta admits this: "It may well take several years before products based on HTM systems are commercially available."

I hope there's something here. I'd love to see an outsider come in with fresh ideas and create a software platform to explore neuro-inspired programs. But let's be realistic and remember the history of AI. A red flag is the lack of any scientific papers available from the Numenta web site. If they are far enough along to make a software development kit, they should have been publishing results in peer-reviewed journals (with appropriate patent filings if necessary). So far, the only literature published is a trade book: On Intelligence.
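
The "memory module that stores and recalls sequences of patterns" idea from the Numenta blurb quoted above is easy to caricature. A toy single-node sketch in Python; this is my own illustration of the general concept, emphatically not Numenta's algorithm (which, as noted, is unpublished):

from collections import defaultdict

class SequenceNode:
    """Toy 'temporal memory' node: remembers which pattern tends to
    follow which, and predicts the most frequent successor."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, pattern):
        if self.prev is not None:
            self.transitions[self.prev][pattern] += 1
        self.prev = pattern

    def predict(self):
        successors = self.transitions.get(self.prev)
        if not successors:
            return None
        return max(successors, key=successors.get)

node = SequenceNode()
for p in ["A", "B", "C", "A", "B", "C", "A", "B"]:
    node.observe(p)
print(node.predict())  # "C", the learned successor of "B"

In an HTM-style system, many such nodes would be stacked in a hierarchy, with each level's output feeding the level above; this sketch shows only the sequence-memory ingredient of a single node.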

It's all just a sales gimmick for his book! (0)

Anonymous Coward | about 9 years ago | (#12037656)

Did you notice how the page describing the technology is a link to the main researcher's book?

Clearly, the entire research "company" is a front: it's just a web page set up to promote sales of Hawkins's new book!

Fiendishly clever, isn't he! A guy that clever must have something interesting to say! I think I'll check out his new book...

Re:Could this be the key (0)

Anonymous Coward | about 9 years ago | (#12037086)

No, but it's a smart move...

All the rage... IBM too... (5, Interesting)

tquinlan (868483) | about 9 years ago | (#12036238)

According to news.com.com.com.com, IBM is working on something similar [com.com]...

Re:All the rage... IBM too... (1)

ggvaidya (747058) | about 9 years ago | (#12036462)

According to news.com.com.com.com, IBM is working on something similar [com.com]...

*winces*

neocortex? (5, Interesting)

dhbiker (863466) | about 9 years ago | (#12036241)

Numenta is developing a new type of computer memory system modeled after the human neocortex

surely this technology would be incredibly slow? (this is not a troll, read on before you mod me down!)

From what I remember from my neural networks days, the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and computers simply aren't able to exploit this as they aren't designed to work like this. They are instead designed to do massively serial operations using extremely powerful chips (neurons), because the overhead of managing these parallel operations synchronously is too great (the human brain/neocortex works asynchronously).

Am I wrong about this, or am I missing something great that they've stumbled across?

Re:neocortex? (0)

Anonymous Coward | about 9 years ago | (#12036329)

I tend to agree with you. The whole part about Hierarchical Temporal Memory (HTM) makes me wonder if this is really the best way to approach true AI, or the next step for AI in computers... and I always tend to associate hierarchical with slow, though for no real good reason...

Re:neocortex? (4, Insightful)

AKAImBatman (238306) | about 9 years ago | (#12036353)

From what I remember from my neural networks days the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this

Computers aren't *normally* designed like this. They can be, however, and in recent years have been moving in that direction. When neural networks were first being researched, a Cray supercomputer was about the closest you could get to that sort of parallelism. Fast forward to today and we find that Intel (Pentium), AMD (AMD64), Sun (SPARC), and Sony (Emotion Engine) are all building machines that are highly parallel in nature.

Even more interesting is that today you can build yourself a custom, massively parallel computer on a shoestring budget. All you need is a handful of FPGAs, a PCB layout service like Pad2Pad [pad2pad.com], a few other parts, and reasonable VHDL or Verilog skills. That's more or less what OpenRT [openrt.de] did to build their SaarCORE [saarcor.de] architecture. :-)

There's no comparing the parallelism (1)

DoctoRoR (865873) | about 9 years ago | (#12036721)

Computers aren't *normally* designed like this. They can be however, and in recent years have been moving in that direction.

There is a massive difference between the parallel nature of neural processing and that of Intel and AMD chips. Saying we are moving toward massive parallelism of brain-like proportions is like saying we are five miles outside of Washington D.C., walking towards California. The differences are required just by the elements being used. Look at the operating speed of the neuron versus the clock rate of our chips.

Re:There's no comparing the parallelism (1)

AKAImBatman (238306) | about 9 years ago | (#12036865)

Saying we are moving in a massively parallel nature of brain-like proportions is like saying we are five miles outside of Washington D.C. walking towards California.

You're still walking toward California. :-)

My only point is that computer design has been slowly moving toward parallelism instead of single thread performance. Granted, examples like Hyperthreading are very primitive forms of this, as they are intended to encourage parallel computations rather than be a serious parallel platform themselves.

However, the Emotion Engine, SaarCORE, and traditional Cray designs are all examples of machines that have focused more heavily on parallelism. In fact, there are very few limits on how parallel we can design a system. The primary one happens to be the amount of I/O a system is capable of. The more parallel a system, the more I/O pins are needed to keep it running smoothly. Unfortunately, I/O pins tend to be very expensive on a per-chip basis, so most machines attempt to share as much I/O as possible.

In short, if their intent is to build an affordable system that is massively parallel in nature, then I see little standing in their way. The only reason why these machines aren't in common usage is the lack of a market.

Internal communication is a problem as well (0, Troll)

Eternally optimistic (822953) | about 9 years ago | (#12037671)

We can pack large numbers of CPUs on a chip today, considering you can make a simple 16-bit processor in less than 1mm^2. The problem with neural networks is the massive fan-out and fan-in of communication. There are thousands of connections for each neuron, and we can't manage that many wires even on-chip. So this ends up being time-multiplexed, meaning you slow things down by a factor of 10,000 or more.

That's aside from other problems; e.g., we have trouble modeling a slug with 9 neurons. But I'm not up to speed on that one.
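
The slowdown figure above falls straight out of the fan-in numbers. Rough arithmetic in Python, with ballpark constants (both are illustrative, not measurements):

# Each cortical neuron has on the order of 10,000 synapses, but an
# on-chip cell can realistically be given about one shared link, so
# its "parallel" fan-in has to be streamed through serially.
synapses_per_neuron = 10000   # common ballpark figure
links_per_cell = 1            # one time-shared bus per cell
print(synapses_per_neuron // links_per_cell)  # 10000x slowdown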

Re:neocortex? (1)

Bearpaw (13080) | about 9 years ago | (#12036406)

From what I remember from my neural networks days the human brain/neocortex works so well because of its massively parallel nature (not because of the processing power of any one neuron), and that computers simply aren't able to exploit this as they aren't designed to work like this ...

Most current computers aren't designed to work like this.

That's what most people belive - and it's false (2, Interesting)

mstroeck (411799) | about 9 years ago | (#12036674)

Yes, you are indeed missing something. But it's probably not your fault; the people who taught you neural networks probably didn't know enough about the brain.

The parallelity of human brains is widely and hugely overestimated.

Just think about the fact that you can easily recognize 2 random objects if you are shown them for as little as a second. In this second, there is only enough time for about 100 of your neurons firing. The path through your brain therefore _cannot_ be longer than a dozen neurons or "operations".

Any modern CPU does billions(!) of operations per second. So the comparison really isn't very good.

Re:That's what most people belive - and it's false (1)

FleaPlus (6935) | about 9 years ago | (#12037518)

Just think about the fact that you can easily recognize 2 random objects if you are shown them for as little as a second. In this second, there is only enough time for about 100 of your neurons firing.

I suspect you may have misinterpreted whatever you read that said this. This is actually often cited as evidence in favor of massive parallelization. It doesn't indicate that "100 of your neurons" had to fire in this time, but that a 100-step sequence of neurons (with who-knows-how-many neurons involved in each step) fired in that time frame. Computing complex operations in so few time steps requires oodles of parallelization.
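
For reference, the back-of-the-envelope behind this "100-step" argument, using the usual textbook ballpark constants (all rough):

# ~0.5 s to recognize an object, ~5 ms per neural firing step:
recognition_time = 0.5   # seconds
neural_step = 0.005      # seconds per step
serial_steps = recognition_time / neural_step
print(serial_steps)      # ~100 sequential steps, at most

# Yet ~1e11 neurons with ~1e4 synapses each can do an enormous
# amount of work inside that window, which is only possible if
# each of those ~100 steps is massively parallel.
neurons, synapses_each = 1e11, 1e4
print(f"{neurons * synapses_each * serial_steps:.0e}")  # ~1e17 events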

Re:That's what most people belive - and it's false (0)

Anonymous Coward | about 9 years ago | (#12037529)

The parallelity of human brains is widely and hugely overestimated.

Just think about the fact that you can easily recognize 2 random objects if you are shown them for as little as a second. In this second, there is only enough time for about 100 of your neurons firing. The path through your brain therefore _cannot_ be longer than a dozen neurons or "operations".

Any modern CPU does billions(!) of operations per second. So the comparison really isn't very good.


Non sequitur, unless you can explain how the speed of computer calculations has anything to do with disproving the level of parallel processing (i.e., "parallelity") in the human brain.

Numenta = AI Company? (4, Informative)

bigtallmofo (695287) | about 9 years ago | (#12036246)

It appears the article summary might be misleading. From the first sentence of www.numenta.com:

Numenta is developing a new type of computer memory system modeled after the human neocortex. The applications of this technology are broad and can be applied to solve problems in computer vision, artificial intelligence, robotics and machine learning.

They further go on to say:

Numenta is a technology platform provider rather than an application developer. The Company is creating a scalable software toolkit that will allow developers and partners to configure and adapt HTM systems to particular problems.

My reading of this is that they aren't an AI company; they're just developing a technology that could be used for AI or many, many other uses.

Re:Numenta = AI Company? (2, Interesting)

gl4ss (559668) | about 9 years ago | (#12036427)

To me it looks more like they're developing a system that lets you strap some AI behavior onto whatever project you're working on, so that you can make your systems more adaptable.

Remember that in industry, AI is not really about making self-aware monsters... what they'd be more interested in is machines that adjust their behavior.

Re:Numenta = AI Company? (1)

ghoti (60903) | about 9 years ago | (#12036491)

True, but AI just sounds cooler and evokes images from a certain Steven Spielberg movie. You need to cater for your audience. Just imagine: Apple and Google team up to build AI nanorobots running Linux!

Sounds Similar to Neural Networks (3, Interesting)

Anonymous Coward | about 9 years ago | (#12036253)

By training neurons, they learn to achieve the user's desired result.

Pretty complex material; anyone wanting to delve in should do some reading on Minsky (who proposed neural networks could make dead bodies perform tasks... creepy to say the least): http://en.wikipedia.org/wiki/Marvin_Minsky [wikipedia.org]

When they release a white paper, I'm sure it'll only be the beginning of a prosperous field of study.

~ Jon

A.L.I.C.E Makes for Interesting Conversation (3, Interesting)

Spencerian (465343) | about 9 years ago | (#12036254)

FWIW to ya, A.L.I.C.E [alicebot.org] is a cool webbot AI similar to the ELIZA bots of old, but with some sophistication that allows it to be programmed to answer specific questions and recognize some words and phrases well. It won't pass a Turing test, but hey, it's free.

The webpage above has an animation that appears to have a bot attached to it. Pretty and cool.

Re:A.L.I.C.E Makes for Interesting Conversation (1)

alienz (863592) | about 9 years ago | (#12036781)

LOL. Sorry, but more than one ALICE bot (which uses AIML, the Artificial Intelligence Markup Language) has won the Loebner Competition. The competition uses the Turing Test to select a winner. So... they have been evaluated with the Turing Test. Thanks for the BIG laugh I got out of your statement. BTW, I'm not sure what it is you're saying is free? There are several open source AIML interpreters.

Re:A.L.I.C.E Makes for Interesting Conversation (3, Informative)

Quixote (154172) | about 9 years ago | (#12037062)

Nice try, kid. No, neither A.L.I.C.E. nor anyone else has truly passed the Turing Test (read up about it before commenting further; in particular, read what it means to pass the test). The Loebner Prize is designed to be _like_ the Turing Test, but the winner of the Loebner Prize is not the 'bot that passes the Turing Test; it's the 'bot that scores the most points. So if one 'bot scores 1 point and all the others score 0, then the 'bot with the single point wins.

If a 'bot passes the Turing Test, it will be big news, trust me.

Re:A.L.I.C.E Makes for Interesting Conversation (1)

houghi (78078) | about 9 years ago | (#12037433)

If a 'bot passes the Turing Test, it will be big news, trust me.

What? Al Gore didn't pass?

Re:A.L.I.C.E Makes for Interesting Conversation (1)

FooAtWFU (699187) | about 9 years ago | (#12036969)

I tried it. I'd be a lot more impressed if the programmer could learn the difference between "your", the possessive, and "you're", the contraction. If I say
Your Flash animation on the front page is tacky
I don't want the bot to reply
ALICE: Thanks for telling me that I am flash animation on the front page is tacky.
I realize not everyone has grammar skills, but please, why punish those who do? =b

Better than coffee (4, Funny)

Mrs. Grundy (680212) | about 9 years ago | (#12036259)

Nothing starts my day better than the pleasant scent of vaporware wafting from my computer. We live in a great time. This shows what a kid with nothing but a formalism and a dream can accomplish.

Re:Better than coffee (1)

Bearpaw (13080) | about 9 years ago | (#12036522)

Nothing starts my day better than the pleasant scent of vaporware wafting from my computer. We live in a great time. This shows what a kid with nothing but a formalism and a dream can accomplish.

Yeah, Hawkins has a history of making a big deal of concepts that never get anywhere. I remember a decade ago when there was all sorts of vaporware BS about a programmable handheld electronic organizer that could be operated with a stylus and easily synchronized with a desktop computer. What a farce that turned out to be.

According to the Numenta site, the company is supposedly funded "by its founders, board members, and close associates", but I doubt Hawkins was able to ante up much, given the complete failure of his silly "organizer" thingy.

hierarchical recognition of patterns in temporal (0)

Anonymous Coward | about 9 years ago | (#12036268)

sequences.

Sounds like a job for LISP!

Re:hierarchical recognition of patterns in tempora (0)

Anonymous Coward | about 9 years ago | (#12036360)

Finished RTFA and it still sounds like a job for LISP. There is some good prior art in automated image cataloging. Use architecture OR architectural to narrow down a bit if searching.

Google is your friend. Altavista used to be your friend, and for the life of me I don't get Google's reluctance to make a proximity function available.

Hawkins' Engineering Approach is Clever (5, Interesting)

filmmaker (850359) | about 9 years ago | (#12036271)

In the book, Hawkins remarks that AI researchers often took the misguided approach that intelligence is a set of principles or properties, when in fact it's strictly a matter of behavior. To be intelligent is to behave intelligently. If he's right, then it's the act of being, wherein the brain's primary tool is the continuous analogizing of current circumstances to past situations in order to make good predictive decisions, that constitutes intelligence.

He's the first to say that he's not looking for sentience or to answer the question of sentience, but is instead only looking for a practical engineering approach to building intelligent machines. I think this is doubly clever, since the issue of sentience should not be addressed until well after, as Hawkins often remarks, our own brains are understood in terms of how they operate. Why they operate, what motivates us, and what makes us 'cognitive agents' don't enter the equation with his approach.

Re:Hawkins' Engineering Approach is Clever (1)

aliens (90441) | about 9 years ago | (#12036428)

Agreed. His book is so straightforward it almost makes too much sense. It's quite easy and quick to read. I suggest everyone grab a copy.

Re:Hawkins' Engineering Approach is Clever (2, Insightful)

ch-chuck (9622) | about 9 years ago | (#12036531)

IMO "AI" research is misguided whatever approach you take. As they say, trying to make a machine think is like trying to make a submarine swim. Maybe it's the modern technological equivilent of the ancient search for god - you either never find it but have a big adventure doing so, or realize they were intelligent all along. Heck, a thermostat is "intelligent" - it senses the enviroment, makes a "decision" and takes action. All you can do it just make things more & more self contained, self sufficient, autonomous and independant.

Re:Hawkins' Engineering Approach is Clever (0)

Anonymous Coward | about 9 years ago | (#12036584)


Computer Science, meet Behaviorism.

Behaviorism slowly cracks a wry smile and says, "What took you so long?"

Re:Hawkins' Engineering Approach is Clever (1)

danila (69889) | about 9 years ago | (#12037394)

I am currently reading Mapping the Mind by Rita Carter, a great book about our current (1998) knowledge of the brain. The most valuable thing about this book is its factual nature. Reading about countless experiments, observations, and other facts, usually with explanations of the underlying neural mechanisms behind particular behaviours, helps you realise the nature of the brain: just a complex modular organ that is not unlike computer programs. When you see how easily certain aspects of consciousness (or intelligence) can be broken in people with a minor brain lesion, you very quickly realise that "sentience" doesn't exist; rather, there is a large set of behaviours that together are lumped under this label.

Most people who like to endlessly debate the philosophical aspects of sentience as applied to AI, cryonics, mind uploading, etc. usually have no clue whatsoever about how the brain actually works. The same applies to religious people who, instead of following the time-honoured advice to "know thyself" and realising the peculiarities and shortcomings of their minds, are controlled by these very peculiarities.

In light of this, Hawkins's views appear to be quite reasonable.

His Book is Similar to My Approach, But.. (2, Interesting)

Slicker (102588) | about 9 years ago | (#12037694)

I am actually currently reading his book--started about a month ago and am finishing the last of it now (a little every night before bed, when I'm not too tired).

His approach is surprisingly similar to my own (which I was initially happy to see), but less developed in some important ways. His book sometimes claims to be the first to consider this or that, none of which was new to me... things I've read and/or talked about many times with others.

His approach also has a few critical flaws..

Foremost, invariance (the ability to recognize something regardless of where it is seen) cannot be achieved the way he speculates. I tested this idea (and numerous others) in software years ago.

He illustrates this in the visual cortices where, he suggests, small sub-regions of the brain each learn to recognize something separately but criss-cross to other areas so that recognition can be invariant. I feel stupid admitting that I actually attempted this approach once... but not so alone now that Hawkins is advocating it.

First, each low-level (first to image) sub-region may break against another across the visual field at points within the object: what is going to target them onto the fields? This problem can be satisfied farther up the tree by cross-mixing between regions (and/or layers), but it's not very efficient.

Second, and this is the critical point: the criss-cross between sub-regions does not solve the problem but only moves it to a different space. Both the invariant identification and the location of identification are crucial factors to remember. But with the criss-cross method, there will be oodles and oodles of entities representing the same object, and higher-level processes will need to somehow discover that they are the same thing... every time it's seen in a different place.

Another major problem is how this criss-crossing develops, given universal behavior for all neurons.

Matthew C. Tedder

N-ron: Neurons without you (1, Funny)

pocari (32456) | about 9 years ago | (#12036288)

Wow. I haven't heard anyone use the term "AI" in a long time.

If they're trying to evoke the feeling of being dated and discredited, why not also call the company N-ron?

Re:N-ron: Neurons without you (0)

Anonymous Coward | about 9 years ago | (#12036723)

Maybe an interviewer should ask them: "So Mr. Hawkins, your new company develops engines for video games?"

Foldiak? (3, Informative)

Anonymous Coward | about 9 years ago | (#12036289)

I'm surprised that the short summary, from my brief perusal, does not include reference to work by Peter Foldiak (1991, 199?) and Wallis (1996). Both these authors published numerous papers on temporal and spatial coherence. My MSc in 1996 was also on the same topic, followed by human research on the same problem. All of the computational work was with unsupervised learning algorithms, varying whether the temporal processing was at the input or output stage.

I guess I'll have to read the original paper. However, the notion of temporal processing has been around for a long time.

Note: My own human research has yielded reliable data that addresses the acquisition of invariant object recognition.

what does the neuroscience crowd think? (0)

Anonymous Coward | about 9 years ago | (#12036300)

Is the Hierarchical Temporal Memory (HTM) that he talks about really breaking new ground in neuroscience, or is this something that can be more easily brought to the world of AI on computers?

Sounds like a retirement plan (2, Insightful)

14erCleaner (745600) | about 9 years ago | (#12036311)

I guess building spaceships is old-hat for rich techies now, so he's going to blow his millions on AI. I don't expect anything tangible to come from this.

Re:Sounds like a retirement plan (0)

Anonymous Coward | about 9 years ago | (#12036607)

How did this get modded up as insightful?

The summary is even wrong! They are building a new type of memory, not an AI.

Re:Sounds like a retirement plan (1)

14erCleaner (745600) | about 9 years ago | (#12036768)

It is indeed AI; they're building a memory system that learns from its sensory inputs, and stores things in temporal and spatial dimensions.

Nice hobby, but except for maybe DOD/Homeland Security I don't see it getting any funds. Maybe they can use it to recognize hit TV shows. :)

Now there's a name I haven't heard in a long time (4, Interesting)

Dancin_Santa (265275) | about 9 years ago | (#12036321)

Mentifex. The name alone conjures up flamewars of years past on Usenet.

The big question in AI is whether an AI "mind" is more likely to spring up from a handful of rules, or whether a top-down design will bring it about. Mentifex was always in the latter camp.

Those in the former camp, including the Palm founders in the article, always seemed to be on the verge of something, but never seemed to really get any closer to a "mind" than some fuzzy logic.

We're still a long way off from Number 5 Alive.

Re: Mentifex (1)

FleaPlus (6935) | about 9 years ago | (#12036952)

Yeah, I'm quite surprised that the editors managed to get rid of all the links Mentifex [slashdot.org] undoubtedly made to his AI4U project, or whatever it is.

For those unfamiliar with him, check out the The Arthur T. Murray/Mentifex FAQ [nothingisreal.com]. This guy is one of the kook legends.

From the FAQ:

1.2 Who is Arthur T. Murray and who or what is "Mentifex"?

Arthur T. Murray, a.k.a. Mentifex, is a notorious kook who makes heavy use of the Internet to promote his theory of artificial intelligence (AI). His writing is characterized by illeism, name-dropping, frequent use of foreign expressions, crude ASCII diagrams, and what has been termed "obfuscatory technobabble".

Murray is the author of software which he claims has produced an "artificial mind" and has "solved AI". He has also produced a vanity-published book which he touts as a textbook for teaching AI.

1.3 What are Arthur T. Murray's AI credentials?

None of which to speak.

Murray claims to have received a Bachelor's degree in Greek and Latin from the University of Washington in Seattle in 1968 [24]. He has no formal training in computer science, cognitive science, neuroscience, linguistics, nor any other field of study even tangentially related to AI or cognition. He works as a night auditor at a small Seattle hotel [3, p. 25] and is not affiliated with any university or recognized research institution; he therefore styles himself an "independent scholar". Murray claims that his knowledge of AI comes from reading science fiction novels [39].

1.4 What does Arthur T. Murray do?

Murray is notorious for posting thousands of messages to Usenet promoting his AI software, book, websites, and theory. Most of these messages are massively cross-posted to off-topic newsgroups. Murray takes the mere mention of anything vaguely AI-related as an invitation to post a follow-up directing readers to his own work (e. g., [45]). He claims that people are "crying out" for repetition of his message [46].

Murray also heavily promotes himself on public forums on the web. Message boards, private guestbooks, and collaborative encyclopedias are all considered fair game for the showcasing of Murray's ideas. Murray terms this activity "meme insertion"; most everyone else considers it to be spamming.

Before he had regular access to the Internet, Murray used the US postal system to spread his ideas by mass-mailing prominent AI researchers, computing authors, and sometimes even entire university departments. He boasts that he mailed seven thousand letters in 1989 alone [14].

Murray has also been known to cause disruptions in person. In one notable example, he picketed the 2001 International Joint Conference on Artificial Intelligence [34, 35].

Kooks (1)

MightyMartian (840721) | about 9 years ago | (#12037628)

Ah yes, one of the originals. That's the problem: the Internet now has plenty of trolls, but there just aren't the good ol' fashioned delusional kooks anymore, like Arthur T. Murray, Ed Conrad, Ted Holden, Archimedes Plutonium, and George Hammond.

Hawkins has had brains on the brain for a while (2, Informative)

gearmonger (672422) | about 9 years ago | (#12036358)

It's good to see that we might actually see some commercializable results come out of his research. Jeff's a smart dude and Donna really is an excellent business manager, so I expect interesting things to emerge from this new venture.

I mean, heck, if it gets us even one step closer to having competent automated tech support, I'm all for it.

Place your bets (0)

Anonymous Coward | about 9 years ago | (#12036361)

How long before some corporation with clueless management buys them out and runs another great company they founded into the ground?

My money's on 2 years.

A Breakthrough in AI is just 10 years away... (5, Funny)

Cr0w T. Trollbot (848674) | about 9 years ago | (#12036376)

...just the way it was in 1970.

- Crow T. Trollbot

Re:A Breakthrough in AI is just 10 years away... (2)

WillAffleckUW (858324) | about 9 years ago | (#12036561)

...just the way it was in 1970.

And fusion power is just 20 years away ... just the way it was in 1970.

Re:A Breakthrough in AI is just 10 years away... (1)

mbrewthx (693182) | about 9 years ago | (#12037014)

I remember also hearing that by the turn of the century (last century) the digital office would replace paper.

Re:A Breakthrough in AI is just 10 years away... (1)

WillAffleckUW (858324) | about 9 years ago | (#12037072)

I remember also hearing that by the turn of the century (last century) the digital office would replace paper.

It did, didn't you get the memo? Here, let me print you a copy. Now if I can just get past the giant bag for paper recycling we keep in our office, I can get to the printer ...

Re:A Breakthrough in AI is just 10 years away... (1)

mbrewthx (693182) | about 9 years ago | (#12037182)

Could you print it in color and in triplicate? We just went through a major upgrade where I work and added a color laser printer and a networkable color copier. Everyone was so excited, thinking they would get to print in color, but they were saddened when they found out that the printer and copier color capabilities would be reserved for the graphic artists and page layout artists. Just because you have the ability doesn't mean you need to use it.

Re:A Breakthrough in AI is just 10 years away... (1)

WillAffleckUW (858324) | about 9 years ago | (#12037329)

Actually, we use color and print out six copies of our status reports here at work, with different colors and shading indicating priority and importance, so, yes, I could print it out in color and triplicate.

But we mostly use the color printer for printing out Protein Structures and DNA sequences, where color is very very useful. You ever tried to read a DNA sequence in black and white? Great, now do that 5000 times in a single day ...

So, while some of our files may now be electronic, we still use paper, just as we still are promised AI in ten more years every five years or so, and commercial cheap fusion power in twenty more years every ten years or so. But tomorrow never comes.

Re:A Breakthrough in AI is just 10 years away... (1)

danila (69889) | about 9 years ago | (#12037426)

And in 10 years you will still be a moron repeating tired old soundbites.

Just what we need, an intelligent palm (1)

F3u3r-Fr3i (856738) | about 9 years ago | (#12036377)

"You're doing it all wrong!"

Re:Just what we need, an intelligent palm (0)

Anonymous Coward | about 9 years ago | (#12036398)

"You're doing it all wrong!"

No kidding. I mean, we already have wives and girlfriends for that.

Belief Propagation (4, Insightful)

songbo (614466) | about 9 years ago | (#12036495)

The idea seems simple enough. Create a hierarchical inference structure. Train it on some data. Let the nodes learn which data are most frequent; this forms the basic alphabet set. Propagate this up the hierarchy. Learn the conditional probability distribution. Voila, you have a working visual recognition system. The problem is, the system will be slow unless you have a processor capable of parallel or vector processing. Try implementing the system in Matlab with a 320x200 image, and watch your processor crawl to a halt. Now imagine doing this on 320x200 video, and pray! Well, that's why we need a different processor architecture to make this work. But the theory is simple.
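
The recipe above (learn the frequent patterns, re-encode, propagate up) fits in a few lines for a toy one-dimensional case. A minimal Python sketch of one level of such a hierarchy; my own illustration, not anyone's shipping code:

from collections import Counter

def learn_level(sequences, k):
    """Learn the k most frequent adjacent pairs at one level and
    re-encode each sequence in that new, smaller alphabet."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - 1):
            counts[seq[i:i+2]] += 1
    alphabet = {p: j for j, (p, _) in enumerate(counts.most_common(k))}
    encoded = [tuple(alphabet.get(seq[i:i+2], -1)
                     for i in range(0, len(seq) - 1, 2))
               for seq in sequences]
    return alphabet, encoded

data = [tuple("abab"), tuple("abba"), tuple("baba")]
level1, encoded = learn_level(data, k=4)
print(level1)   # frequent pairs become the next level's "alphabet"
print(encoded)  # inputs re-expressed in the learned alphabet

Stack a few of these levels and learn the conditional probabilities between them and you have the skeleton described above; run it over every patch of a 320x200 image and you also get the processor-melting part.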

Re:Belief Propagation (0)

Anonymous Coward | about 9 years ago | (#12037401)

Hawkins and George differ by incorporating the notion of transitions and motion into the hierarchical model. There is also the notion of "surprise": upper-level nodes send down predictions of what should occur next, and this is incorporated into the model. Not sure if they've got this in their system yet, but it certainly makes sense.

I'm not a vision guy, but it makes sense in that you're looking for features that are invariant under motion, which is similar to what humans do. From what I understand, when we view an image, our eyes are actually performing tons of micro-movements across it. Apparently studies where the eyeball was "frozen" (no idea how they did that) showed that recognition of characters and icons goes to nil without that motion.

Anyway, "vision under motion" seems to make sense, but I heard that vision researchers did not go that route mostly due to the realities of the data they had to work with, namely cruddy single-frame shots from surveillance satellites and the like.

Visual Recognition .. um, isn't this Terminator? (1)

WillAffleckUW (858324) | about 9 years ago | (#12036528)

Just pointing this out.

How do we know that the DOD isn't funding this as yet another excuse to reduce our privacy levels to those of Russia?

Misguided ! -- No mention of Space-Variance (2, Insightful)

Wisp (1763) | about 9 years ago | (#12036629)

After reading the tech report (note: not a published paper in a respected journal), it's clear that they are not presenting anything new here.

It's surprising that a) it's news and b) anyone is founding a company based on these ideas, since they have to date not been successful in solving "the vision problem."

First, the main ideas that they use have had a long history in visual modelling and statistical pattern recognition. The assertion that visual processing operates so cleanly at "levels" is far from clear, although it is an idea with quite a long history; see Marr, for instance, or spatial frequency channels as another example of competing partitions of function.

One main issue is that they never mention what an explicit representation of a visual object actually is, let alone how such representations might be reflected in cortex. Their approach follows the typical learning ideas of the perceptron, etc., but those systems are known to be unstable!

More seriously, their whole argument doesn't demonstrate that they understand the realities of the structure and functional architecture of visual cortex. That the visual system is highly space-variant is a fact that makes simplistic rectilinear statistical pattern matching a daunting problem. Although there _may be_ an invariant representation, the jury is still out, since it's far from clear how orientation maps, ocular dominance columns, and the other peculiarities of the visual areas might produce such a thing when you foveate.

In summary, it seems much more like these guys were brought on board for advertising fanfare.

Hey if you can't sustain profitability (2, Funny)

ClosedSource (238333) | about 9 years ago | (#12036653)

of something as complex as a PDA, try something really simple like AI.

Re:Hey if you can't sustain profitability (1)

Anita Coney (648748) | about 9 years ago | (#12036701)

Are you kidding?! Once AI is created, Jeff Hawkins will take over the world and rule as our supreme overlord. At that time "profit" will become as meaningless as "justice" or "freedom."

That is, until the AI gets intelligent enough to kill him. That's when the REAL fun will begin.

Credibility? (4, Interesting)

shashark (836922) | about 9 years ago | (#12036682)

None of the founders [numenta.com] of Numenta other than Jeff Hawkins has any experience in AI or, for that matter, any background in hardcore computer science.

Dileep George [numenta.com] is an Electrical Engineering graduate, while CEO Donna Dubinsky is a hardcore salesperson and holds an MBA. Interestingly, the page also mentions that Jeff Hawkins "currently serves as Chief Technology Officer at palmOne, Inc [palmone.com]". Fishy!

Next Generation AI ? Who are we kidding ?

Remind me never to invest in any of your startups (1)

yoz (3735) | about 9 years ago | (#12037421)

"Dubinsky holds a B.A. from Yale University in History, and an M.B.A. from the Harvard Business School. She currently serves as a director of palmOne and of Intuit Corporation."

Sounds like a suitable CEO to me. You hire CEOs for their management capabilities. You don't hire them to do your programming.

"Dileep George was a Graduate Research Fellow at Redwood Neuroscience Institute, and a graduate student in Electrical Engineering at Stanford University. His research interests include neuronal coding, information processing, and the modeling of cortical functions."

Sounds like a suitable Chief Engineer to me. I have no idea where you got that "no background in hardcore computer science" from, unless you're unbelievably narrow-minded about skill domains.

Next Generation AI?

Oh, wait, you haven't actually read what the company does. Okay, that explains it.

On Intelligence is a GREAT read (2, Insightful)

xtal (49134) | about 9 years ago | (#12036755)

If you are at all interested in your brain, artificial intelligence, and artificial thought - you owe it to yourself to get a copy of this book.

I've been experimenting with neural networks implemented on FPGAs for awhile as a hobby - not much commercial interest in these systems just yet - but there is a lot of interesting work being done.

Remember 15 years ago, when people thought it would take decades and decades to sequence the human genome? Then someone came along and figured out a much faster technique. This same kind of thing is starting to happen in artificial intelligence; people from backgrounds OTHER than computational AI and biology are starting to get involved, and the new perspectives have brought new ideas IMHO.

Anyway, if you're interested in AI, get Hawkins' book 'On Intelligence'. It's damn good, one of the best I've read in the genre, and the references in the book will save you a lot of time delving further.

Re:On Intelligence is a GREAT read (2, Insightful)

DoctoRoR (865873) | about 9 years ago | (#12036953)

Remember 15 years ago, when people thought it would take decades and decades to sequence the human genome? Then someone came along and figured out a much faster technique. This same kind of thing is starting to happen in artificial intelligence; people from backgrounds OTHER than computational AI and biology are starting to get involved, and the new perspectives have brought new ideas IMHO.

I think there's a lot of hubris on this board. The brain is a very complex organ. Solving it will take hundreds of mental leaps equivalent to shotgun sequencing. And it's not correct to say that brain science is only now starting to get people of different backgrounds. This field has been highly interdisciplinary for decades: physicists, philosophers, psychologists, computer scientists, linguists, anthropologists, etc, etc.

The work Hawkins describes has roots in research on perceptrons back in the 1950s. There was a wave of resurgence in those ideas in the 1980s, probably due to the backpropagation algorithm. Although scientific research progresses along, popularity seems to have peaks every couple of decades, so maybe we are due.

Numentia = Cyberdyne (0)

Anonymous Coward | about 9 years ago | (#12036854)

HTM = neuralnet processor

Better start learning survival techniques. The machines are coming.

AI? Now, where have I seen this before .... ? (1)

zazelite (870533) | about 9 years ago | (#12037058)

Wow, yet another proposed prosthetic for human cognition. In terms of advancement of the field, this is no more revolutionary than, say, the intelligent spam filter. I'd wager that both these technologies (spam filtering and visual processing) are approaching and will continue to approach the human proficiency level asymptotically, and a rather shallow asymptote at that. So all you brainiacs can relax - no one's beaten you to the punch ... yet.

Reviews of "On Intelligence" (2, Informative)

FleaPlus (6935) | about 9 years ago | (#12037427)

As the submission noted, this work will be building on what Hawkins wrote about in his recent book, On Intelligence [wikipedia.org]. The companion web site for the book is here: [onintelligence.org]

There are also some reviews of the book:
http://blogger.iftf.org/Future/000605.html [iftf.org]
http://www.computer.org/computer/homepage/0105/random/index.htm [computer.org]
(By Bob Colwell, who was Intel's chief IA32 architect)
http://www.techcentralstation.com/112204B.html [techcentralstation.com]
http://www.corante.com/brainwaves/archives/026649.html [corante.com]

A quote from his book:

The agenda for this book is ambitious. It describes a comprehensive theory of how the brain works. It describes what intelligence is and how your brain creates it. The theory I present is not a completely new one. Many of the individual ideas you are about to read have existed in some form or another before, but not together in a coherent fashion. This should be expected. It is said that "new ideas" are often old ideas repackaged and reinterpreted. That certainly applies to the theory proposed here, but packaging and interpretation can make a world of difference, the difference between a mass of details and a satisfying theory. I hope it strikes you the way it does many people. A typical reaction I hear is, "It makes sense. I wouldn't have thought of intelligence this way, but now that you describe it to me I can see how it all fits together." With this knowledge most people start to see themselves a little differently. You start to observe your own behavior saying, "I understand what just happened in my head." Hopefully when you have finished this book, you will have new insight into why you think what you think and why you behave the way you behave. I also hope that some readers will be inspired to focus their careers on building intelligent machines based on the principles outlined in these pages. ...

Weren't neural networks supposed to lead to intelligent machines?
Of course the brain is made from a network of neurons, but without first understanding what the brain does, simple neural networks will be no more successful at creating intelligent machines than computer programs have been.

Why has it been so hard to figure out how the brain works?
Most scientists say that because the brain is so complicated, it will take a very long time for us to understand it. I disagree. Complexity is a symptom of confusion, not a cause. Instead, I argue we have a few intuitive but incorrect assumptions that mislead us. The biggest mistake is the belief that intelligence is defined by intelligent behavior.

What is intelligence if it isn't defined by behavior?
The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence. I will describe the brain's predictive ability in depth; it is the core idea in the book.

How does the brain work?
The seat of intelligence is the neocortex. Even though it has a great number of abilities and powerful flexibility, the neocortex is surprisingly regular in its structural details. The different parts of the neocortex, whether they are responsible for vision, hearing, touch, or language, all work on the same principles. The key to understanding the neocortex is understanding these common principles and, in particular, its hierarchical structure. We will examine the neocortex in sufficient detail to show how its structure captures the structure of the world. This will be the most technical part of the book, but interested nonscientist readers should be able to understand it.

What are the implications of this theory?
This theory of the brain can help explain many things, such as how we are creative, why we feel conscious, why we exhibit prejudice, how we learn, and why "old dogs" have trouble learning "new tricks." I will discuss a number of these topics. Overall, this theory gives us insight into who we are and why we do what we do.

Can we build intelligent machines and what will they do?
Yes. We can and we will. Over the next few decades, I see the capabilities of such machines evolving rapidly and in interesting directions. Some people fear that intelligent machines could be dangerous to humanity, but I argue strongly against this idea. We are not going to be overrun by robots. It will be far easier to build machines that outstrip our abilities in high-level thought such as physics and mathematics than to build anything like the walking, talking robots we see in popular fiction. I will explore the incredible directions in which this technology is likely to go.

My goal is to explain this new theory of intelligence and how the brain works in a way that anybody will be able to understand. A good theory should be easy to comprehend, not obscured in jargon or convoluted argument. I'll start with a basic framework and then add details as we go. Some will be reasoning just on logical grounds; some will involve particular aspects of brain circuitry. Some of the details of what I propose are certain to be wrong, which is always the case in any area of science. A fully mature theory will take years to develop, but that doesn't diminish the power of the core idea.


The actual framework Hawkins proposes is pretty cool.

zerg (2, Interesting)

Lord Omlette (124579) | about 9 years ago | (#12037502)

I predict that the first AI they produce will work so well that no one who buys one will ever need a replacement, so the company will spiral into obsolescence while Microsoft et al. make a mint on AIs that are much easier to develop for...

Not A.I., Neuroscience (0)

Anonymous Coward | about 9 years ago | (#12037645)

Jeff's company isn't doing AI; they are doing neuroscience. They are investigating how the human brain processes visual information.