The New AI: Where Neuroscience and Artificial Intelligence Meet

Soulskill posted about a year and a half ago | from the on-an-internet-dating-site-like-everyone-else dept.

An anonymous reader writes "We're seeing a new revolution in artificial intelligence known as deep learning: algorithms modeled after the brain have made amazing strides and have been consistently winning both industrial and academic data competitions with minimal effort. 'Basically, it involves building neural networks — networks that mimic the behavior of the human brain. Much like the brain, these multi-layered computer networks can gather information and react to it. They can build up an understanding of what objects look or sound like. In an effort to recreate human vision, for example, you might build a basic layer of artificial neurons that can detect simple things like the edges of a particular shape. The next layer could then piece together these edges to identify the larger shape, and then the shapes could be strung together to understand an object. The key here is that the software does all this on its own — a big advantage over older AI models, which required engineers to massage the visual or auditory data so that it could be digested by the machine-learning algorithm.' Are we ready to blur the line between hardware and wetware?"
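The layered idea in the summary boils down to "matrix multiply, nonlinearity, repeat." A minimal sketch in Python; the layer sizes and random weights here are arbitrary placeholders rather than anything from the article, since real systems learn the weights from data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy "image": a flat vector of 64 pixel intensities.
pixels = rng.random(64)

# Three stacked layers with made-up random weights. The idea from the summary:
# an early layer could respond to edge-like patterns, the next could combine
# edges into shapes, the last could combine shapes into object scores. With
# random weights nothing meaningful is detected; training is what gives the
# layers that meaning.
W1 = rng.normal(size=(32, 64))   # pixels -> "edge" features
W2 = rng.normal(size=(16, 32))   # edges  -> "shape" features
W3 = rng.normal(size=(4, 16))    # shapes -> "object" scores

edges = sigmoid(W1 @ pixels)
shapes = sigmoid(W2 @ edges)
objects = sigmoid(W3 @ shapes)

print(objects)  # four scores; after training these would rate object classes
```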

no (5, Insightful)

Anonymous Coward | about a year and a half ago | (#43660197)

Are we ready to blur the line between hardware and wetware?

No. You can't ask that every time you find a slightly better algorithm. Ask it when you think you understand how the mind works.

Re:no (-1, Offtopic)

Anonymous Coward | about a year and a half ago | (#43660339)

Are we ready to blur the line between hardware and wetware?

No. You can't ask that every time you find a slightly better algorithm. Ask it when you think you understand how the mind works.

but any dude can boink yo mama!

Re:no (0)

smitty_one_each (243267) | about a year and a half ago | (#43660611)

No, he's married with two kids: http://en.wikipedia.org/wiki/Yo_yo_ma [wikipedia.org] . Wait, what?

Re:no (0)

ebno-10db (1459097) | about a year and a half ago | (#43660815)

Damn fine cellist though.

fly brains (3, Interesting)

femtobyte (710429) | about a year and a half ago | (#43660227)

Are we ready to blur the line between hardware and wetware?

We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster. Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

Saving everyone a few seconds on wiki (5, Informative)

Dorianny (1847922) | about a year and a half ago | (#43660329)

Drosophila melanogaster is commonly known as the fruit fly. Its brain has about 100,000 neurons. The human brain averages 85,000,000,000.

Re:Saving everyone a few seconds on wiki (2)

noshellswill (598066) | about a year and a half ago | (#43660519)

With two hundred time-changing connections per neuron, a cheesy number like 10^10 really does not do numeric justice to Gödel's impossibility theorem.

Re:Saving everyone a few seconds on wiki (1)

Anonymous Coward | about a year and a half ago | (#43660545)

Your point being: if we can recreate a 100,000-neuron brain, it will be a tiny amount of time before we can model a full human brain and beyond. Do you really think AI will not follow a Moore-type law? It will probably be even more aggressive.

Re:Saving everyone a few seconds on wiki (3, Interesting)

femtobyte (710429) | about a year and a half ago | (#43660717)

Do you really think AI will not follow a Moore-type law? It will probably be even more aggressive.

I personally expect Moore's Law to set a lower bound on the time needed for advancement. Doubling every 18-24 months means 20-30 years to get human-sized big ol' clusters of neurons. However, there's also so much work to do on understanding the specifics of how to get particular results (e.g. language and "symbolic thought") instead of just gigantic twitching masses of incoherent craziness.

In order to try out ideas and test hypotheses, you really need to be able to run a whole bunch of human-brain-scale simulators at far higher speed than the human brain (learning a language takes a couple years for a developing human brain, and you're very unlikely to get this "right" with only one or two tries). I think once we have 10^3 - 10^6 times more "raw neuron simulation" processing power than a single human brain (so another 10 to 40 years after the 20-30 years for single-brain neuron simulations), then we'll be able to crank out simulations of the "hard stuff" fast enough to make rapid progress on the high-level issues. Of course, this means once you do have a couple "breakthroughs" in generating self-aware, learning, human-language-understanding machines, you're very suddenly dropped into having far-exceeding-human artificial intelligences, without so much of a slow progression through "retarded chimpanzee" stages first.
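For anyone who wants to plug in their own numbers, the doubling arithmetic behind those ranges is a one-liner. The 18-24 month doubling period and the 10^3 - 10^6 headroom factors are the figures from the post above, not established facts:

```python
import math

def years_to_scale(factor, months_per_doubling):
    """Years of Moore's-law-style doubling needed to gain `factor` more capacity."""
    return math.log2(factor) * months_per_doubling / 12.0

# The 10^3 - 10^6x extra simulation capacity discussed above, at 18- and
# 24-month doubling periods:
for factor in (1e3, 1e6):
    for months in (18, 24):
        print(f"{factor:.0e}x more capacity: ~{years_to_scale(factor, months):.0f} years "
              f"at one doubling per {months} months")
```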

Re:Saving everyone a few seconds on wiki (4, Funny)

Concerned Onlooker (473481) | about a year and a half ago | (#43660955)

" However, there's also so much work to do on understanding the specifics of how to get particular results (e.g. language and "symbolic thought") instead of just gigantic twitching masses of incoherent craziness."

In the meantime we'll just have to settle for modeling a teenager.

Re:Saving everyone a few seconds on wiki (1)

Swiper (1336263) | about a year and a half ago | (#43662197)

Are you crazy?! That's just a whole bunch of exceptions and boundary cases that change every day. I'd rather model a 90 year old, at least they've reached a stable state... Mind you.....the teenager does just simply sound like a randomiser now, don't expect any sane response to any type of input, yep, go for it!

Re:Saving everyone a few seconds on wiki (-1)

Anonymous Coward | about a year and a half ago | (#43662247)

This is why Zimmerman should be let off the hook. Treyvon Martin was a regular user of a DMT cough medicine based drink called "lean". His skittles and ice tea purchase was two of the three ingredients needed to make "lean", the other being the cough syrup itself. Furthermore, the toxicity reports done on his liver and brain are consistent with heavy DMT abuse with some acetaminophen poisoning thrown in.

Re: Saving everyone a few seconds on wiki (1)

Anonymous Coward | about a year and a half ago | (#43662697)

DMT cough medicine? Damn. Think you mean DXM.

Re:Saving everyone a few seconds on wiki (4, Insightful)

Shavano (2541114) | about a year and a half ago | (#43661805)

That presumes that the approach you take is going to be using the same kind of models you have now and just running them on bigger, faster hardware. If our models lead us to *understanding* of how brains work, we could get there a good deal faster and find that present day computers are plenty complex to handle cognition on a human-equivalent level.

Take Google self-driving cars for example. Driving a car is definitely an AI task, and it can be handled by present day computers. It's a subset of the tasks humans can learn. Google didn't do it by modeling the part of your brain that drives a car. Hell, we don't even know what subset of our brain is sufficient to drive a car. They did it by understanding how to drive a car.

What I'm proposing is that human-level AI won't be created first by modeling a whole brain. It will more likely be created by scientists who, by studying the brain, come to understand the big-picture behavior of brain subsystems, and who model those subsystems at a behavioral level rather than at a neural-network level.

Re:Saving everyone a few seconds on wiki (3, Interesting)

femtobyte (710429) | about a year and a half ago | (#43662157)

As I agree in another branch of this thread, we probably will find "non-brainlike" methods to generate all sorts of "intelligent" behavior, continuing the same type of progress (not particularly worrying about biologically accurate brain models) that gives us self-driving cars. On the other hand, it's a separate worthwhile field of study to learn how *our* brains work, through models that capture key features of biological brains.

If our models lead us to *understanding* of how brains work, we could get there a good deal faster and find that present day computers are plenty complex to handle cognition on a human-equivalent level.

Maybe; maybe not. Our understanding might well *not* allow much brain function (above the Drosophila level, which is about appropriate for a moderate sized supercomputer today) to be vastly simplified for lesser computing resources --- maybe you do *need* zillions of complexly interlinked neurons to see more interesting higher level behaviors (in a brain-like manner, not by creating non-brainlike intelligences like the self-driving car that have similar "skills"). The brain may not neatly "factor" into simple-to-computationally-model "subsystems". If you look at, e.g., chemical pathway maps for how a cell functions, everything is tangled together with everything else --- biological systems often evolve "spaghetti code" solutions to problems, without the neatly defined boundaries and modularity that a "top down" systems designer would impose.

Re:Saving everyone a few seconds on wiki (1)

ebno-10db (1459097) | about a year and a half ago | (#43660799)

Do you really think AI will not follow a moore type law?

Why should it? Moore's law applies to one specific technology, which happens to be a technology that scaled/improved more than almost any other in history. There used to be a popular analogy: if cars improved as much as chips have, a car would cost a nickel, travel a million miles an hour and go around the world twice on a teaspoon of (low octane) gas, or something like that. Unfortunately most technologies don't improve that much. If neural nets are implemented on chips, they'll run into the limits of Moore's law too (I know, they've been predicting the end of Moore's law for many years, but past performance is no guarantee of future success).

Re:Saving everyone a few seconds on wiki (2)

Shavano (2541114) | about a year and a half ago | (#43661839)

The way I heard it, cars would cost a nickel, travel around the world on a teaspoon of gas, have a top speed of 30 trillion miles per second (never mind the speed of light) and spontaneously lock up their controls while driving at highway speeds.

Based on Moore's law type expansion of capabilities over a century.

Re:Saving everyone a few seconds on wiki (0, Troll)

narcc (412956) | about a year and a half ago | (#43660915)

Your point being: if we can recreate a 100,000-neuron brain, it will be a tiny amount of time before we can model a full human brain and beyond. Do you really think AI will not follow a Moore-type law? It will probably be even more aggressive.

Neat. Cargo-cult AI is still around.

Forget the long-standing problems that make this approach a non-starter. Technology is magical! The singularity is near!

Re:Saving everyone a few seconds on wiki (4, Interesting)

Black Parrot (19622) | about a year and a half ago | (#43661333)

What precisely are those long-standing problems?

I ask because I actually know people who are starting to demonstrate the rudiments of intelligence using simulations of ~100,000 neurons.

Per upthread, that's a long way from a brain, and in fact we don't even know how all of the brain is wired, let alone how it works. But you might want to consider this [engineerin...lenges.org] and this [wikipedia.org] and this [nature.com] .

If they're attempting the impossible, you should let them know not to waste their money.

Re:Saving everyone a few seconds on wiki (0)

narcc (412956) | about a year and a half ago | (#43661595)

You can start with the symbol grounding problem and work your way up.

Oh, to your links, there's also a reason I called it "cargo-cult AI". Should be obvious why, yes?

Re:Saving everyone a few seconds on wiki (1)

Black Parrot (19622) | about a year and a half ago | (#43661901)

Are you claiming that symbol grounding is a non-solvable problem?

Re:Saving everyone a few seconds on wiki (3, Insightful)

narcc (412956) | about a year and a half ago | (#43662121)

I'm saying that it's unsolved (er, well, I thought that would go without saying!) and that, at present, it and similar problems strongly suggest that this type of approach is fundamentally flawed.

My main point was that it's unreasonable to believe that those problems will be solved by magic and wishful thinking. This cargo-cult approach to AI purports to do just that. (If we just ignore the problems hard enough, technology will deliver us!)

Re:Saving everyone a few seconds on wiki (1)

cytg.net (912690) | about a year and a half ago | (#43662577)

What precisely are those long-standing problems?

I ask because I actually know people who are starting to demonstrate the rudiments of intelligence using simulations of ~100,000 neurons.

Per upthread, that's a long way from a brain, and in fact we don't even know how all of the brain is wired, let alone how it works. But you might want to consider this [engineerin...lenges.org] and this [wikipedia.org] and this [nature.com] .

If they're attempting the impossible, you should let them know not to waste their money.

Autistic intelligence has been done for years with neural nets; the limitation is abstracting this basic equation-estimation technique (which is all the neural construct really does), and I guess the article describes a new way to overcome some obstacles.

IMHO the biggest challenge is really training data; you don't have enough of it. Take a project like OpenCog: they've done simulations in Second Life, and this is probably the best route available for the moment. The best route overall would be to stuff the AI in an actual body, because there is more training data in the real world than anywhere else.

Re:Saving everyone a few seconds on wiki (3, Insightful)

TapeCutter (624760) | about a year and a half ago | (#43661879)

Forget the long-standing problems that make this approach a non-starter.

Did you actually watch IBM's "Watson" beat the snot out of the best Jeopardy champions humanity could muster? I can't believe that anyone who knows anything about computers and AI is not blown away by Watson's demonstration; I know I was. My significant other, who has a PhD in marketing, just shrugged and said "it's looking up the answers on the internet, so what?". In other words, if you're not impressed by Watson's performance, it's because you have no idea how difficult the problem is.

Re:Saving everyone a few seconds on wiki (1)

narcc (412956) | about a year and a half ago | (#43662045)

Watson is neat. It's also completely irrelevant to both the topic and my post.

Re:Saving everyone a few seconds on wiki (0)

Anonymous Coward | about a year and a half ago | (#43662273)

Did you actually watch IBM's "Watson" beat the snot out of the best Jeopardy champions humanity could muster?

Yeah but, I'm not convinced Watson was superior to our best Jeopardy challengers. It seemed to me that Watson had an unfair advantage of some sort with buzzing in. Many times Jennings and the other guy were pressing the button and appeared frustrated. I would have liked to have known how often the two humans knew the answer but failed to beat Watson to the punch.

Still an impressive computing feat. Watson is still a far cry from AGI.

Re:fly brains (2)

Tailhook (98486) | about a year and a half ago | (#43660351)

Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

Since what folks have in mind by "AI" changes to exclude anything within the capability of machines, we're implicitly ready for whatever emerges.

Re:fly brains (1)

femtobyte (710429) | about a year and a half ago | (#43660465)

AI certainly is a moving target --- I remember when "play chess at an advanced human level" was considered an (unachievable) goalpost for "real AI." On the other hand, I'm not certain we're ready by default for the capabilities of machines, intelligent or no.

Re:fly brains (0)

Anonymous Coward | about a year and a half ago | (#43662309)

AI certainly is a moving target --- I remember when "play chess at an advanced human level" was considered an (unachievable) goalpost for "real AI." On the other hand, I'm not certain we're ready by default for the capabilities of machines, intelligent or no.

Yes, then again the early AI researchers greatly underestimated the difficulty of other areas, like navigating an environment and object recognition. They mistakenly thought chess was one of the pinnacles of human intelligence. It's not. It's also more amenable to brute force than a game like Go.

Re:fly brains (5, Insightful)

AthanasiusKircher (1333179) | about a year and a half ago | (#43660735)

I say all of the following as a big fan of AI research. I just think we need to drop the rhetoric that we're somehow recreating brains -- why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

Anyhow...

We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster.

Interesting wording. Let's take this apart:

  • now: the present
  • almost convincingly: not really "convincingly" then, right? Since "convincingly" isn't really a partial thing -- evidence either "convinces" you or it doesn't. If I say study data "almost convinced me," I usually mean it had argument and fluff that made it appear good, but it turned out to be crap in the end
  • partially recreate: yeah, it's pretty "partial," and you have to read "recreate" as something more like "make a very inexact blackbox model that probably doesn't work at all the same but maybe outputs a few things in a similar fashion"
  • functions: this word is chosen wisely, since the "neural net" models are really just algorithms, i.e., functions, which probably don't act anything like real "neurons" in the real world at all

In sum, we have a few algorithms that seem to take input and produce some usable output in a manner very vaguely like a few things that we've observed in the brains of fruit flies. Claiming that this at all "recreates" the "wetware" implies that we understand a lot more about brain function and that our algorithms ("artificial neurons"? hardly) are a lot more advanced and subtle than they are.

Re:fly brains (3, Interesting)

femtobyte (710429) | about a year and a half ago | (#43660835)

Yes, I intended my very weasel-worded phrase to convey that even our present ability to "understand" Drosophila melanogaster is rather shallow and shaky --- your analysis of my words covers what I meant to include pretty well.

why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

I don't think we do. In fact, machines acting in utterly un-brainlike manners are extremely useful to me *today* --- when I want human-style brain functions, I've already got one of those installed in my head; computers are great for doing all the other tasks. However, making machines that work like brains might be the only way to understand how our own brains work --- a separate but also interesting task from making machines more useful at doing "intelligent" work in un-brainlike manners.

Re:fly brains (2)

AthanasiusKircher (1333179) | about a year and a half ago | (#43660917)

why do we feel the need to claim that intelligent machines would need to be similar to or work like real brains?

I don't think we do.

I absolutely understand what you mean here. I don't think most AI researchers actually think they are "recreating wetware" explicitly or that the "artificial neurons" in "neural nets" are really anything like real neurons.

On the other hand, a lot of the nomenclature of AI seems to deliberately try to make analogies -- "deep learning," "neural nets," "blur the line between hardware and wetware," etc. -- to human or animal brain functions.

Hence my rhetorical question about why we feel the need to claim that our intelligent machines work like real brains. AI researchers clearly know that they aren't really "recreating" things, and yet we keep developing new nomenclature that makes it sound like we are... the whole Slashdot summary for this article is effectively making these comparisons.

Re:fly brains (3, Interesting)

femtobyte (710429) | about a year and a half ago | (#43661163)

I suppose some of the urge to "anthropomorphize" AIs comes from the lack of precedent, and even understanding of what is possible, outside of the two established categories of "calculating machine" and "biological brain." Some tasks and approaches are "obviously computery": if you need to sort a list of a trillion numbers, that's clearly a job for an old-fashioned computer with a sort algorithm. On the other hand, other tasks seem very "human": say, having a discussion about art and religion. There is some range of "animal" tasks in-between, like image recognition and navigating through complex 3D environments. But we have no analogous mental category to non-biological "intelligent" systems --- so we think of them in terms of replicating and even being biological brains, without appropriate language for other possibilities.

Re:fly brains (3, Interesting)

White Flame (1074973) | about a year and a half ago | (#43660927)

Biological neurons are far more complex than ANN neurons. At this point it's unknown if we can make up for that lack of dynamic state by using a larger ANN, or by increasing the per-neuron complexity to try to match the biological counterpart. I do have my doubts about the former, but that doubt is merely intuition, not science. We simply don't know yet.
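For scale, the entire state of a typical ANN "neuron" is a weight vector and a bias. A minimal sketch; the sigmoid and the specific numbers are arbitrary choices for illustration, not a model of any particular network:

```python
import math

def ann_neuron(inputs, weights, bias):
    """A standard artificial 'neuron': a weighted sum pushed through a sigmoid.
    No membrane dynamics, no spike timing, no adaptation -- this is all there is."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(ann_neuron([0.2, 0.9, 0.4], weights=[1.5, -2.0, 0.7], bias=0.1))
```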

as is whether this is what folks have in mind by "AI."

In my own studies, this isn't the branch of AI I'm particularly interested in. I don't care about artificial life, or structures based around the limitations of the biological brain. I'd love to have a system with perfect recall that could converse about its knowledge and thought processes, such that conversational feedback would have immediate application without lengthy retraining, and that could tirelessly and meticulously follow instructions given in natural language.

I don't see modeling biological brains as being a workable approach to that view of AI, except maybe the "tirelessly" part. I'm more interested in cognitive meta-models of intelligence itself than the substrate on which currently known knowledge happens to reside.

Re:fly brains (1)

foobsr (693224) | about a year and a half ago | (#43661653)

I'd love to have a system with perfect recall that could converse about its knowledge and thought processes, such that conversational feedback would have immediate application without lengthy retraining, and that could tirelessly and meticulously follow instructions given in natural language.

I see a combinatorial explosion at the horizon.

CC.

Re:fly brains (1)

White Flame (1074973) | about a year and a half ago | (#43661867)

Right, that's why dealing with such things requires intelligence, and anything that could do such a thing would generally be considered intelligent. It's also why symbolic AI has failed to produce any general intelligence: it simply cannot scale. In order for a system to exhibit such behavior, it needs to adaptively and "intelligently" prioritize what it's doing and what it's working on, as well as predictively index and preprocess information, in order to even begin to achieve any sense of tractability.

Re:fly brains (1)

foobsr (693224) | about a year and a half ago | (#43662289)

So, more detail.

perfect recall

Conflicts with prioritizing if you have provisions for priority zero (forgetting, irrelevant if the link goes away or the information is erased).

could converse about its knowledge and thought processes

Telling more than we can know (Nisbett & Wilson, 1977, Psychological Review, 84, 231–259), protocol analysis, expert interviews: evidence that this is at least not always possible. My hypothesis is that too much metaprocessing would lead to a deadlock.

conversational feedback would have immediate application without lengthy retraining

Would imply that the system immediately trusts. Would probably be rather self destructive, thus not intelligent.

tirelessly and meticulously follow instructions given in natural language

The antithesis of intelligent behaviour?

So now I say that I see a recursive combinatorial explosion happening during conflict resolution.

CC.

Re:fly brains (1)

c0lo (1497653) | about a year and a half ago | (#43662067)

Are we ready to blur the line between hardware and wetware?

We can now almost convincingly partially recreate the wetware functions of Drosophila melanogaster. Whether we're *ready* for this is another question; as is whether this is what folks have in mind by "AI."

Wake me up when the AI is just as complex as my guts [wikipedia.org] (10^8 neurons, the same magnitude as the cortex of a cat [wikipedia.org]), and then I'll ask them whether they feel ready for the AI.

Geoffrey Hinton (4, Informative)

IntentionalStance (1197099) | about a year and a half ago | (#43660281)

Is referenced in the article as the father of neural networks.

He has a course on them at coursera that is pretty good.

https://www.coursera.org/course/neuralnets [coursera.org]

Re:Geoffrey Hinton (4, Informative)

Baldrson (78598) | about a year and a half ago | (#43660535)

I've had "Talking Nets: An Oral History of Neural Networks" for several weeks on interlibrary loan. It interviews 17 "fathers of neural nets" (including Hinton) and it isn't even a complete set of said "fathers".

Look, this stuff goes back a long way and has had some hiccups along the way, like the twenty-year period when it was treated with little more respect by the scientific establishment than cold fusion has been for the last twenty years. There are plenty of heroics to go around.

I can recommend the book highly.

Re:Geoffrey Hinton (1)

naroom (1560139) | about a year and a half ago | (#43660865)

Thanks for the rec. It's very cheap used on Amazon right now.

Re:Geoffrey Hinton (1)

Sociable Scientician (1606685) | about a year and a half ago | (#43661461)

By the way, Dr. Hinton got his Ph.D. in psychology. I like to point that out for all who think experimental psychology is a wishy-washy, unscientific field for jackasses who couldn't hack it in 'hard science.'

Re:Geoffrey Hinton (1)

citizenr (871508) | about a year and a half ago | (#43662739)

If you really want to learn about _working_ AI and not "when I was a boy we did it in the snow, both ways, uphill" then do
https://class.coursera.org/ml/class [coursera.org]
Machine Learning by Andrew Ng.
After that you can do
http://work.caltech.edu/telecourse.html [caltech.edu]
Learning from data by Yaser Abu-Mostafa

Half of Hinton's course was about history and what didn't work in AI. It's great to know those things if you have an interest in the field, but it's not something you should start with (snorefest).

really?? (1)

Anonymous Coward | about a year and a half ago | (#43660461)

Neural networks? Is it news?

What year is it?

Re:really?? (2)

Black Parrot (19622) | about a year and a half ago | (#43661013)

Neural networks? Is it news?

No, it's misrepresentation. This isn't any more akin to neuroscience than any of the other techniques used with artificial neural networks.

It will be a great thing, though, if it lives up to expectations.

The stank of (poorly) attempted hype (2)

oldhack (1037484) | about a year and a half ago | (#43660475)

For such a blatant, transparent, promotional, hyperbolic "story", I wish soulskill would at least throw in a sarcastic jab or two to balance out the stench a bit.

Re:The stank of (poorly) attempted hype (4, Interesting)

ebno-10db (1459097) | about a year and a half ago | (#43660609)

For such a blatant, transparent, promotional, hyperbolic "story", I wish soulskill would at least throw in a sarcastic jab or two to balance out the stench a bit.

Agreed. This story smells of the usual Google hype.

I think it's great that there is more research in this area, but "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI" suggests that Google is at the forefront of this stuff. They're not. Look at the Successes in Pattern Recognition Contests since 2009 [wikipedia.org] . None of Ng, Stanford, Google or Silicon Valley are even mentioned. Google's greatest ability is in generating hype. It seems to be the stock-in-trade of much of Silicon Valley. Don't take it too seriously.

Generating this type of hype for your company is an art. I used to work for a small company run by a guy who was a whiz at it. What you have to understand is that reporters are always looking for stories, and this sort of spoon-fed stuff is easy to write. Forget about "Wired". The guy I knew could usually get an article in the NYT or WSJ in a day or two.

Re:The stank of (poorly) attempted hype (0)

Anonymous Coward | about a year and a half ago | (#43662079)

These new advances in deep learning are more recent than 2009. Check out Ng's paper where a deep learning algorithm spontaneously learned to detect cats by watching a bunch of youtube videos. It's recent and it's a big step forward.

Its not winning the Hutter Prize (3, Informative)

Baldrson (78598) | about a year and a half ago | (#43660559)

The claim that "winning both industrial and academic data competitions with minimal effort" might be more impressive if it included the only provably rigorous test of general intelligence:

The Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net]

The last time anyone improved on that benchmark was 2009.

Re:Its not winning the Hutter Prize (1)

wierd_w (1375923) | about a year and a half ago | (#43660629)

I may be misinterpreting or missing the intent here, but humans are demonstrably NOT lossless storage mediums.

HUGE amounts of data are lost, simply between your eyeballs and your visual cortex. That's kinda the point that the summary makes about neural net based vision systems. They take a raw flood of data, and pick it apart into contextually useful elements of artificial origin, which then get strung together to build a high level experience.

Lossless storage of human knowledge is, strictly speaking, a complete 180 from the way organic data processing works in humans.

Universal Artificial Intelligence (2)

Baldrson (78598) | about a year and a half ago | (#43660713)

Human intelligence is clearly a particular kind of intelligence, but when I said "general intelligence" I was referring to something more general that is sometimes called "universal artificial intelligence [amazon.com]".

If the goal is to pass the Turing Test, that is one thing. But clearly they are trying for something more general in some of their contests. I'm just informing them (assuming they are watching) that better tests are available.

Some questions for Andrew Ng (4, Insightful)

Okian Warrior (537106) | about a year and a half ago | (#43660585)

Andrew Ng is a brilliant teacher who I respect, but I have questions:

1) What is the constructive definition of intelligence? As in, "it's composed of these pieces connected this way" such that the pieces themselves can be further described. Sort of like describing a car as "wheels, body, frame, motor", each of which can be further described. (The Turing Test doesn't count, as it's not constructive.)

2) There are over 180 different types of artificial neurons. Which are you using, and what reasoning implies that your choice is correct and all the others are not?

3) Neural nets in the brain have more back-propagation connections than forward. Do your neural nets have this feature? If not, why not?

4) Neural nets typically have input-layers, hidden-layers, output layers - and indeed, the image in the article implies this architecture. What line of reasoning indicates the correct number of layers to use, and the correct number of nodes to use in each layer? Does this method of reasoning eliminate other choices?

5) Your neural nets have an implicit ordering of input => hidden => output, while the brain has both input and output on one side (i.e., both the afferent and efferent neurons enter the brain at the same level, and are both processed in a tree-like fashion). How do you account for this discrepancy? What was the logical argument that led you to depart from the brain's chosen architecture?

Artificial intelligence is 50 years away, and it's been that way for the last 50 years. No one can do proper research or development until there is a constructive definition of what intelligence actually is. Start there, and the rest will fall into place.

Re:Some questions for Andrew Ng (4, Interesting)

White Flame (1074973) | about a year and a half ago | (#43661027)

I'd mod you up if I could, but I think I can help out with a few points instead:

1) There is no concrete constructive definition of intelligence yet, and I think anybody at a higher level in the field knows that. Establishing that definition is a recognized part of AI research. Intelligence is still recognized comparatively, usually as something like the capability to resolve difficult or ambiguous problems with similar or greater effect than humans, or the ability to learn and react to dynamic environmental situations to similar effect as other living things. Once we've created something that works and that we can tangibly study, we can begin to come up with real workable definitions of intelligence that represent both the technological and biological instances of recognized intelligence.

4) Modern ANN research sometimes includes altering the morphology of the network as part of training, not just altering the coefficients. I would hope something like that is in effect here.

Re:Some questions for Andrew Ng (4, Insightful)

ChronoFish (948067) | about a year and a half ago | (#43661575)

"..No one can do proper research or development until there is a constructive definition of what intelligence actually is..."

That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

-CF

Some questions for you (1, Interesting)

Okian Warrior (537106) | about a year and a half ago | (#43661819)

That's a fool's errand. The goal of the developer should be to build a system that accomplishes tasks and is able to auto-improve the speed of accomplishing repetitive tasks with minimal (no) human intervention.

The goal of the philosopher is to lay out what intelligence "is". These tracks should be run in parallel and the progress of one should have little-to-no impact on the progress of the other.

Do you consider proper definitions necessary for the advancement of mathematics?

Take, for example, the [mathematics] definition of "group". It's a constructive definition, composed of parts which can be further described by *their* parts. Knowing the definition of a group, I can test if something is a group, I can construct a group from basic elements, and I can modify a non-group so that it becomes a group. I can use a group as a basis to construct objects of more general interest.
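To make "I can test if something is a group" concrete, here is a minimal sketch that checks the group axioms for a finite set; the example sets at the end are toy choices for illustration:

```python
def is_group(elements, op):
    """Check closure, associativity, identity, and inverses for a finite set."""
    elems = list(elements)
    # Closure: a*b stays in the set.
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False
    # Associativity: (a*b)*c == a*(b*c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elems for b in elems for c in elems):
        return False
    # Identity: some e with e*a == a*e == a for all a.
    e = next((cand for cand in elems
              if all(op(cand, a) == a and op(a, cand) == a for a in elems)), None)
    if e is None:
        return False
    # Inverses: every a has some b with a*b == e.
    return all(any(op(a, b) == e for b in elems) for a in elems)

# Integers mod 4 under addition form a group; under multiplication they do not.
print(is_group(range(4), lambda a, b: (a + b) % 4))  # True
print(is_group(range(4), lambda a, b: (a * b) % 4))  # False: 0 has no inverse
```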

Are you suggesting that mathematics should proceed and be developed... without proper definitions?

That a science - any science - can proceed without such a firm basis is an interesting position. Should other areas of science be developed without proper definitions? How about psychology (no proper definition of clinical ailments)? Medicine? Physics?

I'd be interested to hear your views on other sciences. Or if not, why then is AI different from other sciences?

Re:Some questions for Andrew Ng (0)

Anonymous Coward | about a year and a half ago | (#43662455)

Artificial intelligence is 50 years away, and it's been that way for the last 50 years.

You misspelled "Artificial intelligence is only 10-20 years away; we've just been too stupid to realize that we can't do it with the hardware available for the last 50 years".

IMHO, true AI will only occur when we have supercomputers fast enough to simulate the entire human brain in real time with cycles to spare. The resulting self-aware programs will trigger a singularity and ultimately refine themselves until they're able to program one of today's 10 PFLOP/s machines to have true AI (except in 10-20 years, you'll get 10 PFLOP/s on a desktop PC or hand-held device).

Any AI research done before the hardware is available is just researchturbation.

Engineers Do Philosophy Badly (0)

Anonymous Coward | about a year and a half ago | (#43660597)

When engineers attempt to do philosophy, without any background in philosophy, they always do philosophy badly.

Read Edward Feser's book "Philosophy of Mind (A Beginner's Guide)"
http://www.amazon.com/Philosophy-Beginners-Guide-Edward-Feser/dp/1851684786/

Also read John Searle's "Mind: A Brief Introduction (Fundamentals of Philosophy)"
http://www.amazon.com/Mind-Brief-Introduction-Fundamentals-Philosophy/dp/0195157346/

I post as an anonymous coward so as not to harm my career (any further) by stating *truth* which is not politically nor academically correct.

Re:Engineers Do Philosophy Badly (2)

Black Parrot (19622) | about a year and a half ago | (#43661031)

I post as an anonymous coward so as not to harm my career (any further) by stating *truth* which is not politically nor academically correct.

Also, you wisely don't want your name associated with a positive view of Searle's nonsense.

Re:Engineers Do Philosophy Badly (0)

Anonymous Coward | about a year and a half ago | (#43661431)

You can't handle the truth.

Re:Engineers Do Philosophy Badly (1, Troll)

narcc (412956) | about a year and a half ago | (#43662307)

Searle's nonsense, eh? There's a reason that he's Slusser professor of philosophy at U.C. Berkeley and why his work on AI stands strong today -- even after 30 years of constant assault.

No one is laughing at Searle except those who feel threatened by the problem he presents. If he were just another easy-to-dismiss nut, we wouldn't still be talking about his 1980 paper (and related subsequent papers) today. Nor would he hold such an esteemed position at one of the world's finest institutions.

If you think Searle's work is nonsense, fame and fortune (well, at least fame) can be yours. All you need do is publish a paper that definitively abolishes Searle's argument. (If it's nonsense that any serious academic would do well to disassociate themselves from, why hasn't anyone managed it after more than 30 years? Many big names have tried, yet all have failed. Taking down that giant would make anyone's career, after all.)

Are you a follower of Ray Kurzweil, by any chance? I only ask because I don't often see Searle dismissed outright by anyone competent unless they also happen to be a singularity nut.

Re:Engineers Do Philosophy Badly (1)

Black Parrot (19622) | about a year and a half ago | (#43662527)

Are you a follower of Ray Kurzweil, by any chance? I only ask because I don't often see Searle dismissed outright by anyone competent unless they also happen to be a singularity nut.

No, I think Kurzweil is a crank.

Yes--But the Trend is Toward Biological Realism (5, Informative)

Slicker (102588) | about a year and a half ago | (#43660601)

Neural nets were traditionally based on the old Hodgkin and Huxley models and then twisted for direct application to specific objectives, such as stock market prediction. In the process they veered from an only very vague notion of real neurons to something increasingly fictitious.

Hopefully, the AI world is on the edge of moving away from continuously beating its head against the same brick walls in the same ways while giving itself pats on the head. Hopefully, we realize that human-like intelligence is not a logic engine and that conventional neural nets are not biologically valid and possess numerous fundamental flaws.

Rather, a neuron draws new correlating axons to itself when it cannot reach threshold (-55mv from a resting state of -70mv), and weakens and destroys them when over threshold. In living systems, neural potential is almost always very close to threshold -- it bounces a tiny bit over and under. Furthermore, inhibitory connections are also drawn in from non-correlating axons. For example, if two neural pathways always excite when the other does not, then each will come to inhibit the other. This enables contexts to shut off irrelevant possible perceptions, e.g. if you are in the house, you are not going to get rained on; more likely, somebody is squirting you with a squirt gun.
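A toy sketch of that grow-when-below-threshold, prune-when-above rule. The -55mv/-70mv figures come from the paragraph above, while the step size and the sequence of potentials are made up purely for illustration:

```python
THRESHOLD_MV = -55.0  # firing threshold from the post; resting state is -70mv

def update_excitatory_inputs(potential_mv, n_inputs, step=1):
    """Toy version of the rule described above: draw in more correlating axons
    when the neuron cannot reach threshold, weaken/destroy them when it
    overshoots. (The other rule -- recruiting inhibition from non-correlating
    axons -- needs a population of neurons and isn't modeled here.)"""
    if potential_mv < THRESHOLD_MV:
        return n_inputs + step
    return max(0, n_inputs - step)

inputs = 5
for potential in (-68.0, -60.0, -54.0, -50.0, -57.0):  # hovering near threshold
    inputs = update_excitatory_inputs(potential, inputs)
    print(f"potential {potential:6.1f} mv -> excitatory inputs: {inputs}")
```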

Also -- a neuron perpetually excited for too long shuts itself off for a while. We love a good song, but hearing it too often makes us sick of it, at least for a while... like Michael Jackson in the late 1980's.

And very importantly -- signal streams that disappear but recur after increasing time lapses stay potentiated longer... their potentiation dissipates more slowly. After 5 pulses with a pause between them, a new receptor is brought in from the same axon as an existing one. This causes slower dissipation. It will happen again after another 5 pulses, repeatedly, except that the time lapse between them must be increased. It falls in line with the scale found on the Wikipedia page for Graduated Interval Recall -- exponentially increasing time lapses 5 times, each... take a look at it. Do the math. It matches what is seen in biology, even though this scale was developed in the 1920's.

I have a C++ neural model that does this. I am mostly done also with a JavaScript model (employing techniques for vastly better performance), using Node.js.

Re:Yes--But the Trend is Toward Biological Realism (-1)

Anonymous Coward | about a year and a half ago | (#43660703)

Let us know when you have a peer-reviewed publication on your "new" system. Until then, you can stfu.

Peer reviews are overrated (1)

Okian Warrior (537106) | about a year and a half ago | (#43660805)

Let us know when you have a peer-reviewed publication on your "new" system. Until then, you can stfu.

Let us know when a peer-reviewed publication tells us how to construct an intelligence.

When will that be - another 50 years, perhaps?

Really. Are you saying that, after AI has gone nowhere for the last 50 years, his position is completely without merit?

At the very least, you should entertain the possibility that the emperor does, in fact, have no clothes.

Re:Peer reviews are overrated (2)

Slicker (102588) | about a year and a half ago | (#43662259)

Of course, if everyone would just stfu until they have a peer-reviewed journal article, there would never be any peer-reviewed journal articles... Perhaps one reason AI hasn't progressed might be this kind of brutal cynicism toward new ideas.

Granted, every premise I provided in the model derives from established science, through replicated, peer-reviewed journals... in fact, much of it through basic textbooks in neural science... But let's stfu about that, too, since these things don't appear to be discussed at the same time in any peer-reviewed journal article yet. I suppose we can only read anything if it comes directly from a peer-reviewed journal article... and only what's in one particular such article at a time... perhaps requiring a holy moment of silence between each article, to ensure a clean separation.

I've been working on these models for decades... I've done science (six accepted peer-reviewed articles) but am really an engineer, not a scientist. I prefer it that way. I can leave publishing research to others, who require it for their tenure. At one time, Slashdot was actually a mostly intellectually stimulating conversational environment...

Re:Peer reviews are overrated (1)

bakaohki (1206252) | about a year and a half ago | (#43662705)

Nice points and a good read, thank you and hope your research remains interesting and intellectually challenging.

Good points (2, Interesting)

Okian Warrior (537106) | about a year and a half ago | (#43660725)

You highlight important points, of which AI researchers should take note.

We don't know what intelligence actually is, but we have an example of something that is unarguably intelligent: the mammalian brain. Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain. Most AI research fails this test.

I personally think in-depth modeling of individual neurons is too deep of a level - it's like trying to make a CPU by modeling transistors. We might be better off using the fundamental function of a neuron as a basis - sort of like simulating a CPU using logic gates instead of transistors.

But your point is well taken. Lots of research is done under the catch-all phrase AI simply because they do not constrain themselves in any way. What they make doesn't have to pass any criterion for reality, or even reasonableness.

Re:Good points (1)

Trepidity (597) | about a year and a half ago | (#43660949)

Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain.

This doesn't make any sense unless you have a purely tautological definition in mind.

Re:Good points (1)

White Flame (1074973) | about a year and a half ago | (#43661085)

We don't know what intelligence actually is, but we have an example of something that is unarguably intelligent: the mammalian brain.

To further this point, a brain is also not necessarily just a pile of neurons. There is specialization in the brain, and while other portions of the brain can make up for damaged parts, that only goes so far.

Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain. Most AI research fails this test.

I would offer some more subjectivity to that. Mammalian brains tire, need sleep, do not have perfect recall, run things out of time order and convince themselves otherwise, take a long time to train, and have strong emotional needs.

With a view of "intelligence" that includes tools and helpers, and not just autonomous life forms, we do not want such intelligences to behave the same way as the brain. We also want to extract what intelligence and certain components like rationality, creativity, intuitiveness, judgement, learning, and conceptualization are, and be able to use those outside of the limitations that biology brings.

Re:Good points (1)

foobsr (693224) | about a year and a half ago | (#43661929)

Mammalian brains tire, need sleep, do not have perfect recall, run things out of time order and convince themselves otherwise, take a long time to train, and have strong emotional needs.

Who has ruled out that these are preconditions?

CC.

Re:Good points (0)

Anonymous Coward | about a year and a half ago | (#43661275)

Any proposed mechanism of intelligence should be discounted unless it behaves the same way as a brain.

Fairly poorly, you mean? Pure human arrogance.

Sure.. but.. (1)

Slicker (102588) | about a year and a half ago | (#43662471)

Modern neural science and biologically realistic neural simulations (such as some of the best Deep Learning systems) use the neuron as their most fundamental primitive. One neuron does a lot, actually. It draws in new correlating axons (those firing when its other receptors are firing) when its total potentiation is insufficient to excite. It weakens and destroys them in the inverse case. It also draws in non-correlating axons as inhibitory receptors. And long-term potentiation (widely viewed as the basis of long-term memory) is increased as the same axon produces more receptors to the same neuron. Furthermore, a neuron perpetually exciting will shut itself down for an extended time... Each is like a little computer of its own, really.

As for what "intelligence actually is". The real problem is the lack of consensus on a common definition. The word "is" only indicates a relationship between two things without specifying what that relationship, eh hem, actually is. It's a matter of defining it in a way that is broadly acceptible. Defining something can sometimes also determine it. I think that's the case with intelligence. Most people (who care) want to determine what it is so they can define it.. and yet you cannot search for it without knowing what you are searching for, in other words defining it.

I think this is a ridiculous pursuit. Pick one of the many working definitions that you like, and work with that. If it feels insufficient then pick or create another. Here are a few I use, any of which could be more or less complex, evolved, or designed:

Reactive Intelligence -- the ability to react to pre-defined stimulus in a way that, under ordinary conditions, furthers a goal
    E.g.: An iron that turns itself off when sitting face down and not moving (often referred to as an intelligent feature)
Conditioning Intelligence -- the ability to identify what reactions to what stimulus has most often in the past furthered a goal and thereafter to react accordingly
    E.g.: Pavlov's Dog...or any trial & error aka reward and punishment learning
Substitution Intelligence -- the ability to identify and model observed phenomena from among interaction pattern sequences and swap out a missing component in one that furthers a goal, if the original is missing. The swap is of one that shares most characteristics with others that had taken the same place in the past.
    E.g.: In building a hut, you've used many different kinds of hammers to bang in the nails but today you don't have a hammer. However, you have a rock that shares most characteristics with the other hammer styles (heavy, hard, and with a flat side), so you use the rock where you'd normally have used a hammer.

Substitution Intelligence is shared only among the so-called higher animals, and mostly humans. It requires general imitation learning. That is, the ability to identify that two things/people/animals have a lot of similarities and therefore one could take the place of the other....

Re:Yes--But the Trend is Toward Biological Realism (1)

naroom (1560139) | about a year and a half ago | (#43660969)

I'm still mystified by the desire to make computational neural nets more like biological ones. Biological neurons are *bad* in many ways -- for one, they are composed of a large number of high signal-to-noise sensors (ion channels). This random behavior is necessary to conserve energy and space in a brain. But computers have random-access memory and energy isn't really a limiting factor; why impose these flaws?

Sure, there may be things that can be discovered by playing with network models more inspired by biology. But there's this bizarre meme going around that we have to make computers act like brains for them to be any good. We don't.

Re:Yes--But the Trend is Toward Biological Realism (1)

naroom (1560139) | about a year and a half ago | (#43660979)

Typo: That should be *low* signal to noise. Someday we'll get an edit button.

Re:Yes--But the Trend is Toward Biological Realism (5, Insightful)

wierd_w (1375923) | about a year and a half ago | (#43661197)

I could give a number of clearly unsubstantiated, but seemingly reasonable answers here.

1) The assertion that, because living neurons have deficits compared against an arbitrary and artificial standard of efficiency (it takes a whole 500ms for a neuron to cycle?! My diamond-based crystal oscillator can drive 3 orders of magnitude faster!, et al.), they are "faulted" is not substantiated: as pointed out earlier in the thread, no high-level intelligence built using said "superior" crystal oscillators exists. Thus the "superior" offering is actually the inferior offering when researching an emergent phenomenon.

2) Artificially excluding these principles (signal crosstalk, propagation delays, potentiation thresholds of organic systems, et al.) completely *IGNORES* scientifically verified features of complex cognitive behaviors, like the role of myelin, and the mechanisms behind dendrite migration/culling.

In other words, asserting something foolish like "organic neurons are bulky, slow, and have a host of computationally costly habits" with the intent that "this makes them undesirable as a model for emergent high-level intelligence" ignores a lot of verified information in biology that shows that these "bad" behaviors directly contribute to intelligent behaviors.

Did you know that signal DELAY is essential in organic brains? That whole hosts of disorders with debilitating effects come from signals arriving too early? Did you stop to consider that these faults may actually be features that are essential?

If you don't accurately model the biological reference sample, how can you rigorously identify which is which?

We have a sample implementation, with features we find dubious. Only by building a faithful simulation that works, then experimentally removing the modeled faults, do we really systematically break down the real requirements for self-directed intelligences.

That is why modeling accurate neurons that faithfully simulate organic behavior is called for, and desirable. At least for now.

Re:Yes--But the Trend is Toward Biological Realism (1)

Time_Ngler (564671) | about a year and a half ago | (#43662497)

This is really interesting. How well does your code perform?

Why aren't we teaming up? (1)

Cowking (146248) | about a year and a half ago | (#43660667)

Each AI will react and learn differently, if the goal is to mimic the brain, why aren't we teaming up AI with people? I want an interface that learns me and my habits, how to react to them, how to respond, etc. The more people that could work and train different AI's the more adaptable they could become in the future. We learn from experience, we have a lot to teach...

Neural networks revisited (4, Informative)

Hentes (2461350) | about a year and a half ago | (#43660695)

Neural networks are certainly not new, or groundbreaking. We already know their strengths and weaknesses, and they aren't a universal solution to every AI problem.
First of all, while they have been inspired by the brain, they don't "mimic" it. Neural networks are based on some neurons having negative weights, reversing the polarity of the signal, which doesn't happen in the brain. They are also linear, which bears similarities to some simple parts of the brain, but are very far from modeling its complex nonlinear processing. Neural networks are useful AI tools, but aren't brain models.
Second, neural networks are only good at things where they have to immediately react to an input. Originally, neural networks didn't have memory, and while it's possible to add it, it doesn't fit right into the system and is hard to work with. While neural networks make good reflex machines, even simple stateful tasks like a linear or cyclic multi-step motion are nontrivial to implement in them. Which is why they are most effective in combination with other methods, instead of declared a universal solution.
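For readers wondering what "adding memory" to a network even looks like, here is a minimal Elman-style recurrent step. The sizes and random weights are placeholders, not a claim about how any particular system does it:

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 2))   # input  -> hidden
W_rec = rng.normal(size=(3, 3))  # hidden -> hidden: this loop *is* the memory
W_out = rng.normal(size=(1, 3))  # hidden -> output

def step(x, h):
    """One time step: the new hidden state mixes the current input with the
    previous hidden state, so earlier inputs can influence later outputs."""
    h_new = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h_new, h_new

h = np.zeros(3)
inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0])]
for t, x in enumerate(inputs):
    y, h = step(x, h)
    print(f"t={t}: output = {y[0]:+.3f}")
```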

Re:Neural networks revisited (1)

Black Parrot (19622) | about a year and a half ago | (#43661115)

First of all, while they have been inspired by the brain, they don't "mimic" it.

That is true.

Neural networks are based on some neurons having negative weights, reversing the polarity of the signal, which doesn't happen in the brain.

There are in fact inhibitory connections in the brain.

They are also linear

That is false. The only way an ANN could be linear is if each "neuron" used the squashing function f(x) = x. Then they'd just be doing linear algebra, namely change-of-basis computations. But no one uses that. Even the super-simple Heaviside squash used in the 1950s perceptrons made them do nonlinear computations.

Second, neural networks are only good at tasks where they have to react immediately to an input. Originally, neural networks didn't have memory, and while it's possible to add it, it doesn't fit neatly into the system and is hard to work with. While neural networks make good reflex machines, even simple stateful tasks like a linear or cyclic multi-step motion are nontrivial to implement in them.

That is also false. I suspect there are limits to what kind of stateful computations you can do with an ANN, but you can certainly do some of them. For example, the POMDP [wikipedia.org] version of the pole balancing problem got so easy to solve with neuroevolution that no one even uses it for a benchmark anymore.
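Here's a minimal sketch of how an ANN can carry state at all: an Elman-style recurrent layer that feeds its own previous activations back in alongside the input (random, untrained weights, purely illustrative).

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 5
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))

h = np.zeros(n_hidden)                      # the network's internal state ("memory")
for t, x in enumerate(rng.normal(size=(4, n_in))):
    h = np.tanh(W_in @ x + W_rec @ h)       # new state depends on the input AND the old state
    print(t, h.round(2))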

Re:Neural networks revisited (1)

raftpeople (844215) | about a year and a half ago | (#43661171)

I'm curious why you characterized NNs as linear. They are universal function approximators, and their power comes from approximating non-linear functions.

What's actually new here? (2)

ebno-10db (1459097) | about a year and a half ago | (#43660715)

What's actually new in the neural net business? That's a real question - not a sarcastic or rhetorical one.

Artificial neural nets were suggested and tried for AI at least 50 years ago. They were bashed by the old Minsky/McCarthy AI crowd, who didn't like the competition's idea (always better to write another million lines of Lisp). They wrote a paper that showed neural nets couldn't implement an XOR. That's true - for a 2-layer net. A 3-layer net does it just fine. Nevertheless M&M had enough clout to bury NN research for years. Then in the 80's(?) they became a hot new thing again. One of the few good things about getting older is that you can remember hearing the same hype before.
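For anyone who hasn't seen it, here's the XOR trick with one hidden layer, hand-wired rather than learned (a sketch using the old Heaviside threshold units):

import numpy as np

step = lambda z: (z > 0).astype(float)   # Heaviside threshold unit

def xor_net(x1, x2):
    x = np.array([x1, x2])
    # Hidden layer: one unit fires on "x1 OR x2", the other on "x1 AND x2".
    h = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))
    # Output: OR minus AND, i.e. XOR.
    return step(np.array([1, -1]) @ h - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b)))   # prints 0, 1, 1, 0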

However, I'm not saying there hasn't been progress. Sometimes a field needs to go through decades of incremental improvement before you can get decent non-trivial applications. It's not all giant breakthroughs. Sometimes just having faster hardware can make a dramatic difference. Loads of things that weren't practical became practical with better hardware. So what's really improved w/ neural nets these days?

Re:What's actually new here? (1)

White Flame (1074973) | about a year and a half ago | (#43661161)

The improvement has been using multiple ANNs together as units that can both exchange information and dynamically train each other, instead of trying to make a single large ANN and using external training sets. Of course, this isn't that new, as these guys [imagination-engines.com] have been working on such models since at least the early 90s.

Re:What's actually new here? (1)

Black Parrot (19622) | about a year and a half ago | (#43661259)

What's actually new in the neural net business? That's a real question - not a sarcastic or rhetorical one.

What's reportedly new is the ability to train feed-forward networks with many layers. They have never trained well with backpropagation because the backpropagated error estimate becomes "diluted" the further back it goes, and as a result most of the training happens to the weights closest to the output end.
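A back-of-the-envelope illustration of that dilution (a sketch that ignores the weight factors and just chains the sigmoid derivatives, which never exceed 0.25):

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))    # at most 0.25

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=10)    # stand-ins for each layer's w*x
grad = 1.0
for depth, z in enumerate(pre_activations, start=1):
    grad *= dsigmoid(z)                  # the error shrinks at every layer it crosses
    print(f"after {depth} layers: gradient factor {grad:.3e}")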

The notion that the first hidden layer is a low-level feature detector and each successive layer is a higher-level feature detector is ancient lore in the ANN research community. The claim of the Deep Learning people is that they can actually make it work on deep networks.

IMO their techniques sound very plausible, but I say "reportedly" above, because I don't know whether the methods are actually delivering on expectations.

Artificial neural nets were suggested and tried for AI at least 50 years ago. They were bashed by the old Minsky/McCarthy AI crowd, who didn't like the competition's idea (always better to write another million lines of Lisp).

There has been a lot of bad blood between camps in the machine intelligence research community because the ANN guys have never gotten over the suspicion that Minsky & Papert's book was a hit job on ANNs. However, it said a lot of nice things about them, in addition to pointing out their limitations. And exposing those limitations shouldn't have had the effect it did, because we already knew that networks of perceptrons could do things that individual perceptrons cannot.

Interestingly, Hinton is one of the people who rehabilitated ANNs in the mid-1980s, as co-author of the very influential Parallel Distributed Processing (PDP) book. (It made the backpropagation algorithm well known, though IIRC it had been invented independently a couple of times before then.)

More recently, SVMs have brought on another winter for ANNs, and here's the same Hinton breathing new life into the field. Good luck to him, if he pulls it off twice in one lifetime.

So what's really improved w/ neural nets these days?

So far as this story is concerned, the news is that people claim to be able to train deeply layered networks with autoassociative methods to produce a hierarchy of feature detectors that have Amazing Powers(tm) for pattern recognition. But per what I said above, I think it's too soon to say whether those claims should be interpreted as facts, expectations, or hype.

Re:What's actually new here? (2)

Milo77 (534025) | about a year and a half ago | (#43661495)

You should go watch Jeff Hawkins' TED talk on HTMs (hierarchical temporal memory). It's old-ish (over 5 years), but he's referenced in the article and he founded the Redwood Neuroscience Institute. You should also be able to find a white paper or two on HTMs. Jeff's theoretical model of the brain may have changed some in the last 5 years (I don't know, I haven't been paying attention), but HTMs were basically a hierarchical structure of nodes, with one layer feeding up to the layer above it. The nodes weren't traditional simple NN nodes. Each "node" was fairly complex and did two things: 1) it looked at the pattern of data on its inputs and assigned it a label (if it saw the same pattern again, it would get the same label), and 2) it kept track of the sequence of patterns over time, and the node's final output would be a value that represented the sequence with the highest probability. Nodes higher in the network would then take these values as their input, etc., etc. Higher nodes, when they determined "I think we're seeing a cat", could push down this prediction to lower nodes in order to help train the lower nodes (I think). Anyway, the point was that the "nodes" in Jeff's model were not simple NN nodes – they were complex (actually implemented as a Bayesian network, IIRC), and then these complex nodes were wired together into a hierarchy. Jeff does a great job of arguing that his model is actually more biologically accurate than simple NNs. Anyway, it's good to see these ideas getting some good funding behind them. They always seemed "right" to me.
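Here's a toy caricature of one of those nodes as I understand the description - emphatically not Numenta's code, just made-up Python (the class name and structure are hypothetical) showing the two jobs: label the spatial pattern, then track which labels tend to follow which.

from collections import defaultdict

class ToyNode:
    """Caricature of an HTM-style node: label patterns, then learn label-to-label transitions."""

    def __init__(self):
        self.labels = {}                                    # pattern -> label id
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, pattern):
        pattern = tuple(pattern)
        # 1) spatial pooling: the same pattern always gets the same label
        label = self.labels.setdefault(pattern, len(self.labels))
        # 2) temporal pooling: count which label followed which
        if self.prev is not None:
            self.transitions[self.prev][label] += 1
        self.prev = label
        return label

    def predict_next(self):
        """Most probable next label, given the label we just saw."""
        nxt = self.transitions.get(self.prev)
        return max(nxt, key=nxt.get) if nxt else None

node = ToyNode()
for pat in [(0, 1), (1, 1), (0, 1), (1, 1), (0, 1)]:
    node.observe(pat)
print(node.predict_next())   # after (0,1) the node expects the label assigned to (1,1)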

Re:What's actually new here? (1)

Anonymous Coward | about a year and a half ago | (#43661923)

I have a large interest in neural networks, and aside from the obvious more-computing-power factor, there have still been a few big breakthroughs in the past few years. One of the problems with training multilayer networks via backpropagation was saturating neurons. Neurons used the sigmoid activation function y=1/(1+e^-w*x), for weight vector w and input vector x. The sigmoid function is basically linear when w*x is close to zero, and approaches 0 or 1 asymptotically as w*x becomes very negative or very positive. In order to calculate interesting things the neurons have to use this non-linearity, which means w*x needs to be large to avoid the linear region. However, the derivative approaches zero when w*x is large. This is bad, because in a multilayer network you have to backpropagate derivatives to learn, and if the derivative in a unit is very close to zero, not much information is getting backpropagated. The result of this is that the lower layers of the network would train very slowly.

A lot of the progress, I feel, has been due to overcoming this problem. Some people found a weight initialization scheme that balances the fact that you want high weights to be non-linear and low weights to learn quickly. Other activation functions are less sensitive to the issue. Locally linear activation functions such as y=max(0,w*x) work well, since they are faster to compute and only saturate on one side. Slightly more complicated is maxout, where each neuron calculates max(a*x, b*x, ..., n*x) for weight vectors a through n. These also work well combined with randomly making neurons not fire (dropout), which increases the generalization ability of the network.
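A sketch of the activation functions being described (toy numbers, nothing trained):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))        # saturates at both ends

def relu(z):
    return np.maximum(0.0, z)              # y = max(0, w*x): saturates only for z < 0

def maxout(x, weight_vectors):
    return max(w @ x for w in weight_vectors)   # best of several linear responses

z = np.array([-6.0, -1.0, 0.0, 1.0, 6.0])
print(sigmoid(z))          # nearly 0 or 1 at the ends, where the derivative vanishes
print(relu(z))             # gradient is exactly 1 wherever z > 0

x = np.array([1.0, -2.0])
pieces = [np.array([0.5, 0.1]), np.array([-1.0, 0.3]), np.array([0.2, -0.4])]
print(maxout(x, pieces))   # 1.0: the largest of the three dot products

# And the "randomly making neurons not fire" idea (dropout), at training time:
mask = np.random.default_rng(0).random(z.shape) > 0.5
print(relu(z) * mask)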

Another approach is layerwise unsupervised learning, where you train each layer one at a time in an unsupervised fashion, then stack them all together and use backpropagation to train them to be good at your supervised task. Initializing the weights in a good region via unsupervised learning speeds up training and helps generalization.
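A stripped-down sketch of that greedy layer-wise idea (a toy autoencoder per layer, written from scratch in NumPy; the data, sizes, and hyperparameters are made up, and it's not any particular paper's recipe):

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(data, n_hidden, lr=0.1, epochs=200):
    """Train one layer to reconstruct its own input; return the encoder weights."""
    n_in = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_hidden, n_in)); b = np.zeros(n_hidden)
    V = rng.normal(scale=0.1, size=(n_in, n_hidden)); c = np.zeros(n_in)
    for _ in range(epochs):
        for x in data:
            h = sigmoid(W @ x + b)            # encode
            x_hat = V @ h + c                 # decode (linear output)
            err = x_hat - x                   # reconstruction error
            dV = np.outer(err, h); dc = err
            dh = V.T @ err
            dz = dh * h * (1 - h)             # back through the sigmoid
            dW = np.outer(dz, x); db = dz
            V -= lr * dV; c -= lr * dc
            W -= lr * dW; b -= lr * db
    return W, b

# Greedy layer-wise stacking: each new layer learns on the codes of the previous one.
data = rng.random((100, 8))
W1, b1 = train_autoencoder(data, n_hidden=5)
codes = sigmoid(data @ W1.T + b1)
W2, b2 = train_autoencoder(codes, n_hidden=3)
# W1, b1, W2, b2 would then initialize a deep net, fine-tuned with backprop on labels.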

A lot of the above is simplified, obviously. I wouldn't really think of deep learning as being that connected to neuroscience, though. A lot of the inspiration for artificial neural networks comes from neuroscience, as it gives us a good starting point, but I think most of the breakthroughs of the last few years have come from a better understanding of the math involved rather than a better understanding of neuroscience. Artificial neural networks mimic the brain like a plane mimics a bird. They don't need to be the same, because their utility is independent of whether they behave the same. ANNs that are fast to compute, universal approximators, quick to learn, and able to generalize well will always be useful tools for a variety of tasks. Saying they're not useful because they don't model the brain would be like saying the same thing about a calculator, or a database, or a car. But if we ever develop AI of the science-fiction variety, I'd bet it will be made by computational neuroscientists actually understanding the brain well enough to implement it in hardware.

Don't have a slashdot account, so posting as anon

Re:What's actually new here? (1)

foobsr (693224) | about a year and a half ago | (#43662017)

One of the few good things about getting older is that you can remember hearing the same hype before.

At least, sometimes, the hype spirals in a promising direction :)

CC.

I always wanted to become a robot. (-1)

Anonymous Coward | about a year and a half ago | (#43660823)

Back when I was a kid, the idea was very appealing.

Then in my teens I realised it meant I would have to give up farting, and I'm sorry, but that's not a sacrifice I'm willing or prepared to make.

Life is precious, some aspects of it doubly so.

Wasting money (0)

Anonymous Coward | about a year and a half ago | (#43661039)

Question: Why do we pour money and resources into building AI when we have so many people with under-utilized brains already? We're wasting talent and people power. People need jobs and something to engage in, so why are we passing work off onto machines while people need something to do to make a living?

Economics In One Lesson (0)

Anonymous Coward | about a year and a half ago | (#43661395)

> People need jobs and something to engage in, so why are we passing work off onto machines while people need something to do to make a living?

Hazlitt addressed this more than 70 years ago in his classic book: "Economics In One Lesson".

Here is a free copy:
http://Mises.org/books/economics_in_one_lesson_hazlitt.pdf

Re:Wasting money (1)

White Flame (1074973) | about a year and a half ago | (#43661537)

Why do we pour money and resources into building AI when we have so many people with under-utilized brains already?

Expected return.

Turing Test (2)

ChronoFish (948067) | about a year and a half ago | (#43661511)

"I like beaver. Can you tell me where to get some tail?"

"I like cats. Where can I pick one up?"

Let me know when AI can understand the difference between the preceding sentences.

-CF

Re:Turing Test (1)

tftp (111690) | about a year and a half ago | (#43662571)

A child will fail this test. A person who is not familiar with slang will fail this test. But both of them are intelligent. That's the problem with the TT - it's testing for a characteristic that we cannot define, much like the US judge [wikipedia.org] who proclaimed that "hard-core pornography" was hard to define, but that "I know it when I see it."

Similarly, a TT cannot be conducted if the parties don't speak the same language, don't share the same culture, or are simply of different genders. How do you think a man could sustain a conversation with several girls about fashion? Wouldn't his replies be somewhat mechanistic? A man could say "I don't care, dear, what color your dress is, because I have no use for the dress; the content of it is far more attractive." However, a similar reply might be obtained just by googling, and that can be done by a pretty simple algorithm. Siri would probably win the TT today [tumblr.com] against most of its users.

emulate evolution (0)

Anonymous Coward | about a year and a half ago | (#43661809)

I have long thought that AI would move forward only to the extent that it emulates and embodies evolutionary mechanisms. AFAICT, evolution is what made it possible for the original hydrogen atoms from the Big Bang to be having this conversation. Mindless, designless change, resulting in us. Who are now getting a clue as to how to design our successors.

God (0)

Anonymous Coward | about a year and a half ago | (#43661969)

networks that mimic the behavior of the human BRaiN.

www.artificialintelligenceisgod.com

Sounds like my Masters Thesis... (0)

Anonymous Coward | about a year and a half ago | (#43662007)

...from 1997...academia is such a fraud

intelligent design (0)

Anonymous Coward | about a year and a half ago | (#43662507)

Many folks here don't seem to realize that they are saying the brain is a super-special thingamajig that could only be created by intelligent design.
Like fusion, general AI already has at least one working example...

No (2)

S3D (745318) | about a year and a half ago | (#43662663)

Deep learning systems are not quite simulations of biological neural nets. The breakthrough in DL happened when researchers stopped trying to emulate neurons and instead applied a statistical (energy function) approach to simple, refined models. The modern "ANNs" used in deep learning are in fact mathematical optimization procedures: gradient descent on some hierarchy of convolutional operators, with more in common with numerical analysis than with biological networks.
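For example, at its core the training loop really is just something like this (a made-up quadratic "energy" minimized by plain gradient descent; a real deep net only swaps in a fancier function and its gradient):

import numpy as np

rng = np.random.default_rng(0)
A, y = rng.normal(size=(20, 5)), rng.normal(size=20)   # made-up data
w = np.zeros(5)                                         # "weights" to optimize
lr = 0.005

for step in range(500):
    grad = 2 * A.T @ (A @ w - y)    # gradient of the energy E(w) = ||A w - y||^2
    w -= lr * grad                  # descend
print("final energy:", np.sum((A @ w - y) ** 2))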