
Why Not Every New "Like the Brain" System Will Prove Important

samzenpus posted about 5 months ago | from the you-keep-using-that-word-I-do-not-think-it-means-what-you-think-it-means dept.


An anonymous reader writes "There is certainly no shortage of stories about AI systems that include the phrase 'like the brain'. This article takes a critical look at those claims and at just what 'like the brain' means. The conclusion: while not a lie, the catch-phrase isn't very informative and may not mean much given our lack of understanding of how the brain works. From the article: 'Surely these claims can't all be true? After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so. We may have a lot of outstanding questions about how it works, but work a certain way it must. But here's the thing: this "like the brain" label usually isn't a lie — it's just not very informative. There are many ways a system can be like the brain, but only a fraction of these will prove important. We know so much that is true about the brain, but the defining issue in theoretical neuroscience today is, simply put, we don't know what matters when it comes to understanding how the brain computes. The debate is wide open, with plausible guesses about the fundamental unit, ranging from quantum phenomena all the way to regions spanning millimeters of brain tissue.'"


like Pinkie (0)

turkeydance (1266624) | about 5 months ago | (#47070945)

don't like the Brain

Re:like Pinkie (1)

davester666 (731373) | about 5 months ago | (#47072627)

me like brains. some mushrooms, a nice chianti, maybe some green beans, and you've got a nice meal.

I patent all ideas (2)

Archangel Michael (180766) | about 5 months ago | (#47070959)

that function "like a brain on the Internet"

Re:I patent all ideas (-1)

Anonymous Coward | about 5 months ago | (#47071089)

It's already patented, and the rights to the patent were sold to Disney! They will make a sequel, "Brain 2", which will BREAK PREVIOUS WAYS OF THINKING [0.0.0.0]!

"like the brain" is always a lie. It's that simple (1, Interesting)

AK Marc (707885) | about 5 months ago | (#47070963)

The brain runs without compiling, and re-writes its own source code and hardware while in use. It never crashes. Nothing has ever come close, and those that pretend to emulate it have always failed miserably.

Re:"like the brain" is always a lie. It's that sim (4, Insightful)

50000BTU_barbecue (588132) | about 5 months ago | (#47071065)

" It never crashes"

Ever dealt with a schizophrenic or someone in the throes of a manic episode? Or just a drunk?

Re:"like the brain" is always a lie. It's that sim (2)

sir-gold (949031) | about 5 months ago | (#47071439)

You can blame a lot of that stuff on faulty/damaged "hardware".
A true "crash" would be something like PTSD, where there is nothing wrong with the brain (or body) itself, but the software inside the brain has been corrupted by some powerful external input.

Re:"like the brain" is always a lie. It's that sim (1)

AK Marc (707885) | about 5 months ago | (#47071509)

Giving bad answers isn't the same thing. The Intel chips that were wrong weren't labeled "permanently crashed". A crash is a reset. It takes significant physical damage to induce anything like a reset. Like a lobotomy. Or some specific cases of mental illness, but those are very rare.

Re:"like the brain" is always a lie. It's that sim (0)

Anonymous Coward | about 5 months ago | (#47072459)

ugh, yeah, epilepsy is sooo rare. That's a crash based on race conditions and/or sync issues. The other's obviously don't know anything about how the brain works.

Re:"like the brain" is always a lie. It's that sim (0)

Anonymous Coward | about 5 months ago | (#47073479)

At least we know how an apostrophe works. I guess your brain crashes when it thinks of an apostrophe.

Re:"like the brain" is always a lie. It's that sim (1)

jythie (914043) | about 5 months ago | (#47073589)

I had a similar thought. Of course brains crash, we tend to call it 'death'.

Re:"like the brain" is always a lie. It's that sim (0)

Anonymous Coward | about 5 months ago | (#47073439)

So it's not "never" anymore? Got it.

Re:"like the brain" is always a lie. It's that sim (1)

Anonymous Coward | about 5 months ago | (#47071665)

A better analogy would be epilepsy/seizures, but yeah, brains do crash.

Re:"like the brain" is always a lie. It's that sim (2)

c6gunner (950153) | about 5 months ago | (#47071145)

The brain runs without compiling, and re-writes its own source code and hardware while under use.

So it's a PHP script?

Re: (0)

Anonymous Coward | about 5 months ago | (#47072679)

ROFL

Re:"like the brain" is always a lie. It's that sim (0)

Anonymous Coward | about 5 months ago | (#47077895)

Yeah, but it has to go down every night for 8 hours for batch processing.

Sensationalism is Sensationalist (0)

Anonymous Coward | about 5 months ago | (#47071029)

News at 11

It's the fundamentally wrong approach (4, Interesting)

msobkow (48369) | about 5 months ago | (#47071031)

"Like the brain" is a fundamentally wrong-headed approach in my opinion. Biological systems are notoriously inefficient in many ways. Rather than modelling AI systems after the way "the brain" works, I think they should be spending a lot more time talking to philosophers and meditation specialists about how we *think* about things.

To me it makes no sense to structure a memory system as inefficiently as the brain's, for example, with all its tendency to forgetfulness, omission, and random irrelevant "correlations". It makes far more sense to structure purely synthetic "memories" using database technologies of various kinds.

Sure, biological systems employ some interesting shortcuts in their processing, but always at a sacrifice in their accuracy. We should be striving for systems that are *better* than the biological, not just similar, but in silicon.

Re:It's the fundamentally wrong approach (2)

crgrace (220738) | about 5 months ago | (#47071237)

"Like the brain" is a fundamentally wrong-headed approach in my opinion. Biological systems are notoriously inefficient in many ways. Rather than modelling AI systems after the way "the brain" works, I think they should be spending a lot more time talking to philosophers and meditation specialists about how we *think* about things.

What you're suggesting has been the dominant paradigm in AI research for most of the 60-70 odd years there has been AI research. Some people have always thought we should model "thinking" processes, and others thought we should model neural networks. At various points one or the other model has been dominant.

To me it makes no sense to structure a memory system as inefficiently as the brain's, for example, with all its tendency to forgetfulness, omission, and random irrelevant "correlations". It makes far more sense to structure purely synthetic "memories" using database technologies of various kinds.

I have to disagree that it makes no sense to structure a memory system as "inefficiently" as the brain's, because inefficiency can mean different things. The brain is extraordinarily power efficient, and that is an important consideration.

It's most likely, in my opinion, that we will eventually find a happy medium between things that computers do well, like compute and store information exactly, and what humans do well, process efficiently and make associations and correlations quickly.

Sure, biological systems employ some interesting shortcuts in their processing, but always at a sacrifice in their accuracy. We should be striving for systems that are *better* than the biological, not just similar, but in silicon.

While I don't doubt silicon will be important for the foreseeable future, it does have limitations you know.

Re:It's the fundamentally wrong approach (2)

dinfinity (2300094) | about 5 months ago | (#47071279)

It's just a different (and interesting) way of computing. We already have a number of different specialized processors. The inevitable NeuralPUs will excel at pattern recognition, classification, decision making, extracting salient patterns from input, etc.

As with other specialized processors, we can already emulate this type of computing (and do so very thankfully in many fields). Having well-designed hardware processors will make this type of computing a commodity (as the other specialized processors have done). Currently, our neural processing emulators lose out big time to even the most basic of their dedicated organic counterparts when it comes to performance per watt for certain tasks (the human brain runs in the tens of watts) and performance per volume (a human brain is pretty beefy compared to a consumer CPU, but it's not supercomputer-warehouse big).

The current search is basically for effective artificial hardware neurons [wikipedia.org], but even more so for the (topological) design of the NeuralPUs. Just slapping a shitload of neurons together and feeding it data is very ineffective. The layered approach of deep learning is a very good step up from the flat networks we're used to employing, but the topology is still ridiculously simple compared to the complex topologies of subnetworks in the mammalian brain. Combine that with the (in comparison to their organic counterparts) very simplistic learning models for artificial neural networks, and the only conclusion can be that there is still a long way to go.
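
To make the topology point concrete, here is a rough, purely illustrative sketch (a toy example, assuming Python with NumPy; the layer sizes are arbitrary): a "flat" network with one wide hidden layer versus a deeper stack of narrower layers, both mapping the same input size to the same output size.

    import numpy as np

    def init_layers(sizes, rng):
        # Random weight matrices connecting consecutive layer sizes.
        return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(x, layers):
        # Feed an input vector through each layer with a tanh nonlinearity.
        for w in layers:
            x = np.tanh(x @ w)
        return x

    rng = np.random.default_rng(0)
    flat = init_layers([64, 256, 10], rng)         # one wide hidden layer
    deep = init_layers([64, 32, 32, 32, 10], rng)  # several narrower layers

    x = rng.standard_normal(64)
    print(forward(x, flat).shape, forward(x, deep).shape)  # both map 64 inputs to 10 outputs

Neither toy comes anywhere near the tangled subnetwork topologies of a real brain; the point is only that "layered" is one small step along an enormous design space.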

But we'll get there. Skynet for president 2032!

Re:It's the fundamentally wrong approach (2)

iamacat (583406) | about 5 months ago | (#47071341)

We don't want to structure it at all. This is too big a task for even the whole of humanity. Instead, we want the system to structure itself based on its experiences, including by modifying its own hardware for subsequent fast processing.

But at that point it will be exactly like a brain on a philosophical level, albeit probably made of different materials. Doubtless, there are many optimizations that could remove historical evolutionary baggage and serve only current requirements.

Re:It's the fundamentally wrong approach (1)

sir-gold (949031) | about 5 months ago | (#47071447)

Give it a false childhood, as realistic as possible, and when it reaches "adulthood", you reveal to it that it's actually an AI created to study climate and sociological forecasts.

And name it Perry Simm

Re:It's the fundamentally wrong approach (0)

Anonymous Coward | about 5 months ago | (#47072463)

Agreed. Basing the design of a computer hardware/software system on the human brain is unlikely to be useful for anything other than researching the human brain (a worthwhile task for medicine, but not one that is likely to lead to AI).

Re:It's the fundamentally wrong approach (1)

jythie (914043) | about 5 months ago | (#47073621)

Actually, no. Brain-inspired systems are used extensively throughout industry for real work; they can be really powerful tools for pattern recognition when working with large data sets that we do not have a good, complete model for.

Oh, to design a system "like the brain"! (1)

ubergeek (91912) | about 5 months ago | (#47073147)

The human brain is a wonder of engineering. While it might in principle be possible to construct a computing device with fewer of the flaws you mention, I strongly suspect that it will not be possible to do it without giving up either size, efficiency or latency (most likely all of those).

Your complaints regarding human memory demonstrate an ignorance of both engineering and neuroscience. Declarative memories are stored temporarily in the hippocampus, and some are over time consolidated into the neocortex. This long term storage of memory in the (sensory and association) cortices, where experiences and thoughts are processed and continuously compared (with zero latency) to a vast database of past experience, is precisely what allows these things to happen with the speed and effectiveness that they do. The fact that new memories must be integrated into existing networks is almost certainly what gives us both the aforementioned benefits as well as the drawbacks you mention. Making such a system less 'forgetful' or prone to false association would probably necessitate fundamental changes to its architecture.

To do what we can do with about 1 L of flesh that consumes just 20 watts of power is extraordinary. It's not the best tool for every job, but it's a far sight better than anything we've ever built for many important tasks. And we'd be well served to study it very closely, not just at a cognitive level, but at the network, cellular, and molecular level.

Re:It's the fundamentally wrong approach (1)

drinkypoo (153816) | about 5 months ago | (#47073415)

It's fundamentally bullshit because we don't even really know how memory works yet, and we keep finding more complexity everywhere we look. We don't know enough about the brain to yet claim that we're building computers which function like it does.

Re:It's the fundamentally wrong approach (1)

jythie (914043) | about 5 months ago | (#47073607)

Both approaches are important and, in real AI research, both tend to be used to various degrees. What you are describing is 'GOFAI', Good Old Fashioned Artificial Intelligence. There are domains where it does better than biologically inspired methods like neural nets and genetic algorithms, and places where it does worse.

Re:It's the fundamentally wrong approach (1)

msobkow (48369) | about 5 months ago | (#47074965)

Actually my thinking is more along the lines that it would be better to implement expert systems for various subject domains using a common code structure/format that can be extended to use the same inference/logic engines for multiple subjects. The goal would be a system where you can flexibly define the data required for each of the subject domains, coalesce them into an overall "intelligence" that can deal with those various subjects, and stop with the fantasy of a general-purpose intelligence that can learn as we do.
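
As a rough sketch of that structure (a toy forward-chaining engine, purely illustrative; the rule sets and names below are made up): one generic inference loop, with per-domain rule sets plugged in.

    def infer(facts, rules):
        # Apply rules until no new facts are produced.
        # Each rule is (set_of_premises, conclusion).
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Two "subject domains" sharing the same engine.
    medical_rules = [({"fever", "cough"}, "suspect_flu")]
    network_rules = [({"packet_loss", "high_latency"}, "suspect_congestion")]

    print(infer({"fever", "cough"}, medical_rules))
    print(infer({"packet_loss", "high_latency"}, network_rules))

A real system would need far richer data models than bare strings, but the engine/rule-set split is the shape of the idea.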

While what I'm proposing may not be considered "intelligent" unless it can program itself with new data models and rule sets, it's far more likely to produce intelligence-light systems that are actually useful to the general public and to industry.

It's also worth noting that if one or more of the expert systems involved are used to parse natural language into general semantic structures, and to then transform those semantic structures into data models and rules for working with those data models, then you would have achieved artificial learning as well as artificial intelligence, thereby achieving the goal of a general-purpose intelligence.

As many have pointed out, the human brain is exceptionally good at pattern matching. The problem is that I don't believe pattern matching is an effective or intuitive means for specialized or generalized intelligence processing. It's good for simulations of the brain, but a simulation is not going to automagically develop intelligence. There is far too much of a learning and training process involved from birth to death of a human in the development of their intelligence and knowledge, so a generalized intelligence based on pattern matching is going to be implicitly at the mercy of the quality of its education.

That's going to make it damned hard to replicate one successful intelligence in another unit. It could very well be that only one in some ridiculous number of pattern-matching AIs ever achieves a *useful* intelligence, much as only some small fraction of people ever become experts in even one field of study.

Consolidated expert systems, on the other hand, are experts in the fields that have been consolidated into their knowledge base. Their training is far less random, far less intuitive, and far more tightly directed towards useful features and functionality than a generalized learning algorithm could ever hope to be on its own.

Perhaps it's not so much that I question the different approaches to AI as that I question the usefulness of generalized theoretical approaches and structures if they're not taken to functional fruition. Too many researchers give up after proving that a thing can be done instead of actually doing it. They're content to prove the potential of an algorithm; I'm not content unless the algorithm has been successfully applied to real world problems outside the lab.

Re:It's the fundamentally wrong approach (1)

dinfinity (2300094) | about 5 months ago | (#47080711)

The problem is that I don't believe pattern matching is an effective or intuitive means for specialized or generalized intelligence processing. It's good for simulations of the brain, but a simulation is not going to automagically develop intelligence. There is far too much of a learning and training process involved from birth to death of a human in the development of their intelligence and knowledge, so a generalized intelligence based on pattern matching is going to be implicitly at the mercy of the quality of its education.

Sure, but let's be honest here: whatever we're going to engineer will have a huge potential to acquire intelligence much faster than we have attained it. I really feel that the notion of human exceptionalism when it comes to intelligence is 99% wishful thinking. We collectively want to believe that we are somehow special and that our level of cognitive processing is somehow practically unattainable for artificial systems, but the reality is that our cognitive capabilities are the result of a longterm process of trial and error. It strikes me as arrogant to hold the position that an informed and highly goal-driven approach towards creating systems with similar capabilities would prove to be in vain.

Looking at the success in and speed with which we have recreated a lot of the other naturally evolved capabilities artificially (cameras, auditory sensing, etc.), I believe that recreating our cognitive capabilities is a matter of decades rather than one of hundreds or even thousands of years.

The multitude of paths that we can and will take to attain said cognitive capabilities will obviously not all lead to the general intelligence you speak of, but I'm convinced that some of them will. And quite soon, on an evolutionary scale.

Yes, this goes back decades (-1)

Anonymous Coward | about 5 months ago | (#47071051)

http://en.wikipedia.org/wiki/N... [wikipedia.org]

http://www.britannica.com/EBch... [britannica.com]

So, where is the skepticism of all the "3D printing" hype stories? Why just last year I was assured that buying a 3D printer pays for itself in a year! Anyone done it outside of the fever dreams and delusions of breathless early adopters?

Re:Yes, this goes back decades (0)

Anonymous Coward | about 5 months ago | (#47072495)

I heard there's a market for six computers.

Re:Yes, this goes back decades (0)

Anonymous Coward | about 5 months ago | (#47073723)

I wonder what you thought that reply was supposed to convey? Was there also a market for six steam-powered airplanes?

Nothing is like the brain (1)

presspass (1770650) | about 5 months ago | (#47071167)

Because nothing is "like the brain"

A claim as old as electronic computers (3, Insightful)

bug_hunter (32923) | about 5 months ago | (#47071319)

I remember watching a replay of a news piece from when computers first started replacing typewriters in the late 70s.
"These computers, using many of the same techniques as the human brain, can help increase efficiency," the newsreader said as the footage showed a secretary running a spell check.

I still like Dijkstra's comment that the question "Can a computer think?" is like asking "Can a submarine swim?". To which I assume the answer is "sorta, the end result is the same, but achieved by different means".

Re:A claim as old as electronic computers (1)

Travis Mansbridge (830557) | about 5 months ago | (#47071453)

It's an argument of semantics. We use words to associate to underlying concepts, and some people use different words to mean slightly different things. To think of all the hours wasted on arguments about what a "soul" is when it's really just a word that can be associated with various underlying concepts...

"Like a brain" might mean shaped like a brain to one person, able to fit inside the same space.. while to another it might only be "like a brain" if it can process input and output the same way, regardless of its shape or size.. while a third might only consider "like a brain" to mean the way that a liver is like a brain, composed of organic cells that collaborate to carry out a function.

Re:A claim as old as electronic computers (2)

AthanasiusKircher (1333179) | about 5 months ago | (#47071609)

To think of all the hours wasted on arguments about what a "soul" is when it's really just a word that can be associated with various underlying concepts...

Why are semantic arguments necessarily "wasted" time? If different people have different perspectives, discussing them can often lead to new insights for both of them -- if they are open to thinking outside their own worldview. At a minimum, a collision between these different perspectives can lead to a realization that a word could mean A or B, i.e., it doesn't mean the same thing to everyone. It can also lead to a refinement of ideas -- perhaps the recognition that A and B both share elements of C in their meaning (hence using the same word), but differ in elements D and E. That can lead to future dialogue or prevent future misunderstandings.

Perhaps you don't care about what a "soul" is, in which case such arguments seem stupid. But words are fundamentally about communication, and they only function if we have some sort of pragmatic social agreement about what we mean, or what we could mean, or what multiple meanings could be enumerated to let us expand upon the nuances.

For example, relevant to the present article -- what is "intelligence"? Biologists may have one view, computer scientists another, philosophers even more variety. But if we never discuss those nuances and just use the word "intelligence" without qualification, it's less useful for communicating anything, particularly among a variety of people.

"Like a brain" might mean shaped like a brain to one person, able to fit inside the same space.. while to another it might only be "like a brain" if it can process input and output the same way, regardless of its shape or size.. while a third might only consider "like a brain" to mean the way that a liver is like a brain, composed of organic cells that collaborate to carry out a function.

Yeah, in AI "like a brain" usually means that "we assume it shares some functionality with the brain" or something like that. And it's usually wrong... very wrong.

The problem with using analogies like this is that it leads to all sorts of inaccurate assumptions. Why not just use more accurate terminology? "My neural nets are learning" is a crappy, meaningless statement that doesn't define "neuron" or "learning" at all for people outside AI. It's useless for communication. But "I'm using adaptive clusters of specialized algorithms" doesn't sound as sexy when it comes to getting research funding. "I'm using a deep learning model." No you're not! There's very little in common with biological "learning" in any sense -- you're using a multilayered set of adaptive algorithms. Superficially, there may be some vague correspondences between human "learning" and AI, but often the number of differences far exceeds the similarities.

It's kind of like someone who never encountered a car before. He asks what it does. "It runs." What do you mean it runs? Does it have two legs? No, it has four wheels. Okay, does it get out of breath when it goes faster? No, it doesn't really breathe like a human does, and it actually functions more efficiently as it runs faster. Okay, does it swing its body back and forth in alternate motions like other quadrupeds that run? No.

Well, what do you mean it "runs"? Just that it gets from point A to B at a relatively fast velocity? In other words, it "runs" in the same sense that a raindrop "runs" down a window?

Imagine that conversation. Why not just be more specific then -- the car is a self-propelled machine that can travel quickly from A to B. Yeah, in some sense it "runs," but it has as much in common with a raindrop "running" as a human.

With brains, it's even worse, because things like "intelligence" and "learning" and "neural" have all sorts of associations -- why not just be more accurate in what these things are doing and how they are structured?

Note that it's not just a problem when explaining the stuff to people outside the field. It also has the potential to screw up people inside the field too. Even if a researcher recognizes that the brain does things very differently, continuing to call a process "learning" will subtly bias the way the researcher thinks about what "learning" means in general. It may restrict the avenues that are considered in research. Heck, it could even bias neuroscience in the way they look for patterns. (This is absolutely true -- AI mostly grew out of crappy philosophy of mind models from the 1950s and 60s that got incorporated into cognitive science and computer science literature... and arguably many people have spent the past 30 years gradually trying to escape from those idiotic models of how the brain functions, which were based on no empirical evidence, but rather some made-up a priori rationalized system.)

Re:A claim as old as electronic computers (1)

Antonovich (1354565) | about 5 months ago | (#47072645)

Why do my mod points always expire right before a comment that really deserves them?!?

Your comment gets at one of the most interesting problems of the philosophy of science - how humans use metaphors, usually taken from social life and other areas of the human experience, for understanding the world around us. It is not just for explaining to lay people; some scholars (and me :-)) argue that in order for us to "understand" anything at all, we need to employ metaphors drawn from social experience that can ultimately be traced back to the development we undergo as infants and young children. Obviously the larger socio-cultural contexts we are immersed in play a vital role in this development. And so it is that the dominant paradigm (metaphor, meme, model,...) in neuroscience and psychology has been "mind as computer" for the last 50-60 years. It's not the only one though, and embodied approaches have started to gather steam as the mainstream approach starts to show its limitations.

Unfortunately, the problem is that there is no way out of it. There is no "one model to rule them" that can magically uplift us from our ignorance - all approaches *inevitably* have this flaw. The metaphors we live by (to borrow from Lakoff & Johnson) will inevitably help us to understand some aspects of our experience and obfuscate others. Some will help us do amazing things, like the mind-as-computer metaphor has, but stop us from doing others. I personally think that in order for us to move forward in creating "intelligent" non-biological machines we need to get away from the computational metaphor and adopt embodied, enactive, integrational approaches like those promoted by second order cybernetics, dynamical systems, behavioural robotics and other related areas. Thankfully these theories are starting to get fairly major traction and we'll get a chance to see where those metaphors take us over the next couple of decades.

Re:A claim as old as electronic computers (1)

Ol Olsoc (1175323) | about 5 months ago | (#47073495)

Why are semantic arguments necessarily "wasted" time? If different people have different perspectives, discussing them can often lead to new insights for both of them

Or it could lead to the Amish.

Its Marketing Speak (0)

Anonymous Coward | about 5 months ago | (#47071359)

Because the brain is good at thinking, the analogy is that software that works like the brain is good at thinking. It's sort of the highest standard, like "He's as good as Master Yoda".

biologically inspired design (2)

bouldin (828821) | about 5 months ago | (#47071429)

This is the first thing you learn if you study biologically inspired design.

Don't just mimic the form of the system. Understand what makes the system work (how it functions and why that is effective), and copy that.

It's like early attempts at flying machines that flapped big wings but of course didn't fly. The important thing wasn't the flapping wings, it was lift.

There are important principles behind what makes the brain work, but it's not as simple as building a neural network.

Re:biologically inspired design (1)

TheLink (130905) | about 5 months ago | (#47073015)

Understand what makes the system work (how it functions and why that is effective), and copy that.

From what I see, scientists don't even understand how single-celled creatures think. And yes, single-celled creatures do think/"think": http://soylentnews.org/comment... [soylentnews.org]
Note that these single-celled creatures are building shells for themselves. Each species has a distinctive shell style! And as per the link, some don't reproduce until they have gathered enough shell material for the daughter cell (when they split, both cells split the shell material and build their own shells). How the heck do they figure that out?

Plenty of people beg the question by saying: single celled creatures can't and don't think because they have no brains. What makes them so sure? If thinking requires brains then does that mean computers will never think?

You can see numerous multicellular creatures with brains that don't really seem significantly more intelligent than those single celled creatures.

So I suspect that the main problem most animal brains solve is not thinking, but controlling and using a multicellular body (interfacing with muscles and sensory systems). The problem of thinking was already solved. At least that "base level thinking". How clever does a worm or slug or single celled creature need to be anyway?

I'm not even sure that scientists have solved that "base level thinking" problem yet.

Re:biologically inspired design (1)

VortexCortex (1117377) | about 5 months ago | (#47073051)

If you think that cyberneticians are just mimicking designs without comprehending the fundamental biological processes involved, then you must not understand that cybernetics isn't limited to computer science. In fact it began in business, analyzing the logistics of information flow. That these general principles also apply to emergent intelligence means more biologists need to study Information Theory, not that cyberneticians are ignorant of biology (hint: we probably know more about it than most biologists, since our field places no limit on its application).

Who wrote that headline? (1)

rnturn (11092) | about 5 months ago | (#47071785)

Yoda?

Design goals (1)

CBravo (35450) | about 5 months ago | (#47072569)

Before designing a system you want to know which problems it is meant to solve ('why?'), and they must be tangible. Here are some aspects that would be nice to address: code reuse (to save time), reducing bugs, testability, security, stability, high availability, maintainability... Not all of these problems are solved well in humans.

Top Down Design is NOT the only approach, FFS. (3, Insightful)

VortexCortex (1117377) | about 5 months ago | (#47073029)

After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so.

Ugh, Creationists. No, that's wrong. Evolution is simply the application of environmental bias to chaos -- the same fundamental process by which complexity naturally arises from entropy. Look, we jabbed some wires in a rodent head and hooked up an infrared sensor. [wired.co.uk] The rodents became able to sense infrared and use the infrared input to navigate. That adaptation didn't take millions of years. What an idiot. Evolution is a form of emergence, but it is not the only form of emergence; this process operates at all levels of reality and all scales of time. Your puny brains and insignificant lives give you a small window within which to compare the universe to your experience, and thus you fail to realize that the neuroplasticity of brains adapting to new inputs is really not so different a process from droplets of condensation forming rain, or molecules forming amino acids when energized and cooled, or stars forming, or matter being produced, all via similar emergent processes.

The structure of self replicating life is that chemistry which propagates more complex information about itself into the future faster. If you could witness those millions of years in time-lapse then you'd see how adapting to IR inputs isn't really much different at all, just at a different scale. Yet you classify one adaptation as "evolution" and the other "emergence" for purely arbitrary reasons: The genetically reproducible capability of the adaptation -- As if we can't jab more wires in the next generation's heads from here on out according to protocol. Your language simply lacks the words for most basic universal truths. I suppose you also draw a thick arbitrary line between children and their parents -- one that nature doesn't draw else "species" wouldn't exist. The tendencies of your pattern recognition and classification systems can hamper you if you let your mind run rampant. I believe you call this "confirmation bias".

Humans now understand very well what their neurons are doing at the chemical level. It's now known how neurotransmitters are transported by motor proteins [youtube.com] in vesicles across neurons along micro-tubules in a very mechanical fashion that uses a bias applied to entropy to produce action within cells. The governing principles of cognition are being discovered by neurologists and abstracted by cybernetics to gain a fundamental understanding of cognition that philosophers have always craved. When cyberneticians model replicas of a retina's layers, the artificial neural networks end up having the same motion sensing behavior; the same is true for many other parts of the brain. Indeed, the hippocampus has been successfully replaced in mice with an artificial implant, and it has been shown that they can still remember and learn with the implant.

If the brain were so specifically crafted then cutting out half of it would reduce people to vegetables and forever destroy half of their motor function, but that's a moronic thing to assume would happen. [youtube.com] Neuroplasticity of the brain disproves the assumption that it is so strongly dependent upon its structural components. Cyberneticians know that everything flows, so they acknowledge that primitive instinctual responses and cognitive biases due to various physical structural formations feed their effects into the greater neurological function; However this is not the core governing mechanic of cognition -- It can't be else the little girl with half her brain wouldn't remain sentient, let alone able to walk.

Much of modern philosophy loves to cast a mystic shroud of "lack of understanding" upon that which is already thoroughly and empirically proven. Some defend the unknown as if their jobs depend on all problems of cognition being utterly unsolvable, and many remain willfully ignorant of basic fundamental facts of existence that others are utilizing to march progress forward. The core component of cognition is the feedback loop. This is a fundamental fact. Learn it, human. If you did not know this before now then your teachers have failed you, since this is the most important concept in the universe: Through action and reaction is all order formed from chaos over time. Decision is merely the "internal" complexity of reaction in a system by which Sensation of experience causes Action. Hence, Sense -> Decide -> Act -> [repeat] is the foundational cognitive process of everything from human minds to electrons determining when to emit photons. Thus, all systems are capable of information processing, cognition, and thereby a degree of intelligence.
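
As a bare-bones illustration of that loop (a toy thermostat-style controller, made up purely for this example, in Python), the action taken at each step feeds back into what is sensed at the next:

    def sense(world):
        return world["temperature"]

    def decide(temp, setpoint=20.0):
        return "heat_on" if temp < setpoint else "heat_off"

    def act(world, decision):
        # The action changes the world, which changes the next sensation.
        world["temperature"] += 0.5 if decision == "heat_on" else -0.5
        return world

    world = {"temperature": 15.0}
    for step in range(10):
        reading = sense(world)
        decision = decide(reading)
        world = act(world, decision)
        print(f"step {step}: sensed {reading:.1f}, decided {decision}")

Trivial, yes, but the Sense -> Decide -> Act -> [repeat] skeleton is the same whether the "decide" stage is a one-line comparison or a brain.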

There is a smooth gradient of intelligence that scales with complexity in all systems. Arrange the animals by neuron and axon count and you'll have a rough estimate of their relative intelligence (note that some species can do more with less). If you accept quantum uncertainty and the fact that the internal action of information processing systems can modify those systems themselves, then you understand that external observers cannot fully predict or control your action without modifying it; only you can. Thus free will apparently exists, if you only drop the retardingly limiting definition that your philosophers have placed upon such concepts. Only chauvinists deny that humans are simply complex chemical machines. [youtube.com] Quantum effects are too noisy to have a significant stake in cognition; there's no debate amongst anyone knowledgeable about both macro scale processes (like protein synthesis or neuronal pattern recognition) and quantum physics, sorry, there's not. That would be like saying whether or not the earth is only a few thousand years old is an open problem simply because creationists are debating about it.

Look, our cybernetic simulations of creatures with small neural networks, like jellyfish and flatworms, behave indistinguishably from their organic peers. It only takes ~5 neurons to steer towards things, thus jellyfish can. Cyberneticians are discovering the minimal complexity levels for various processes of cognition, and the systems by which these behaviors operate. Humans are reaching a point now where cybernetic simulations COULD inform neurologists and psychologists and philosophers of potential areas to investigate in cognition -- if only they are wise enough to listen. Nature draws no line between the sciences, but many humans foolishly do.

Take the feed forward neural network, for example. It can perform pattern matching and even motion sensing as in the eye or other similar parts of the brain which have the same general pattern. In many ways the FFNN is like a brain's regions that perform pattern matching, and this essential information flow and dependency graph is an approximate explanation of the governing dynamics of how said pattern matching occurs. The specifics of how such configurations of connectivity graphs are produced vary between the organic and artificial systems, but the end result is similar enough to be indistinguishable and allow artificial implants to function in place of the organic systems in many cases. Or vice versa. [youtube.com] It's Alive! [youtube.com] This machine has living brain cells, just LIKE A BRAIN. We can come to understand the cognitive process in small steps, as with any other enigma.
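
A minimal, purely illustrative sketch of that kind of one-pass pattern matching (a toy example in Python/NumPy, not a model of any real retina): units with paired positive and negative weights respond only where pixel values changed between two frames, which is one crude way a fixed feed-forward pattern can signal motion in a single pass.

    import numpy as np

    def motion_units(frame_prev, frame_next, threshold=0.2):
        # Each unit has a +1 weight on the new pixel and a -1 weight on the old one,
        # so it is driven only by change between the two frames.
        drive = frame_next - frame_prev
        return (np.abs(drive) > threshold).astype(float)

    prev_frame = np.zeros(8)
    next_frame = np.zeros(8)
    next_frame[3] = 1.0  # something appeared at position 3

    print(motion_units(prev_frame, next_frame))  # fires only at the changed position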

However, the feed forward neural network can not perceive time like a brain can. Fortunately, FFNN is not the only connectivity graph. It takes a multi-directional network topology, like a brain's, to be able to perceive time and entertain the concept of a series of events, and thus to predict which event may follow, like a brain does. Since these structures may contain many internal feedback loops they can retain a portion of the prior input and cause the subsequent input to produce a different response depending on one or more prior inputs, like a brain. Unlike FFNN, recurrent neural networks do not operate in a single pass per input / output: You must collect their output over time because the internal loops must think about the input / process it for a while in order to come to a conclusion, and they may even come to different conclusions the longer the n.net is allowed to consider the input, like a brain does.
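
Here is a toy illustration of that recurrence (again a made-up sketch in Python/NumPy, not anyone's production model): a tiny network with one internal feedback loop, whose answer to the very same input keeps shifting the longer it is allowed to "think".

    import numpy as np

    class TinyRecurrentNet:
        def __init__(self, n_in=3, n_hidden=5, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.standard_normal((n_in, n_hidden)) * 0.5
            self.w_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.5  # the feedback loop
            self.state = np.zeros(n_hidden)

        def step(self, x):
            # The new state depends on the current input AND the previous state.
            self.state = np.tanh(x @ self.w_in + self.state @ self.w_rec)
            return self.state.sum()  # a scalar "conclusion" read out each step

    net = TinyRecurrentNet()
    x = np.array([1.0, 0.0, 0.0])
    # Feeding the same input repeatedly yields a drifting answer as the loop settles.
    print([round(net.step(x), 3) for _ in range(5)])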

Beneath the outermost system of connectivity, certain areas become specialized to solve certain problems, like in a brain. Internal cognitive centers can classify and route impulses and excite various related regions in a somewhat chaotic state. Multiple internal actions can contribute to the action potential of one or more output actions, and the ones most biased to occur will happen, sometimes concurrently, sometimes in sequence; sometimes a single action produces feedback that limits others or refines the action itself over time -- just like everything else in the universe, like molecular evolution or like a brain. This type of decision making can occur without structural changes to the recurrent neural network, which means that this multi-directional connectivity graph can produce complex action in real time and even solve new problems without the slower structural retraining, just like a brain does.

My research indicates we desperately need more neurologists and molecular biologists to focus on studying the process by which axon formation in brains occurs. It's yet unknown to humans how neurons send out their axons, which weave their way past nearby neurons to make new connections in distant regions of their brains. I'm modeling various different strategies whereby everything from temperature to temporal adjacency in activity attracts and repels axons of various lengths. Perhaps the connection behavior is governed by eddy currents or via chemical messages carried in the soup between brain cells. Perhaps axons grow towards the dendrites of other neurons by sniffing out which direction to grow electrically, chemically, thermally, etc. Even though I do not know the governing process, I can leverage the fact that axons do grow as a part of human cognition and try to determine what effect this may have on learning and cognition. I've stumbled upon some interesting learning methods which produce far more optimal networks than having to process n.nets with neurons pre-connected to every other neuron in the layer or area.

I think axon formation is very important because I have also experimented with axons branching and merging and have seen dissociative defects, similar to those in malfunctioning humans, when these axons connect back to themselves and other axons instead of between neurons. In a genetic sim that "grows" the neural nets over time, I introduced the branching axon to an existing known problem-solving genetic code and found symptoms remarkably similar to what is observed in the brains of autistic humans and animals. [youtube.com] Tasks like recognizing a shape, which the n.nets of that generation readily picked up (as their predecessors did), took the branching-axon neural net much longer. Sometimes this connectivity wasn't harmful, and it caused increased speed of certain pattern matching abilities. The n.net spent far more time processing internal data -- it was much more internally reflective than the others. In a very general sense the symptoms I saw were descriptive of autism-like behaviors. If the system of axon formation is discovered, cyberneticians could model it via artificial neural networks and perhaps assist in the development of medicines or treatments for such diseases more quickly, with fewer animal and human trials.

The point is that saying "like a brain" doesn't mean much because we don't know exactly how the brain works at all levels is as ignorant as arguing that "like a planet" isn't very descriptive and that research into gravity might not ultimately be useful in the launching of rockets to the moon. Just because we don't understand how quantum effects apply to the macro scale physics of gravity doesn't mean we can't leverage the concept, or that invalid hypotheses aren't important; Hint: You have to break eggs to make an omelet. Look, humans used Newtonian physics, not Einstein's, to get to the moon. See? A general understanding and approximation is actually good enough for many applications, sometimes even important ones. My point is that there is not really some incredibly intricate and delicate top-down designed system to the brain that requires full knowledge before cyberneticians can achieve capabilities that are like a brain's. Top down isn't natural because that's not evolutionarily advantageous. That would mean even minor compromises to the integrity of its structure would spell immediate irreparable malfunction and death. Learn it, human: Life is Mutation Tolerant. So is sufficiently intelligent cognition.

Instead consider bottom-up self organization: There are some fundamental processes operating at the molecular chemistry, protein pattern matching, and cellular activation levels that when allowed to interact in a complex network yield a degree of intelligence through an emergent process. We can look at the brain and see that the mind is a chemical computer, but it is not the chemicals that matter to cognition. The overarching system abstraction is what's important: Input is fed in via many data points and the information flows along feed forward classification and cognitive feedback loops to contribute to the ongoing decision and learning process of a self reorganizing network topology. The folly is assuming that unless we know every little detail about how the systems work, we won't understand how to make anything even approaching thinking like a brain. Such sentiments are ignorant of the field of cybernetics which involve the study of machine, human, and animal learning, not just neural networks. It's essentially one branch of applied Information Theory. [wikipedia.org]

Look, we have atomic simulations. They can produce accurate atomic emulations of cells. [youtube.com] It is thus a fact that given enough CPU power we can build a fertilized human egg cell in a computer and then grow it up into a sentient being. Machines can become sentient because that's what you are: A sentient chemical machine. This is the ignorant approach, and many pundits speaking on machine intelligence are very ignorant. They assume cyberneticians are just taking stabs in the dark with neural networks. They think we are trying to emulate intelligence as folks once strapped bird wings to their arms to attempt flight. Such ignorant assumptions are wrong. Cyberneticians don't just piddle with computers, we are studying nature and its mathematics and discovering the fundamental processes of cognition, and applying them.

In some cases our abstractions allow us to escape the constraints that nature accidentally stumbled upon. For example: Instead of transporting chemicals via motor proteins which cause or block excitement of a neuron, we can transmit a single floating-point number or voltage level which indicates a change in activation potential. Our voltages or numbers don't require a synapse to be flushed of neurotransmitters before firing again. We understand the necessity and function of various types of neurons for solving certain kinds of problems. A single artificial neuron has axons with positive and negative weight values and can therefore perform both duties at once, rather than having dedicated excitatory and inhibitory neurons, like in a brain. Well-rested and overly excited neurons can become hypersensitive to activity and even fire on their own, or due to nearby eddy currents caused by other neurons firing that are not directly connected to them. We don't even have to emulate this entropic process; it is actually inherent in such systems. This activity 'avalanche' process can cause a sudden increase in chaotic activity in an otherwise internally normalized and mostly externally inactive mind. You see, even machines can be easily "distracted" by the smallest thing and be prone to "daydream" about unrelated things when they are "bored", just like a brain. Interestingly, the capacity for boredom and suspense scales with complexity too.

Unlike TFA's author, I'm not a chauvinist. Firstly, I use "Like a Brain" because "the brain" would imply there's only one form of mind, and only a human chauvinist would think such retarding things. Neither do I make ridiculous assumptions about the "importance" of anything. Every new system that seeks to act "Like a Brain" gets us closer to achieving and surpassing human levels of intelligence and can even help us understand what processes and diseases govern human brains. Every attempt to abstract and emulate some neural process is important in its own way: Scientists can learn from failure. I can consider even the failed experiment as useful, since it eliminates some possibility and directs effort elsewhere. Those experiments that only prove to be partially "like a brain" are not useless, since they may not only illuminate the limitations of the system itself but could also reveal some foundational principle of cognition. We had to discover the feedback loop before we could discover information processing.

Learning is a process. If our "catch phrases" aren't very informative, it's because the listener is too ignorant to understand what we're saying. If pundits don't know what brains are like, it's their own damn fault for choosing to remain fucking ignorant.

Re:Top Down Design is NOT the only approach, FFS. (1)

Beau Cronin (3664749) | about 5 months ago | (#47075517)

Cool rant, bro. All I meant by the "specific structure" and "just so" bits is that the brain is a certain way (I'm partial to mammalian ones with neocortexes), and we can probably learn from that how to create other intelligences. That is, most things about the brain will probably prove to be distractions, but some will be important and helpful. It's sorting things between these two that's tough. I'm about as much a creationist as I am a spiny anteater.

"like a brain" is for AI what "like a bird" ... (1)

Ihlosi (895663) | about 5 months ago | (#47073285)

... is for aircraft.

We've come up with many working designs for flying machines, some of which use the same principles that allow birds to fly, but none of them works like an actual bird.
