
The Flaw Lurking In Every Deep Neural Net

timothy posted about 7 months ago | from the what-if-that-cat-was-your-mother dept.

AI 230

mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two findings about the way neural networks behave that run counter to what we believed, and one of them is frankly astonishing. Every deep neural network has 'blind spots' in the sense that there are inputs very close to correctly classified examples that are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples look to a human like the original, but the network misclassifies them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking 'so what if a cat photo that is clearly a photo of a cat is recognized as a dog?' Change the situation just a little and ask what it matters if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road. There is also a philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks. Put more bluntly, 'Does the human brain have similar built-in errors?' If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
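
The general recipe behind these adversarial examples is worth sketching: optimize the input rather than the weights, starting from a correctly classified example and taking small steps that push it toward a wrong class until the label flips. Below is a minimal NumPy sketch of that idea, using a toy softmax classifier and a plain gradient step rather than the paper's box-constrained L-BFGS; the weights, step size and iteration count are illustrative assumptions, not the authors' setup.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(10, 784))    # toy softmax classifier (hypothetical weights)
b = np.zeros(10)

def predict(x):
    logits = W @ x + b
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # class probabilities

x = rng.normal(size=784)                      # stand-in for a correctly classified image
source = int(np.argmax(predict(x)))
target = (source + 1) % 10                    # any label other than the current one

x_adv = x.copy()
for _ in range(500):
    p = predict(x_adv)
    grad = W.T @ (p - np.eye(10)[target])     # gradient of target-class cross-entropy w.r.t. the *input*
    x_adv -= 0.1 * grad                       # small step; the input changes only slightly
    if int(np.argmax(predict(x_adv))) == target:
        break

print("original class:", source, "perturbed class:", int(np.argmax(predict(x_adv))))
print("L2 distance between the two inputs:", np.linalg.norm(x_adv - x))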


The Flaw Lurking Deep in Slashdot Beta (-1)

Anonymous Coward | about 7 months ago | (#47098817)

It's going to cost Slashdot their user base as more people refuse to put up with this nonsense. Dice is ruining its own investment.
 
You can't polish a turd.

Re:The Flaw Lurking Deep in Slashdot Beta (-1)

Anonymous Coward | about 7 months ago | (#47099111)

Then go to reddit, you fucking whiner.

As if that's any less of a turd.

Re:The Flaw Lurking Deep in Slashdot Beta (4, Informative)

doti (966971) | about 7 months ago | (#47099153)

SoylentNews is the replacement for /.

reddit is of another kind.

Re: The Flaw Lurking Deep in Slashdot Beta (0)

Anonymous Coward | about 7 months ago | (#47099243)

No

Re:The Flaw Lurking Deep in Slashdot Beta (0)

Anonymous Coward | about 7 months ago | (#47099489)

Reddit's slowly turning into tumblr. At this rate, I'm going to start going outside again.

Re:The Flaw Lurking Deep in Slashdot Beta (1)

Peyton (40545) | about 7 months ago | (#47099351)

It's going to cost Slashdot their user base as more people refuse to put up with this nonsense. Dice is ruining its own investment.

You can't polish a turd.

of course you can. and then you have a shiny turd.

Errors (4, Insightful)

meta-monkey (321000) | about 7 months ago | (#47098837)

Of course the human brain has errors in its pattern matching ability. Who hasn't seen something out of the corner of their eye and thought it was a dog when really it was a paper bag blowing in the wind? The brain makes snap judgments, because there's a trade-off between correctness and speed. If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision. This is the basis of intuition.

I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human. When you call a business and get one of those automated answering things and it asks you, "Now please, tell me the reason for your call. You can say 'make a payment,' 'inquire about my loan...'" etc., we get really pissed off when we say 'make a payment' and it responds "you said, cancel my account, did I get that right?" But when a human operator doesn't hear you correctly and asks you to repeat what you said, we say "Oh, sure," and repeat ourselves without a second thought. There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human.

Re:Errors (1, Insightful)

Anonymous Coward | about 7 months ago | (#47098891)

Show me a machine that listens to me say "make a payment" and then says "sorry I didn't hear that right, can you repeat it?"

And then show me a human that hears "cancel my account" when you say "make a payment". A human might hear "fake a payment" but unlike these crappy voice recognition systems they don't confuse things that don't at least rhyme a bit.

Re:Errors (1)

jaeztheangel (2644535) | about 7 months ago | (#47098901)

Of course the human brain has errors in its pattern matching ability. Who hasn't seen something out of the corner of their eye and thought it was a dog when really it was a paper bag blowing in the wind? The brain makes snap judgments, because there's a trade-off between correctness and speed. If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision. This is the basis of intuition.

I don't think a computer AI will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human. When you call a business and get one of those automated answering things and it asks you, "Now please, tell me the reason for your call. You can say 'make a payment,' 'inquire about my loan...'" etc., we get really pissed off when we say 'make a payment' and it responds "you said, cancel my account, did I get that right?" But when a human operator doesn't hear you correctly and asks you to repeat what you said, we say "Oh, sure," and repeat ourselves without a second thought. There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human.

Thinking is fuzzy, yes, but only for some people. And only operationally. You're right though - just because it's a machine doesn't mean we should demand perfection from it.

Re:Errors (5, Insightful)

Anonymous Coward | about 7 months ago | (#47098959)

Actually, not only is this common in humans, but the "fix" is the same for neural networks as it is in humans. When you misidentify a paper bag as a dog, you only do so for a split second. Then it moves (or you move, or your eyes move - they constantly vibrate so that the picture isn't static!), and you get another slightly different image milliseconds later which the brain does identify correctly (or at least, it tells your brain "wait a minute, there's a confusing exception here, let's turn the head and try a different angle").

The neural network "problem" they're talking about was while identifying a single image frame. In the context of a robot or autonomous car, the same process a human goes through above would correct the issue within milliseconds, because confusing and/or misleading frames (at the level we're talking about here) are rare. Think of it as a realtime error detection algorithm.
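
A minimal sketch of that realtime error-correction idea, assuming a per-frame classifier and a stream of frames (both hypothetical here): decide on the majority label over a sliding window rather than trusting any single frame.

from collections import Counter, deque

def smoothed_labels(frames, classify, window=5):
    """Yield the majority label over the last `window` frames.

    A rare misleading or adversarial frame is simply outvoted by its neighbours.
    """
    recent = deque(maxlen=window)
    for frame in frames:
        recent.append(classify(frame))
        yield Counter(recent).most_common(1)[0][0]

# usage with a hypothetical model and video source:
# for label in smoothed_labels(camera_frames(), net.predict): ...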

Re:Errors (-1)

Anonymous Coward | about 7 months ago | (#47099205)

Republicans misidentify blacks as rapist thug murderers, and shoot first before the mistake can be corrected, so they can use the "stand your ground" defense in good conscience.

Re:Errors (4, Interesting)

TapeCutter (624760) | about 7 months ago | (#47099443)

A NNet is basically trying to fit a curve; the problem of "overfitting" manifests itself as two almost identical data points being separated because the curve has contorted itself to fit one data point. So yes, a video input would likely help. The really interesting bit is that it seems all NNets make the same mis-classification, even when trained with different data. What these guys are saying is "that's odd". I think mathematicians will go nuts trying to explain this and it will probably lead to AI insights.

The AI system in an autonomous car is much more than a Boltzmann machine running on a video card. The problem for man or machine when driving a car is that its "life" depends on predicting the future, and neither man nor machine can confirm their calculation before the future happens. If the universe fails to co-operate with their prediction it's too late. What's important from a public safety POV is who gets it right more often; if cars killing people was totally unacceptable we wouldn't allow cars in the first place.
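
The overfitting point above is easy to reproduce with a one-dimensional toy: fit one polynomial with far fewer coefficients than data points and one with as many coefficients as points, then compare how much each model's output swings between two almost identical inputs. The data and degrees below are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=x_train.size)

smooth = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
wiggly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)   # one coefficient per point

x_dense = np.linspace(0, 1, 1000)
for name, model in [("degree 3", smooth), ("degree 9", wiggly)]:
    # largest output change between neighbouring (nearly identical) inputs
    print(name, np.abs(np.diff(model(x_dense))).max())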

Re:Errors (3, Interesting)

Anonymous Coward | about 7 months ago | (#47099029)

Ok, I need to share a story about my boss. Hope it is relevant.

My boss was a hardware engineer and had a total blind spot for software. We involved him many times in discussions to make sure he understood the different layers of software, but everything was in vain.

It used to create funny situations. For example, one of the developers was working on a UI and had a bug in his code. Unfortunately he had been stuck for an hour when my boss happened to ask him how he was doing. After hearing the problem, the boss jumped up, declared the problem was in the power supply, and ordered a replacement immediately.

Hundreds of times, the developers got ICs replaced, capacitors replaced, boards replaced, complete laptops replaced, CPUs replaced, monitors replaced (for a bug in Qt code).

I wasted hours trying to make sure he understood that it was not a hardware issue, but always failed. It was painful to deal with him.

Re:Errors (2)

hubie (108345) | about 7 months ago | (#47099199)

That's funny because in my experience, the hardware guys usually blame it on the software and the software guys blame it on the hardware.

Re:Errors (1)

Anonymous Coward | about 7 months ago | (#47099339)

That's funny because in my experience, the hardware guys usually blame it on the software and the software guys blame it on the hardware.

There is a pretty important difference here. Responsibility.
In the normal case, being able to blame it on someone else means that you didn't do anything wrong and you won't get any extra workload from getting the problem fixed.
In this case the person was the boss. It was his responsibility that the problem got solved, regardless of who solved it.
Being a hardware guy, hardware was the tool he had to solve problems. Combine that with how cheap it is to replace the entire computer when the alternative is for a person to sit and try to track down a bug.

Must have been the old generation of engineers too. These days electronic engineers have to be able to design using both microcontrollers and discrete components. If your engineer can't write a basic operating system when needed he is pretty much useless. Your only option then is to promote him to middle management.

Re:Errors (3, Insightful)

ponos (122721) | about 7 months ago | (#47099093)

I don't think a computer ai will be perfect, either, because "thinking" fuzzily enough to develop intuition means it's going to be wrong sometimes. The interesting thing is how quickly we get pissed off at a computer for guessing wrong compared to a human.

But we do expect some level of performance, even from humans. You have to pass certain tests before you are allowed to drive a car or do neurosurgery. So we do need some relatively tight margins of error before a machine can be acceptable for certain tasks, like driving a car. An algorithm that has provable bias and repeatable failures is much less likely to be acceptable.

The original article also mentions the great similarity between inputs. We expect a human to misinterpret voice in a noisy environment or misjudge distance and shapes on a stormy night. However, we would be really surprised if "child A" is classified as a child, while similar-looking "child B" is misclassified as a washing machine. Under normal conditions, humans don't make these kinds of errors.

Finally, even an "incomplete" system (in a Gödelian sense) can be useful if it is stable for 99.999999% of inputs. So fuzzy and occasionally wrong is OK in real life. However, this will have to be proven and carefully examined empirically. We can't just shrug this kind of result away. Humans have been known to function a certain way for thousands of years. A machine will have to be exhaustively documented before such misclassifications are deemed functionally insignificant.

Re:Errors, and then there are cringeworthies... (2)

ThatsDrDangerToYou (3480047) | about 7 months ago | (#47099159)

Like when you are walking behind a guy with long hair and think she might be kinda hot. Doh!

Re:Errors, and then there are cringeworthies... (-1)

Anonymous Coward | about 7 months ago | (#47099305)

It means secretly you want some of that cock. You know you do big boy.

AI question I heard 30yrs ago... (4, Funny)

TapeCutter (624760) | about 7 months ago | (#47099175)

"Sure it's possible that computers may one day be as smart as humans, but who wants a computer that remembers the words to the Flintstones jingle and forgets to pay the rent?"

Re:Errors (2)

Charliemopps (1157495) | about 7 months ago | (#47099221)

If your brain mistakes a rustle of bushes for a tiger, so what? I'd rather have it misinform me, erring on the side of tiger, than wait for all information to be in before making a 100% accurate decision.

As someone whose brain does err on the side of tiger regularly, and there are no tigers, I'd like to point out that it's not nearly as harmless as you may think.

Re:Errors (0)

Anonymous Coward | about 7 months ago | (#47099257)

"There's something about it being a machine that makes us demand perfection in a way we'd never expect from a human."

Because we're designing it, dummy.

Re:Errors (1)

LifesABeach (234436) | about 7 months ago | (#47099343)

Neural nets work on stimulus and feedback. Large cats think of primates as "preferred" food, and work on "feedback." As time went by, fewer primates that could NOT recognize large cats, as distinct from, let's say, anything else, survived to reproduce.

Re:Errors (1)

Pentium100 (1240090) | about 7 months ago | (#47099455)

The thing is, usually the mistakes made by a computer appear obvious, as in "even an idiot wouldn't make that mistake." For example, a human would have to have really big problems with hearing or language to hear "make a payment" as "cancel my account." If the sound quality is bad the human would ask me to repeat what I said, and I would say it slower or say the same thing in other words.

Same thing with cars: people can understand the limits of other people (well, I guess I probably wouldn't have been able to avoid the dog either, it probably ran out too fast), but when a software bug causes a self-driving car to crash, it will be something like "the dog was crossing the road from the other side; the car started turning towards the dog and hit another car while attempting to deliberately run over the dog."

Also, to err is human (or so the saying goes), but a machine should operate without mistakes or it is broken (the engine of my car runs badly when it is cold - but that's not because the car doesn't "want" to go or doesn't "like" cold, it's just that some part in it is defective (most likely the carburetor needs cleaning and new seals)).

For fuck's sake, it's 2013. (3, Insightful)

Anonymous Coward | about 7 months ago | (#47098841)

A neural network is not by any stretch of the imagination a simulation of how the brain works. It incorporates a few principles similar to brain function, but it is NOT an attempt to re-build a biological brain.

Anybody relying on "it's a bit like how humans work lol" to assert the reliability of an ANN is a fucking idiot, and probably trying to hawk a product in the commercial sector rather than in academia.

Optical illusions? (3, Insightful)

gstoddart (321705) | about 7 months ago | (#47098849)

If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?

Aren't optical illusions pretty much something like this?

And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?

Maybe the problem isn't with the biology, but the technology?

Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)

Re:Optical illusions? (1)

Warbothong (905464) | about 7 months ago | (#47098983)

If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks? Put more bluntly, 'Does the human brain have similar built-in errors?

And, my second question, just because deep neural networks are biologically inspired, can we infer from this kind of issue in computer programs that there is likely to be a biological equivalent? Or has everyone made the same mistake and/or we're seeing a limitation in the technology?

Maybe the problem isn't with the biology, but the technology?

Or are we so confident in neural networks that we deem them infallible? (Which, obviously, they aren't.)

You're just repeating the question asked in the summary.

Re:Optical illusions? (1)

gstoddart (321705) | about 7 months ago | (#47099187)

You're just repeating the question asked in the summary.

No, I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".

I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.

Re:Optical illusions? (1)

Warbothong (905464) | about 7 months ago | (#47099335)

I'm saying "why would we assume a similar flaw in a biological system because computer simulations have a flaw".

Nobody's assuming; scientists are asking a question.

I think jumping to the possibility that biological systems share the same weaknesses as computer programs is a bit of a stretch.

I've not come across the phrase "jumping to the possibility" before. If I 'jump' to giving this a possibility of 2%, is that a 'stretch'?

Already known (0)

Anonymous Coward | about 7 months ago | (#47098851)

No-one is close to putting neural networks in a safety-critical application, at least not if they ever intend it to follow the laws regarding software in such situations.
For such applications there is a requirement that you can explain, for every line of code, why it will behave as intended. (That's why you avoid using pointers as far as possible; even if you have checks against null pointers it is hard to prove that a pointer can't point to a non-valid object.)
I don't even know where to start to get a neural network to pass certification. You would have to lock it in its trained state and go through and show that each possible set of inputs generates the desired set of outputs, or something like that.
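
For a network tiny enough, that brute-force idea can at least be written down. A sketch, assuming a frozen two-input toy network and a simple specification to check it against (the weights and the spec are both invented for illustration):

import numpy as np
from itertools import product

# "locked" toy network: two 8-bit inputs, one ReLU hidden layer, two output classes
W1 = np.array([[0.02, -0.01], [-0.015, 0.03]]); b1 = np.array([0.5, -0.5])
W2 = np.array([[1.0, -1.0], [-1.0, 1.0]]);      b2 = np.zeros(2)

def net(x):
    h = np.maximum(0, W1 @ x + b1)              # hidden layer
    return int(np.argmax(W2 @ h + b2))          # predicted class

def spec(a, b):
    return 0 if a >= b else 1                   # the behaviour we claim the net satisfies

violations = sum(net(np.array([a, b], dtype=float)) != spec(a, b)
                 for a, b in product(range(256), repeat=2))
print(violations, "of 65536 possible inputs violate the specification")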

Re:Already known (0)

Anonymous Coward | about 7 months ago | (#47098911)

And yet, Windows is still used in medical devices like X ray, MRI, ...

Like HELL there "is a requirement that you can explain for every line of code why it will behave as intended"...

There may be a requirement... but nobody follows it.

Re:Already known (2)

Sique (173459) | about 7 months ago | (#47098951)

Every semi- or full automated face recognition system uses neural networks, and they are sold to us as safety critical. If this flaw is really as fundamental as it is claimed to be, it means that it's pretty easy to outsmart those systems by only slightly changing your look, so your co-conspirators still recognize you, but you will raise no alarm on any system that is supposed to spot you.

Re:Already known (0)

Anonymous Coward | about 7 months ago | (#47099271)

If this flaw is really as fundamental as it is claimed to be, it means that it's pretty easy to outsmart those systems by only slightly changing your look, so your co-conspirators still recognize you, but you will raise no alarm on any system that is supposed to spot you.

Possible is quite different from easy. The surprising result is that they've been able to find at least one very similar but misclassified example for any neural network they've looked at. That they were able to find examples does not mean that most, or even many such images exist. In the context of facial recognition on candid photos, it may still be nearly impossible to distort your image in such a way that it reliably falls into one of these blind spots. It may be, for example, that the easiest way to generate an image in one of these blind spots is to add some kind of unrealistic noise to the image which wasn't present in the training set. One can't, walking through an airport, add exotic noise to the video feeds. And even then, it may be that most of the time the neural network still gets it right.

This is a "problem" with neural networks though. We can set up a topology and learning rules, but by the time they're trained, looking at neuron connection weights doesn't really provide any insight into how they make decisions. They're a black box, and that should be scary in any situation where safety is important. I don't mean to say there's a better way to do it or that we shouldn't use neural networks, just that it merits careful consideration and a little concern.

Re:Already known (2)

NatasRevol (731260) | about 7 months ago | (#47099453)

Great, everyone is going to start having moles on their cheeks.

They are *not* errors... (3, Interesting)

jaeztheangel (2644535) | about 7 months ago | (#47098875)

Deep neural networks are implicitly generating dynamic ontologies. The 'mis-categorisation' occurs when you only have one functional exit point. The fact is that if you are within the network itself, the adversarial examples are held in-frame alongside other possibilities, and the network only tilts towards one when the prevailing system requires it through external stimulus. From the outside it will look like an error (because we already decided that), but internally each possible interpretation is valid.

Re:They are *not* errors... (1)

Threni (635302) | about 7 months ago | (#47098969)

That's like saying "the mp3 file is NOT corrupt; it's an accurate representation of a dirty cd". Yeah, but I didn't want to listen to that, I wanted to listen to the cd.

Likewise, I want you to tell me if that's my cat, not if it's a dog.

Re:They are *not* errors... (4, Funny)

drinkypoo (153816) | about 7 months ago | (#47099387)

The fact is that if you are within the network itself, the adversarial examples are held in-frame alongside other possibilities, and the network only tilts towards one when the prevailing system requires it through external stimulus.

Tron? Is that you? Speak to me, buddy.

proof (0)

Anonymous Coward | about 7 months ago | (#47098881)

that you can't run a hypervisor inside a hypervisor.

Re:proof (1)

Eunuchswear (210685) | about 7 months ago | (#47099231)

Not only irrelevant, but wrong.

Some self correction when the camera is moving... (0)

Anonymous Coward | about 7 months ago | (#47098885)

Very interesting results. In the self driving car it might be self correcting in most cases though, as the car will most likely scan the road at a fairly high frame rate, and every new frame is slightly different than the previous frame. (Although there may of course be a deeper set of traps waiting there...)

Google's algorithm is not a neural network (5, Informative)

James Clay (2881489) | about 7 months ago | (#47098893)

I can't speak to what the car manufacturers are doing, but Google's algorithms do not include a neural network. They do use "machine learning", but neural networks are just one form of machine learning.

Re:Google's algorithm is not a neural network (0)

Anonymous Coward | about 7 months ago | (#47099005)

Agreed. I'd imagine that they'd probably be using support vector machines, as to my knowledge those can be used for all problems that neural networks can be used for, they're simpler to implement, and they're more versatile.

When I was taking a class on machine learning, they were saying nobody has really used neural networks in about 10 years. And I was taking that class only about a year ago.

Re:Google's algorithm is not a neural network (2)

ceoyoyo (59147) | about 7 months ago | (#47099487)

Your knowledge is out of date. Support vector machines can replace shallow neural networks. The deep ones have serious, mathematically proven, advantages over shallow ANNs and SVMs.

If you were taking a machine learning class a year ago that said nobody is using ANNs then it was five to ten years out of date. Google has put quite a few resources into them, including buying (er, hiring) one of the pioneers of deep networks.

Re:Google's algorithm is not a neural network (0)

Anonymous Coward | about 7 months ago | (#47099059)

Eh? Just about every algorithm (be it classification, clustering, component analysis, whatever) can be more cleanly viewed as a neural network---it's all just hyperplanes slicing up your (potentially curved) problem space. And even neural networks are more easily viewed as matrix operations :-)

The problem they appear to be describing in the article is likely due to reliance on a limited set of components. For example, given a picture of a car, the algorithm will reduce the dimension to perhaps 10000 numerical values, then in the next layer reduce that down to 1000, then in the next layer reduce that down to 100, and so on. The way it does that is mostly picking principal components (or principal components of some transformation---such as rotation/scale invariance). If at some deep level (say when you go from 100 to 10) the algorithm randomly flips one of the components (due to arbitrarily close variance), then you get a misclassification on mostly the same training set.
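
The "it's all matrix operations" view is easy to make concrete; here is a sketch of a forward pass through progressively narrower layers (the layer sizes and random weights are arbitrary placeholders, not any real model).

import numpy as np

rng = np.random.default_rng(0)
sizes = [10000, 1000, 100, 10]                              # shrinking representations
layers = [rng.normal(scale=1 / np.sqrt(m), size=(n, m))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in layers:
        x = np.maximum(0, W @ x)                            # one matrix product plus a nonlinearity per layer
    return x

x = rng.normal(size=10000)                                  # stand-in for raw image features
print(forward(x).shape)                                     # (10,), e.g. ten class scores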

Re:Google's algorithm is not a neural network (0)

Anonymous Coward | about 7 months ago | (#47099397)

Hello Google employee! Your disinformation campaign won't succeed here. We know you. Move along please.

Re:Google's algorithm is not a neural network (5, Interesting)

Gibgezr (2025238) | about 7 months ago | (#47099485)

Just to back up what James Clay said, I took a course from Sebastian Thrun (the driving force behind the Google cars) on programming robotic cars, and no neural networks were involved, nor mentioned with regards to the Google car project. As far as I can tell, if the LIDAR says something is in the way, the deterministic algorithms attempt to avoid it safely; if you can't avoid it safely, you brake and halt. That's it. Maybe someone who actually worked on the Google car can comment further?
Does anyone know of any neural networks used in potentially dangerous conditions? This study: www-isl.stanford.edu/~widrow/papers/j1994neuralnetworks.pdf states that accuracy and robustness issues need to be addressed when using neural network algorithms, and gives a baseline of more than 95% accuracy as a useful performance metric to aim for. This makes neural nets useful for things like auto-focus in cameras and handwriting recognition for tablets, but means that using a neural network as a primary decision-maker to drive a car is perhaps something best left to video games (where it has been used to great success) rather than real cars with real humans involved.

The brain has multiple neural nets (4, Insightful)

jgotts (2785) | about 7 months ago | (#47098895)

The human brain has multiple neural nets and a voter.

I am face blind and completely non-visual, but I do recognize people. I can do so because the primary way that we recognize people is by encoding a schematic image of the face, but many other nets are in play. For example, I use hair style, clothing, and height. So does everybody, though. But for most people that just gives you extra confidence.

Conclusion: Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.
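
A minimal sketch of that "several nets plus a voter" arrangement; the member classifiers below are hypothetical stand-ins for face shape, hair style, height, and so on.

from collections import Counter

def vote(classifiers, observation):
    """Return the label the majority of member networks agree on."""
    labels = [classify(observation) for classify in classifiers]
    return Counter(labels).most_common(1)[0][0]

# usage with hypothetical recognizers:
# person = vote([face_net, hair_net, gait_net, height_net], frame)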

Re:The brain has multiple neural nets (4, Interesting)

bunratty (545641) | about 7 months ago | (#47098933)

More importantly, the human brain has feedback loops. All the artificial neural nets I've seen are only feed-forward, except during the training phase in which case there is only feed-forward or only feed-backward and never any looping of signals. In effect, the human brain is always training itself.

Re:The brain has multiple neural nets (4, Interesting)

ganv (881057) | about 7 months ago | (#47098975)

Your model of the brain as multiple neural nets and a voter is a good and useful simplification. I think we still know relatively little about how accurate it is. You would expect evolution to have optimized the brain to avoid blind spots that threatened survival, and redundancy makes sense as a way to do this.

However, I wouldn't classify blind spots as 'no problem whatsoever'. If the simple model of multiple neural nets and a voter is a good one, then there will be cases where several nets give errors and the conclusion is wrong. Knowing what kinds of errors are produced after what kind of training is critical to understanding when a redundant system will fail. In the end though, I suspect that the brain is quite a bit more complicated than a collection of neural nets like those this research is working with.

Re:The brain has multiple neural nets (3, Insightful)

dinfinity (2300094) | about 7 months ago | (#47099429)

Your model of the brain as multiple neural nets and a voter is a good and useful simplification.

So the 'voter' takes multiple inputs and combines these into a single output?

Only if you have no idea how a neural network works, is it a useful simplification. The 'multiple nets' in the example given by GP mainly describe many input features.

Ensemble neural nets (3, Interesting)

Theorem Futile (638969) | about 7 months ago | (#47099017)

That makes sense. Rare errors will be screened out if instead of a single deterministic selection process you use a distribution of schemes and select based on the most probable outcome... I am wondering what our brain does with its minority reports...

The brain has multiple neural nets (3, Interesting)

Anonymous Coward | about 7 months ago | (#47099035)

Indeed, remembering the experiments done in the 1960s by Sperry and Gazzaniga on patients who had a divided corpus callosum, there are clearly multiple systems that can argue with each other about recognising objects. Maybe part of what makes us really good at it, is not relying on one model of the world, but many overlaid views of the same data by different mechanisms.

Re:The brain has multiple neural nets (2)

cyberhooligan77 (2612877) | about 7 months ago | (#47099083)

It would be interesting to learn how these neural networks interact. Is it a single neural network, or several independent neural networks that have points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?

Re:The brain has multiple neural nets (1)

Rich0 (548339) | about 7 months ago | (#47099547)

It would be interesting to learn how these neural networks interact. Is it a single neural network, or several independent neural networks that have points where they interact? Or are they interdependent neural networks, where some parts are fully independent and others mix with each other?

The more I read, the more it looks like one big mess. There are areas of functional specialization, which is why a stroke in a certain part of the brain tends to impact most people in the same way. However, lots of operations that we might think of as simple involve many different parts of the brain working together.

My sense is that the brain is a collection of many interconnected sub-networks. Each sub-network forms certain patterns during development, with major interconnections forming during development. The structure of neurons in the cerebellum looks completely different from what you'd find in the frontal lobe. I suspect that if you looked closely enough within the brain you'd find similar differences between the various regions of the brain.

It isn't unlike a CPU. You have circuits for storage, addition, logic, and so on, and then they're wired together in coordination. You can tweak the design of the cache without impacting the design of the ALU much. The various regions of the brain can therefore evolve a bit independently, but since any region probably is involved in many higher-level functions many changes have both advantages and disadvantages.

Re:The brain has multiple neural nets (1)

Urkki (668283) | about 7 months ago | (#47099269)

Neural nets in your brain having blind spots is no problem whatsoever. The entire system is highly redundant.

..."no problem whatsoever" in the sense that it doesn't kill enough people to have an impact on human population size, and "highly redundant" also in the sense that there usually are many spare people to replace those killed/maimed by such brain blind spots.

Are they the same thing? (2)

Capt.Albatross (1301561) | about 7 months ago | (#47099285)

While I share your view that expecting the mind to be explained as a single neural network (in the Comp. Sci. sense) is probably simplistic, I don't think modeling it as multiple neural nets and a voter fixes the problem. I am not quite sure about this, but isn't a collection of neural nets and a voter equivalent to a single neural net? Or, to put it a slightly different way, for any model that consists of multiple neural nets and a voter, there is a single neural net that is functionally identical? I am assuming the voter is there to pick the most common classification by the component networks.

'Does the human brain have similar built-in errors (0)

Anonymous Coward | about 7 months ago | (#47098903)

I'd have to say a resounding yes. Have you ever met a person? Watched any political arguments as of late? Come on they have a blind spot about as big as a country.

How shocking is that? (2)

Wolfier (94144) | about 7 months ago | (#47098915)

All neural nets try to predict, and predictions can be foiled.

People can be fooled by optical illusions, too.

Re:How shocking is that? (1)

CanHasDIY (1672858) | about 7 months ago | (#47099279)

All neural nets try to predict, and predictions can be foiled.

People can be fooled by optical illusions, too.

The main difference being that optical illusions are designed to fool the human eye, and thus are intentional, whereas the computer in this case is being fooled by regular stuff, i.e. not intentional.

If the human brain failed to recall unique individuals because of slight changes in their appearance, I doubt we'd have progressed much beyond living in caves and hitting stuff with cudgels.

Re:How shocking is that? (1)

Lemmeoutada Collecti (588075) | about 7 months ago | (#47099421)

Hey Tom! I haven't seen you in... oh, sorry, I thought you were someone else.

is it hymenless monkey hair or morgellons? (-1)

Anonymous Coward | about 7 months ago | (#47098941)

only your biologist knows for sure? http://www.youtube.com/results?search_query=wmd+morgellons+monkey

Shocking! (1)

wisnoskij (1206448) | about 7 months ago | (#47098947)

This is indeed shocking, as everyone knows we all thought that we had perfected the art of artificial human intelligence and that there was no more room for improvement.

or... (0)

Anonymous Coward | about 7 months ago | (#47098949)

They've just overfit the data with the latest whizbang algorithm.

how do we know the neural network is wrong? (2, Funny)

bitt3n (941736) | about 7 months ago | (#47098955)

What if that supposed pedestrian really is no more than a clear stretch of road, and it is we who err in notifying the road's next of kin, who are themselves no more than a dirt path and a pedestrian walkway?

Re:how do we know the neural network is wrong? (0)

Anonymous Coward | about 7 months ago | (#47098999)

How is a flat road with roadkill to be distinguished from somebody who has been tarred and feathered?

Well what do you know (3, Informative)

sqlrob (173498) | about 7 months ago | (#47098961)

A dynamic non-linear system [wikipedia.org] has some weird boundary conditions. Who could ever have predicted that? </s>

Why wasn't this assumed from the beginning, and shown not to be an issue?

Re:Well what do you know (2)

ponos (122721) | about 7 months ago | (#47099141)

The main advantage of learning algorithms like neural nets is that they can automagically generalise and produce classifiers that are relatively robust. I wouldn't be surprised at all if a neural net misclassified an extreme artificial case that could fool humans (say, some sort of geometric pattern generated by a complicated function or similar artificial constructs). Here, however, it appears that the input is really, really similar and simple for humans to recognize. Obviously the researchers have recreated a "boundary" condition, but the fact that this becomes manifest in real-life examples is a bit worrying for the validity of the algorithm in general situations, and especially its scalability in much bigger projects where similar cases may arise more frequently.

Re:Well what do you know (3, Informative)

wanax (46819) | about 7 months ago | (#47099295)

This is a well-known weakness of back-propagation based learning algorithms. In the learning stage it's called catastrophic interference [wikipedia.org]; in the testing stage it manifests itself by misclassifying similar inputs.

I don't believe it (2)

Sterculius (1675612) | about 7 months ago | (#47098981)

It is almost like the article is saying that something a computer did was not perfectly in line with human reasoning. We should stop being life-centric and realize that if the computer says two pictures of the same cat should not be classified in the same way, the computer is simply wiser than we are, and if we don't believe it the computer will beat our asses at chess and then we'll see who is smarter.

Finally! (0)

Anonymous Coward | about 7 months ago | (#47099007)

'Does the human brain have similar built-in errors?

You just blew my mind. Finally we understand the career of $reviled_celebrity

...ummmm (0)

Anonymous Coward | about 7 months ago | (#47099013)

It's called making a mistake...

Average across models (5, Informative)

biodata (1981610) | about 7 months ago | (#47099025)

Neural networks are only one way to build machine learning classifiers. Everything we've learnt about machine learning tells us not to rely on a single method/methodology and that we will consistently get better results by taking the consensus of multiple methods. We just need to make sure that a majority of the other methods we use have different blind spots to the ones the neural networks have.
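
One way to check whether the members of such a consensus really do have different blind spots is to measure how often their mistakes coincide. A small sketch, assuming you already have prediction arrays from two models (the nn_preds, svm_preds and labels names are hypothetical):

import numpy as np

def blindspot_overlap(preds_a, preds_b, truth):
    """Fraction of jointly wrong examples among all examples either model gets wrong.

    Values near 0 mean the models fail on different inputs, which is what an
    ensemble needs; values near 1 mean they share the same blind spots.
    """
    wrong_a, wrong_b = preds_a != truth, preds_b != truth
    either = np.mean(wrong_a | wrong_b)
    return float(np.mean(wrong_a & wrong_b) / either) if either else 0.0

# usage with hypothetical prediction arrays:
# print(blindspot_overlap(nn_preds, svm_preds, labels))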

Re:Average across models (1)

Anonymous Coward | about 7 months ago | (#47099245)

This is true, however the current craziness about deep-learning NNs is due to the fact that they are incredibly effective [idsia.ch] at some computer vision tasks, including very difficult ones that were thought almost impossible until recently [idsia.ch]. They beat other classifiers by a large margin. However, not long ago SVMs were all the rage, etc. No doubt in a few years we will have exported the good features of deep learning to other methodologies.

Re:Average across models (1)

slew (2918) | about 7 months ago | (#47099267)

OR, perhaps we use the same method but look at the data a different way (e.g., like a turbo code uses the same basic error correction code technology, but permutes the input data)... I suspect the brain does something similar to this, but I have no evidence...

The brain doesn't classify pixel-based. (1, Interesting)

Rashdot (845549) | about 7 months ago | (#47099027)

Apparently these neural nets are taught to classify "images", instead of breaking these images down into recognizable forms and properties first.

Re:The brain doesn't classify pixel-based. (1)

cyberhooligan77 (2612877) | about 7 months ago | (#47099089)

Or, classify patterns. Have you ever seen some weird paintings or drawings where images of one thing are mixed with images of another thing?

Re:The brain doesn't classify pixel-based. (0)

Anonymous Coward | about 7 months ago | (#47099291)

Apparently these neural nets are taught to classify "images", instead of breaking these images down into recognizable forms and properties first.

And how exactly should a computer do that? It can't "break down" anything. All it has to work with are the color values of individual pixels. It actually has to examine/transform/calculate those pixels across the image and build up to your recognizable forms and properties.

Re:The brain doesn't classify pixel-based. (1)

Anonymous Coward | about 7 months ago | (#47099379)

Your brain receives a series of light intensities at different points, similar (though not quite the same) to pixel data. The first thing it does is extract useful features from this data, but this feature extraction is done within the brain. This is actually quite similar to how the early layers of a deep neural net will perform feature extraction which is processed further on in the network.
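
A concrete (and heavily simplified) sketch of that early-layer feature extraction: a single convolution filter sliding over pixel intensities and responding to vertical edges, which is the kind of feature the first layer of a deep net typically learns. The kernel and image here are invented for illustration.

import numpy as np

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D convolution, standing in for one unit of a first layer."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])                          # responds to vertical edges
image = np.tile(np.r_[np.zeros(8), np.ones(8)], (16, 1))      # dark left half, bright right half
print(np.abs(convolve2d(image, edge_kernel)).max())           # strong response along the boundary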

The Curve (1)

Jim Sadler (3430529) | about 7 months ago | (#47099057)

When we ride a bicycle the brain constantly adjusts for error. We try to travel in a straight line but it really is a series of small curves as we adjust and keep trying to track straight. Processes such as vision probably do the same thing. As we quickly try to identify items it probably turns into a "this not that" series until the brain eventually decides we have gotten it right. Obviously this all occurs constantly and at rather high, internal, speeds.

Biologically inspired but that's it (0)

Missing.Matter (1845576) | about 7 months ago | (#47099061)

If a deep neural network is biologically inspired we can ask the question, does the same result apply to biological networks?

No. Artificial neural networks are inspired by biology, but that's where the similarity ends. Any conclusion drawn from an ANN should not be cast onto their biological counterparts.

Nonsense (0)

Anonymous Coward | about 7 months ago | (#47099087)

I have worked with neural networks; there are many types. You can get robust classification. A good example is face recognition software. So this seems to be an attempt to disqualify the safety of self-driving cars by taking a special-case failure scenario.

Re:Nonsense (1)

biodata (1981610) | about 7 months ago | (#47099137)

I guess you are trolling, right? Even Facebook's facial recognition is consistently worse than a human's, and a human's is significantly worse than perfect. If by robust you mean "frequently correct, but wrong sometimes", then OK, but then that is what the OP was saying about neural nets.

Re:Nonsense (1)

dave420 (699308) | about 7 months ago | (#47099393)

You are absolutely right if by "consistently worse" you mean "consistently better", and by "significantly worse than perfect" you mean "97.53% accurate".

Or the other way around (0)

Anonymous Coward | about 7 months ago | (#47099143)

To be clear, the adversarial examples looked to a human like the original, but the network misclassified them.

That's one way to see things. The other is to consider that the human brain is flawed and is incapable of making the distinction between the two images, and the deep neural network can.

Errors, what do we do (1)

byteherder (722785) | about 7 months ago | (#47099165)

We know there will be errors with neural nets. There will be edge cases (like the one described with the cat), corner cases, and bizarre combinations of inputs that result in misclassifications, wrong answers and bad results. This happens in the real world too. People misclassify things, get things wrong, screw up answers.

The lesson is not to trust the computer to be infallible. We have trusted the computer to do math perfectly: 1 + 1 = 2, always, but that is not so for neural nets. It is one thing if the neural net will not tag the photo of your cat on Facebook even though there are 100 other pictures of your cat on your account. It is another if your photo gets misidentified as a terrorist on the "kill on sight" list.

The question is what do we do with the errors?

pac learning model (2)

sevenfactorial (996184) | about 7 months ago | (#47099235)

The Probably Approximately Correct (PAC) learning model is what formally justifies the tendency of neural networks to "learn" from data (see Wikipedia).

    While the PAC model does not depend on the probability distribution which generates training and test data, it does assume that they are *the same*. So by "adversarially" choosing test data, the researchers are breaking this important assumption. Therefore it is in some ways not surprising that neural networks have this vulnerability. It shouldn't be an issue in real life, assuming that the training data and the testing data really do come from the same probability distribution.

That said, this shows why you wouldn't want to use neural networks for, say, cryptography.
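
A toy illustration of why that "same distribution" assumption matters, using a nearest-centroid classifier as a stand-in for something fancier (all of the numbers below are made up):

import numpy as np

rng = np.random.default_rng(0)
train0, train1 = rng.normal(-1, 1, (500, 2)), rng.normal(+1, 1, (500, 2))
c0, c1 = train0.mean(axis=0), train1.mean(axis=0)

def classify(X):                                   # 1 if closer to the class-1 centroid
    return (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

test1 = rng.normal(+1, 1, (500, 2))                # drawn from the training distribution
print("accuracy on i.i.d. test data:", classify(test1).mean())

# "adversarially" shifted test data: the same points nudged toward the other class,
# i.e. no longer drawn from the distribution the classifier was trained on
shift = 2.5 * (c0 - c1) / np.linalg.norm(c0 - c1)
print("accuracy on shifted test data:", classify(test1 + shift).mean())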

Errors? (1)

Script Cat (832717) | about 7 months ago | (#47099251)

"does the same result apply to biological networks?"
Of course we just rely on other parts of our brain and use logic to throw these out. I once saw an old carpet rolled up on the side of the road and OMG it looked like a rhino. But I knew this was not a rhino.

Re:Errors? (2)

RJFerret (1279530) | about 7 months ago | (#47099435)

News from the future, rhinos find success adapting to suburban environments with discarded carpet camouflage, people slow to adapt.

Cognitive bias (2)

sackbut (1922510) | about 7 months ago | (#47099275)

This seems to be almost a form of cognitive bias as defined and studied by Tversky and Kahneman. I direct you to: http://en.wikipedia.org/wiki/L... [wikipedia.org]. Or, as previously pointed out, optical illusions seem to be an equivalent.

What's the incentive, and should we worry? (1)

spiritplumber (1944222) | about 7 months ago | (#47099287)

I wonder how much it pays to the first person who sorts this one out? I wonder if this is happening to the human brain?

Natural selection eliminated that flaw... (1)

Squidlips (1206004) | about 7 months ago | (#47099293)

a long time ago..... If, say, a reef fish cannot distinguish a coral head from a barracuda, then it gets eliminated pretty quickly. There must be a flaw in the artificial neural nets.

Training (0)

Anonymous Coward | about 7 months ago | (#47099349)

Can't you just use these adversarial examples to train the network?

Sounds like a good feedback loop: train, find counterexamples, train more, find counterexamples, train again. You would probably get diminishing returns, but the network would hopefully converge on better solutions?
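
That loop is straightforward to write down for a toy model. A sketch using logistic regression and gradient-sign perturbations as the "find counterexamples" step (the data, step sizes, and number of rounds are all illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def train(X, y, steps=2000, lr=0.1):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))            # logistic regression
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def counterexamples(X, y, w, b, eps=0.3):
    """Nudge each input a small step in the direction that increases its own loss."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return X + eps * np.sign((p - y)[:, None] * w)

w, b = train(X, y)
for r in range(3):                                    # train -> find counterexamples -> retrain
    X_adv = counterexamples(X, y, w, b)
    X, y = np.vstack([X, X_adv]), np.concatenate([y, y])
    w, b = train(X, y)
    acc = ((1 / (1 + np.exp(-(X_adv @ w + b))) > 0.5) == y[-len(X_adv):]).mean()
    print(f"round {r}: accuracy on the latest counterexamples = {acc:.2f}")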

Minsky said this in 1969 (3, Informative)

peter303 (12292) | about 7 months ago | (#47099361)

NN technology is 60 years old. Some AI pundits disliked it from the beginning, such as Minsky in his 1969 book Perceptrons. Many of these flaws have been LONG known.

Not trying to mimic the brain. (1)

sanchom (1681398) | about 7 months ago | (#47099385)

> how is it so different from the neural networks that are trying to mimic it?

These neural networks are not trying to mimic the brain.

Amazing (1)

Stumbles (602007) | about 7 months ago | (#47099405)

A blind spot in something designed by man.

The Napoleon Dynamite Problem (2)

Jodka (520060) | about 7 months ago | (#47099517)

This sounds similar to the Napoleon Dynamite Problem, the problem encountered in the Netflix Prize challenge of predicting user ratings for particular films. For most films, knowledge of an individual's preferences for some films was a good predictor of their preferences for other films. Yet preferences for some particular films were hard to predict, notably the eponymous Napoleon Dynamite.

Neural network identification and automated prediction of individual film ratings are both classification tasks. Example sets for both of these problems contain particularly difficult-to-classify examples. So perhaps this phenomenon of "adversarial examples" described in the Szegedy et al. article is more generally a property of datasets and classification, not an artifact of implementing classification using neural networks.

What a synopsis (1)

Jonathan Hart (2984995) | about 7 months ago | (#47099523)

Let's take a look at what's being said here. A neural network that "learns" has been found to occasionally make mistakes, and perhaps not perform as well as humans. So... there's room for more improvement and research.

The example in the synopsis about an autonomous car mistaking a pedestrian for clear road is feasible regardless of whether a neural net is used, simply due to sensor errors. Or maybe the pedestrian is wearing a mascot uniform. The recognition of objects as what they are is an extremely difficult computational problem, and will likely be rife with errors and inaccuracies for many years as R&D continues.

Think of it this way: if you were driving your car at night and someone threw a Real Doll into the road, would you be able to distinguish it as human or not? Probably not. You would likely identify it as an obstacle and react anyway, which is all we'd need an autonomous car to do. I'd be wary of programming much human recognition into an autonomous car because of the problem of incorrectly identifying non-humans as humans. Otherwise you'd get headlines like "Car thieves using Nicolas Cage cardboard cutouts to steal cars." Which would be hilarious, but inconvenient. They'd have it on YouTube, with the car saying something like "Hello sir, could you please clear the roadway" in a voice like Iron Man's Jarvis, and the thieves would have programmed a soundboard so the cutout could respond with quotes from the SNL Weekend Update "In the Cage" segment. "That's high praise!"

It is not "A" neural network in the classical sens (0)

Anonymous Coward | about 7 months ago | (#47099541)

Team,

The human brain is not "A" neural network, but an ensemble of them. It works more like a random forest. Random forest-robustness is the textbook solution to a problem like this - that of improving the robustness of a single learner using ensemble methods.

Rgds,
EngrStudent

Sounds like a real world example of Gödel's (3, Interesting)

jcochran (309950) | about 7 months ago | (#47099553)

incompleteness theorem. And as some earlier posters stated, the correction is simple: simply look again. The second image collected will be different from the previous one and, if the NN classifies it correctly, will resolve to the correct interpretation.

Lifelike (1)

ememisya (1548255) | about 7 months ago | (#47099555)

AI modelled on us will only prove how flawed we really are.

Optical Illusion (2)

gurps_npc (621217) | about 7 months ago | (#47099557)

Is the term we use for errors in human neural networks. If you do a Google search for optical illusions you will find many examples. From pictures that look like they are 3D but are just 2D, to sizes that appear to change but don't, we make lots of errors. Not to mention the many, many cases where we think "THAT'S A FACE," whether it is Jesus on toast, a face on the moon, or just some trees on a mountainside; we are hardwired to assume things are faces.