
Qualcomm to Build Neuro-Inspired Chips

samzenpus posted about a year ago | from the skynet-approved dept.

Technology

Bismillah writes "At the MIT Technology Review EmTech conference, Qualcomm announced that the company and partners will design and make neural processing units, or NPUs, starting next year. NPUs mimic neural structures and the way the brain processes information, massively parallel yet extremely power efficient, and may end up in self-learning devices."


43 comments


Obligatory (4, Funny)

MassiveForces (991813) | about a year ago | (#45099343)

"At the MIT Technology Review EmTech conference, Cyberdyne announced that the company and partners will design and make neural processing units or NPUs starting next year."

Re:Obligatory (0)

Anonymous Coward | about a year ago | (#45099515)

thank you kindly

Re:Obligatory (1)

connor4312 (2608277) | about a year ago | (#45100249)

Not sure where that's coming from. The quotation marks would indicate it's from a third-party source, but Google could not find that quote... "Qualcomm chief technology officer Matt Grob said by next year, the company and its partners would design and manufacture neural processing units (NPUs) which function in a completely different manner to current processors." -- iTnews.

Re:Obligatory (1)

Anonymous Coward | about a year ago | (#45100817)

Not sure where that's coming from

http://www.imdb.com/title/tt0103064/

Re:Obligatory (1)

NoImNotNineVolt (832851) | about a year ago | (#45100865)

Wish I had mod points.

Re:Obligatory (1)

MightyYar (622222) | about a year ago | (#45100981)

With a name like Connor, I'm wondering if you didn't travel back in time to post this message in an attempt to change the future.

Re:Obligatory (1)

zwarte piet (1023413) | about a year ago | (#45107395)

But Cyberdyne uses Motorola 6502 processors for everything, no?

A little thin on tech detail (3, Interesting)

tttonyyy (726776) | about a year ago | (#45099399)

A quick Google search fails to reveal any detail about how it works, and TFA's explanatory diagram says very little (a drawing of a brain and some boxes -- oh, so that's how it works?).

We can only assume this stems from Qualcomm's partnership with Brain Corp: http://www.braincorporation.com/

Re:A little thin on tech detail (4, Informative)

Anonymous Coward | about a year ago | (#45099469)

They're doing FPGAs that come with a programmer capable of partially reprogramming the FPGA on the fly.

The brain is just marketing.

Re:A little thin on tech detail (3, Informative)

somersault (912633) | about a year ago | (#45099833)

I'd assume that they're building general purpose hardware for running large neural networks [wikipedia.org] into the chips. Usually you'd set a goal for the network, and then "train" it, reinforcing the pathways that lead to successful outcomes. The theory is based on how our own brains learn, and can be very effective at solving certain problems "naturally", rather than the programming having to come up with an effective algorithmic solution.
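
A toy version of that goal-then-train loop, in plain numpy -- the task (XOR), layer sizes, and learning rate below are all made up for illustration, not anything Qualcomm has announced:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # the "goal": XOR

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                      # feed-forward
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    d_out = (out - y) * out * (1 - out)           # error signal at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)           # pushed back through the net
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)

print(np.round(out.T, 2))   # converges toward [[0, 1, 1, 0]]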

Re:A little thin on tech detail (1)

somersault (912633) | about a year ago | (#45099913)

programmer*

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45100829)

Yeah, we got that. We're not so stupid that we wouldn't be able to parse a tiny typo that didn't even change the meaning of the sentence. In fact, our natural neural nets would have corrected the error based on the context and no one would have noticed if you hadn't pointed it out.

Re:A little thin on tech detail (1)

MightyYar (622222) | about a year ago | (#45101001)

Sadly, you replaced the inevitable grammar Nazi chain with an anti-grammar Nazi chain. I, too, am now participating in this activity devoid of all value.

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45101389)

Heil Hitler

Re:A little thin on tech detail (1)

NoImNotNineVolt (832851) | about a year ago | (#45101555)

Me too!

Re: A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45102861)

Are you heiling Hitler too, or are they supposed to heil you as well?

Re: A little thin on tech detail (1)

NoImNotNineVolt (832851) | about a year ago | (#45102977)

Way to ruin the thread of thread ruination.

Me too! [tvtropes.org]

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45102641)

To get this back on track (to the grammar Nazi chain), I would have noticed it if he hadn't pointed it out. :)

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45101547)

But it DID change the meaning of the sentence. I would have corrected myself if I were him, too. The programming having to come up with an effective algorithmic solution is quite different from the programmer having to come up with one. We are talking about neural nets, after all.

Re:A little thin on tech detail (1)

SuricouRaven (1897204) | about a year ago | (#45099927)

I imagine that for large neural network applications (machine vision and the like) it might make sense to train the network using a conventional computer, or even a supercomputer for the big ones, then copy the trained network into a purpose-designed chip (some form of FPGA) to save space and power.
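
The "copy the trained network into a chip" step might look something like this sketch (the weights and the 8-bit format are placeholders, not Qualcomm's actual design):

import numpy as np

# Stand-in for weights produced by the PC/supercomputer training run.
W = np.random.default_rng(1).normal(size=(8, 4))

# Squash them into 8-bit fixed point -- the compact copy a small chip could hold.
scale = 127.0 / np.abs(W).max()
W_q = np.round(W * scale).astype(np.int8)

# On-device inference then needs only integer multiply-accumulates:
x = np.ones(8)
y_chip = (x @ W_q.astype(np.int32)) / scale   # approximates x @ W cheaply
print(np.max(np.abs(y_chip - x @ W)))         # small quantization error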

Re:A little thin on tech detail (1)

Macchendra (2919537) | about a year ago | (#45100333)

Not a helpful answer. There is no information anywhere on the web about whether these are continuous or discrete neural networks. As for putting it on a chip: if it is a discrete neural network, then there is no advantage over a CUDA-enabled neural network running on an NVIDIA Tesla. It is just Malibu Stacy with a new hat.

Re:A little thin on tech detail (1)

Bengie (1121981) | about a year ago | (#45100873)

I hope your neural network code isn't branchy, because GPUs are horrible with branches.

Re:A little thin on tech detail (1)

Macchendra (2919537) | about a year ago | (#45105627)

I hope it isn't branchy either, because that would imply that I was completely ignorant of all modern neural algorithms. How'd you get modded up to three?

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45100899)

Well, perhaps they'll tell you all about that when they actually design it. This is nothing more than a statement that says they are going to do it. Qualcomm doesn't usually do obviously stupid things, so I'm sure we can give them the benefit of the doubt and assume they aren't going to pull a bunch of crap out of the 1980s AI thinking that caused the AI winter.

Re:A little thin on tech detail (1)

SuricouRaven (1897204) | about a year ago | (#45101823)

Power consumption, speed, and possibly cost.

A lot of neural network use is in the unglamorous side of machine vision. Things like classifying apples on a high-speed conveyor belt as 'round' or 'dented' and triggering an actuator to knock the dented ones into a bin. If you're doing that for fifty apples a second, that's a lot of processing power. Which is the more practical option: a couple of Tesla cards in a PC drawing a kilowatt of power, or a neural net accelerator chip that can do the job on a few percent of the power, in less space and at lower component cost?
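
To make the apples concrete, here is what the feed-forward-only side of that looks like in numpy (a made-up 16-feature, two-class net; nothing here is from TFA):

import time
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(16, 8))            # pretend weights of a trained
W2 = rng.normal(size=(8, 2))             # "round vs. dented" classifier
apples = rng.normal(size=(50, 16))       # one second's worth: 50 feature vectors

t0 = time.perf_counter()
scores = np.maximum(apples @ W1, 0) @ W2    # classification only, no training
dented = scores.argmax(axis=1) == 1         # column 1 = "dented" -> knock into bin
elapsed = time.perf_counter() - t0
print(f"{len(apples)} apples in {elapsed * 1000:.3f} ms; {dented.sum()} rejected")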

Re:A little thin on tech detail (0)

Anonymous Coward | about a year ago | (#45105547)

The area you're referring to is generally called computer vision in CS research, and you further disqualified your post by introducing a very naive example. Neural networks are used almost everywhere these days. Read up!

Re:A little thin on tech detail (1)

SuricouRaven (1897204) | about a year ago | (#45107343)

Computer vision is also the 'classic' application of neural networks. It's the one you'll find used as an example in most textbooks, and an area where neural networks work particularly well.

Re:A little thin on tech detail (1)

Macchendra (2919537) | about a year ago | (#45105605)

The math for a neural network accelerator chip is indistinguishable from the math for graphics. It's all the same multiplying of vectors and matrices. And most of the work is done up front in training the neural network, the results of which can be distributed to computers with less powerful processors. If Qualcomm is going to get a couple of teraflops on a single core, then more power to them.
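
The point about the math being the same is easy to see side by side (random placeholder matrices):

import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4))             # could be a model-view matrix...
V = rng.normal(size=(1000, 4))          # ...applied to a batch of vertices
transformed = V @ M                     # the graphics workload

W = M                                   # ...or a layer's weight matrix
activations = np.tanh(V @ W)            # a neural layer: the same multiply,
                                        # plus a cheap elementwise nonlinearity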

ZISC (1)

Anonymous Coward | about a year ago | (#45099459)

The idea's been tried before: http://en.wikipedia.org/wiki/Zero_instruction_set_computer . I wonder if they plan on making this mobile too.

And cue the (1)

maroberts (15852) | about a year ago | (#45099519)

Skynet comments

Re:And cue the (3, Funny)

K. S. Kyosuke (729550) | about a year ago | (#45099857)

If they are going to mimic a human brain, all that Skynet is going to be doing will be ordering beer and streaming porn and sports events.

John (3, Interesting)

Anonymous Coward | about a year ago | (#45099967)

'Qualcomm to build neuro inspired chips'

Probably not. I interviewed with them in San Diego a few years ago and was quite shocked by the interviewers' lack of technical skills and by the chatty style of technical interviewing (their lack of basic English skills might also have something to do with their inability to ask sensible questions).

They may just buy a reference design from ARM to build Snapdragon processors and be very successful with that, but I honestly do not see those people developing neuro-inspired chips. Not in a million years.

Re:John (1)

xtal (49134) | about a year ago | (#45103649)

How many PhDs do you think it takes to design a chip?

A long time ago, I wrote some code to generate VHDL from a basic neural network framework. The network was trained on a PC, then migrated to compatible VHDL and microcode. The VHDL was then synthesized and loaded onto a Xilinx FPGA automatically.

That was not complicated to do ten years ago, and I am far from an expert. The performance gains were epic, although training is complicated.
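
Not his actual generator, but the weights-to-HDL step could be as simple as serializing a trained matrix into a VHDL constant for a hand-written MAC pipeline to read (random placeholder weights):

import numpy as np

W = np.clip(np.round(np.random.default_rng(4).normal(size=(4, 3)) * 64),
            -128, 127).astype(int)

rows, cols = W.shape
vhdl = [f"type weight_array is array (0 to {rows - 1}, 0 to {cols - 1}) "
        "of integer range -128 to 127;",
        "constant WEIGHTS : weight_array := ("]
for i, row in enumerate(W):
    sep = "," if i < rows - 1 else ""
    vhdl.append("    (" + ", ".join(str(v) for v in row) + ")" + sep)
vhdl.append(");")
print("\n".join(vhdl))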

Methinks that Qualcomm (based on their reported revenues) is quite able to do this: "Revenues: $3.88 billion, up 46 percent year-over-year and 16 percent sequentially."

What's changed is that the tech is cheap and fast enough; it wasn't, ten years ago. The only surprising thing to me is that you can't get these chips now.

I'm Sorry (1)

TechyImmigrant (175943) | about a year ago | (#45100949)

"I'm sorry Dave. I'm afraid I can't let you make that phone call"

Cool. (2)

NoImNotNineVolt (832851) | about a year ago | (#45101611)

This seems rather interesting. I've dabbled in artificial neural networks out of curiosity, and it looks like this could be really useful.

Neural nets are fast. Training them can be very slow, though. Backpropagation for multilayer perceptron nets is more computationally costly than simple feed-forward usage, and training a net can take many, many iterations if the training data set is large. Neural nets implemented in hardware could make this process much faster.

Of course, TFA doesn't have much detail. Are these chips going to be capable of "learning" like this? Or will you have to pre-load them with the appropriate matrix of interconnection-weights and only run them in feed-forward mode? If they can't actually do learning, I'd imagine the utility of such a device will be very limited.
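
For a feel of the cost gap, compare one forward pass against a training loop on even a bare linear layer (sizes and learning rate invented for the demo):

import time
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(256, 100))             # a batch of training inputs
y = rng.normal(size=(256, 10))              # targets
W = rng.normal(size=(100, 10)) * 0.01

t0 = time.perf_counter()
out = X @ W                                 # feed-forward: one matrix multiply
t_fwd = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1000):                       # training = forward + gradient + update,
    out = X @ W                             # repeated over many iterations
    W -= 0.1 * (X.T @ (out - y)) / len(X)
t_train = time.perf_counter() - t0
print(f"one forward pass: {t_fwd:.6f}s; 1000 training steps: {t_train:.4f}s")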

Re:Cool. (1)

SuricouRaven (1897204) | about a year ago | (#45101837)

Train once on the supercomputer. Then just write the trained weights into the processors for mass-production. Great for industrial production line tasks, where you need to be able to detect defective items on a high-speed conveyor belt.

Re:Cool. (1)

NoImNotNineVolt (832851) | about a year ago | (#45102045)

Sure, that works. Unless, of course, you want the mass-produced devices to be capable of learning. I thought that was the whole point.

The feed-forward computations are already sufficiently quick, and the benefit of implementing that part in hardware is lost on me. Especially as a discrete component.

Re:Cool. (1)

SuricouRaven (1897204) | about a year ago | (#45102079)

Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.

The benefit of hardware is in speed and power usage, which in turn enables the use of much larger networks, allowing for improved classification accuracy and more complex training. If you're doing mass production, then a discrete NN-accelerator chip paired with a cheap processor might also be cheaper than the high-end processor needed to run the net in software.

Re:Cool. (1)

NoImNotNineVolt (832851) | about a year ago | (#45103003)

Why would you want the end product to be capable of learning? It'd just be a support nightmare when they learn incorrectly.

Artificial neural networks have been found to be useful for voice recognition, for example. While it is possible to train one single ANN to recognize words from a given language, better recognition accuracy can be realized by tailoring the system to individual speakers. That, however, requires the ANN to continue learning after it has left the supercomputer and been shipped to end users, which would not be possible if the component didn't support backpropagation.

That being said, I'm sure there are uses for neural nets that can no longer learn. However, that limits the demand for this product to a small subset of the neural network market.
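
The speaker-adaptation idea boils down to continuing gradient updates on-device, e.g. (synthetic data, made-up sizes, and a bare softmax layer standing in for a real recognizer):

import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(20, 5)) * 0.1          # stand-in for factory-trained weights

# A handful of the end user's own labelled utterances (synthetic here).
samples = [(rng.normal(size=20), rng.integers(5)) for _ in range(100)]

for x, label in samples:                    # continued, on-device learning
    z = x @ W
    p = np.exp(z - z.max()); p /= p.sum()   # softmax over 5 pretend word classes
    p[label] -= 1.0                         # gradient of the cross-entropy loss
    W -= 0.01 * np.outer(x, p)              # the small update that feed-forward-only
                                            # hardware cannot perform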

Re:Cool. (1)

zwarte piet (1023413) | about a year ago | (#45107423)

I suppose the factory training could be a starting point for the device to learn further.

Re:Cool. (0)

Anonymous Coward | about a year ago | (#45103453)

You can run any neural net on your machine today; it just runs very slowly. Given that they are readily available everywhere, and that they are essentially just a fuzzy function-mapping tool from domain X to Y, there is no news here. OK, we make a specialized ASIC for doing neural maps. Yippee. It doesn't get us any closer to AI; it's just marketing.

Re:Cool. (1)

NoImNotNineVolt (832851) | about a year ago | (#45103617)

Worst. Response. Ever.

Oh joy, artificial stupidity! (0)

Anonymous Coward | about a year ago | (#45103835)

A notable point of neural nets is that they also make mistakes.
