
A.I. Advances Through Deep Learning

Soulskill posted about 2 years ago | from the skip-the-lesson-on-killing-all-humans dept.

AI 162

An anonymous reader sends this excerpt from the NY Times: "Advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking. ... But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just 'neural nets' for their resemblance to the neural connections in the brain. 'There has been a number of stunning new results with deep-learning methods,' said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. 'The kind of jump we are seeing in the accuracy of these systems is very rare indeed.' Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. ... But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs. From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent."


162 comments


It's the dawn of the roboapocalypse (0)

Anonymous Coward | about 2 years ago | (#42085113)

Take cover while you can!

Re:It's the dawn of the roboapocalypse (0)

Anonymous Coward | about 2 years ago | (#42085565)

Jeff Hawkins will be the first against the wall.

Go die, you sellout to all that is evil.

Sources of improvements? (4, Insightful)

drooling-dog (189103) | about 2 years ago | (#42085161)

I wonder how much of these improvements in accuracy are due to fundamental advances, vs. the capacity of available hardware to implement larger models and (especially?) the availability of vastly larger and better training sets...

Re:Sources of improvements? (2, Informative)

PlusFiveTroll (754249) | about 2 years ago | (#42085175)

from TFA

" Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.

These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision. "

Sounds like both.

Re:Sources of improvements? (4, Insightful)

iggymanz (596061) | about 2 years ago | (#42085301)

No, that first sentence pretty much sums up digital neural nets from over two decades ago. So more likely it's the more than two orders of magnitude improvement in per-chip processing power since then, with addressable memory over three orders of magnitude bigger....

Re:Sources of improvements? (2, Interesting)

Anonymous Coward | about 2 years ago | (#42086173)

The way they are trained is very different, and it's this change that improves the performance. It's more than just making them faster; a fast idiot is still an idiot.

Re:Sources of improvements? (1)

Anonymous Coward | about 2 years ago | (#42086195)

Robo-Bush for Prezident!

Re:Sources of improvements? (0)

Anonymous Coward | about 2 years ago | (#42086275)

FYI, the 80387 (1987) did about 1 MFLOPS; the GTX 690 does about 5 TFLOPS, which is a factor of 5,000,000x, or nearly 7 orders of magnitude.

Re:Sources of improvements? (3, Informative)

Anonymous Coward | about 2 years ago | (#42086451)

A garden snail has about 20,000 neurons, a cat has 1 billion neurons, a human has 86 billion neurons.

http://www.guardian.co.uk/science/blog/2012/feb/28/how-many-neurons-human-brain [guardian.co.uk]

Yes, but... no. (5, Interesting)

Anonymous Coward | about 2 years ago | (#42086973)

This is a very misleading metric. First, some not-insignificant number of the neurons in the brain are involved in non-cognitive computations. Muscle control, hormone regulation, kinesthesia, vision (not thinking about what is seen, but simply recognizing it), heart rates and other system regulation and so on.

Examples also exist [fyngyrz.com] of low-neuron (and synapse) count individuals who retain cognitive (and all other major) function; these examples cannot be explained away by "counting neurons."

We don't yet know which mechanisms matter, but given that a high neuron count has been ruled out as the sole route to intelligence, we do know that we need to look to other mechanisms for human cognition. Structure, algorithms, or other features known or unknown may be responsible for intelligence; it may even be that something entirely disjoint is responsible for the rise of intelligence; but we know it isn't simply a high neuron count.

--fyngyrz (anon due to mod points)

Re:Sources of improvements? (1)

tirerim (1108567) | about 2 years ago | (#42085313)

from TFA

" Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.

These techniques, aided by the growing speed and power of modern computers, have led to rapid improvements in speech recognition, drug discovery and computer vision. "

Sounds like both.

Well, that doesn't say anything; that just described every neural network for the past couple of decades, except for the "rapid improvement" part. I haven't read TFA, so I don't know if there's more detail, but just describing the basics of how neural networks operate isn't an explanation for why they're suddenly improving.

Re:Sources of improvements? (5, Informative)

Prof.Phreak (584152) | about 2 years ago | (#42085385)

The ``new'' (e.g. last decade or so) advances are in training the hidden layers of neural networks. It's kinda like peeling an onion, each layer getting a progressively coarser representation of the problem. E.g. if you have 1,000,000 inputs and after a few layers only have 100 hidden nodes, those 100 nodes are in essence representing all the ``important'' (by some benchmark you choose) information of those 1,000,000 inputs.

Re:Sources of improvements? (3, Insightful)

PlusFiveTroll (754249) | about 2 years ago | (#42085415)

The article didn't say, but if I had to make a guess, this is where I would start.

http://www.neurdon.com/2010/10/27/biologically-realistic-neural-models-on-gpu/ [neurdon.com]
"The maximal speedup of GPU implementation over dual CPU implementation was 41-fold for the network size of 15000 neurons."

This was done on cards that are 7 years old now. The massive increase in GPU power over the past few years, along with more features and better programming languages for them, means the performance increase could now be many hundreds of times larger. An entire cluster of servers gets crunched down into one card, multiple cards fit in one server, and if you build a cluster of those you can quickly see that the amount of computing power available to neural networks is much, much larger now. I'm not even sure how to compare the GeForce 6800 to a modern GTX 680 because of their huge differences, but the 6800 did about 54 GFLOPS and the 680 does about 3090 -- a 57x increase. How far back do we have to go to find CPUs that are 57 times slower? If everything scales the same as in the paper's calculations, it would mean over a 2000x performance increase on a single computer with one GPU. In 7 years.
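As a rough sanity check on that 2000x figure (my own arithmetic, not from the paper): multiply the 41-fold GPU-vs-CPU speedup reported in the linked abstract by the roughly 57x jump between GPU generations and you land in the same ballpark.

    # Back-of-the-envelope check, assuming the 41x result carries over unchanged to newer cards
    paper_speedup = 41                    # GPU vs dual CPU, 15,000-neuron network (linked abstract)
    generation_gain = 3090.4 / 54         # GTX 680 vs GeForce 6800, peak GFLOPS
    print(paper_speedup * generation_gain)   # ~2346x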

Re:Sources of improvements? (1)

xtal (49134) | about 2 years ago | (#42085217)

Computers have gotten very cheap. Pretty much any prof that wants to pursue something now can build enough hardware to do so with a relatively small amount of money. Neural networks ran into a big wall twenty years ago because the tools weren't there yet.

Once people start having some successes, more funds will be made available, more advances will be made, justifying even more funding... and then we'll turn control of the military over to SkyNet. :)

Re:Sources of improvements? (2, Insightful)

Anonymous Coward | about 2 years ago | (#42085239)

Don't forget that it's entirely possible to build a specially designed processor for a particular task, such as the Digital Orrery. A device created to do nothing but neural net simulations would be more efficient than a general purpose computer. It would be linked to one to provide a convenient interface, but would do most of the heavy lifting itself.

Re:Sources of improvements? (1)

PlusFiveTroll (754249) | about 2 years ago | (#42085459)

Why build a special processor when ATI and Nvidia already do? [google.com] Probably at a much lower cost per calculation than a custom machine.

Re:Sources of improvements? (3, Informative)

ShanghaiBill (739463) | about 2 years ago | (#42085513)

Why build a special processor when ATI and Nvidia already do? Probably at a much lower cost per calculation than a custom machine.

A GPU can run a neural net much more efficiently than a general purpose CPU, but specialized hardware designed just for NNs could be another order of magnitude more efficient. Of course GPUs are more cost effective because they are mass market items, but if NN applications take off it is likely that everyone will want one running on their cellphone, and then customized NN hardware will be mass market too.

Re:Sources of improvements? (4, Informative)

Tagged_84 (1144281) | about 2 years ago | (#42085623)

IBM recently announced success in simulating 2 billion of their custom designed synaptic cores, 1 trillion synapses apparently. Here's the pdf report [modha.org]

Re:Sources of improvements? (1)

mikael (484) | about 2 years ago | (#42086519)

Everything ran into a big wall 20 years ago. There were 680x0, DEC Alpha and SPARC systems, but they were either $10,000 workstations (with no disk drive, server or monitor for that price) or embedded systems requiring a rack chassis development kit (manuals cost extra).

Image processing on a PC CPU (80386-class) had to be implemented as a script of image processing command line functions, as it wasn't even possible to reliably allocate more than one 64K block. You would load the image in line by line, apply a DFFT, write out the image line by line, flip the image across the major diagonal, then repeat the process. Every image processing function had to be implemented this way. General purpose servers were much faster.

The alternative was to use primeval graphics processing boards which had some exotic combination of DSPs and CPUs (Intel i860 CPU, TI TMS32020 DSP, TMS340x0 chip). Some graphics boards at the time actually had a network stack/socket built in so that images could be downloaded straight into video memory, bypassing the CPU.

Now any department can buy a cloud server with terabytes of storage, a couple of PCs with GTX 680s, HD webcams, and download free image and video processing software.

Automatic creation of features (4, Insightful)

michaelmalak (91262) | about 2 years ago | (#42085237)

I wonder how much of these improvements in accuracy are due to fundamental advances

I was wondering the same thing, and just now found this interview [kaggle.com] on Google. Perhaps someone can fill in the details.

But basically, machine learning is at its heart hill-climbing on a multi-dimensional landscape, with various tricks thrown in to avoid local maxima. Usually, humans determine the dimensions to search on -- these are called the "features". Well, philosophically, everything is ultimately created by humans because humans built the computers, but the holy grail is to minimize human involvement -- "unsupervised learning". According to the interview, this one particular team (the one mentioned at the end of the Slashdot summary) rode the bicycle with no hands: to demonstrate how strong their neural network was at determining its own features, they did not guide it, even though it meant their also-excellent conventional machine learning at the end of the process would be handicapped.

The last time I looked at neural networks was circa 1990, so perhaps someone writing to an audience more technically literate than the New York Times general audience could fill in the details for us on how a neural network can create features.
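Not an answer to the feature-creation question, but for anyone rusty on the hill-climbing framing above, here's a minimal sketch (my own toy, assuming numpy; not from the interview) of greedy hill climbing with random restarts, the crudest of the tricks for dodging local maxima:

    import numpy as np

    def f(x):
        # Toy objective with several local maxima
        return np.sin(3 * x) - 0.1 * (x - 2.0) ** 2

    def hill_climb(x0, step=0.05, iters=500, seed=0):
        x, rng = x0, np.random.default_rng(seed)
        for _ in range(iters):
            cand = x + rng.normal(scale=step)
            if f(cand) > f(x):        # greedy: only ever accept improvements
                x = cand
        return x

    # Random restarts: climb from several starting points, keep the best summit
    starts = np.linspace(-3.0, 6.0, 10)
    best = max((hill_climb(s) for s in starts), key=f)
    print(best, f(best))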

Re:Automatic creation of features (3, Insightful)

Daniel Dvorkin (106857) | about 2 years ago | (#42085395)

the holy grail is to minimize human invovlement -- "unsupervised learning"

Unsupervised learning is valuable, but calling it a "holy grail" is going a little too far. Supervised, unsupervised, and semi-supervised learning are all active areas of research.

Re:Automatic creation of features (0)

Anonymous Coward | about 2 years ago | (#42085559)

One does not preclude the other. The current "cool thing" is to learn features first using an energy-based unsupervised model, and then use those features in a supervised, discriminative classifier.

Re:Sources of improvements? (2, Informative)

Anonymous Coward | about 2 years ago | (#42085365)

Glad they were able to make it work so quickly, but drug discovery has been done like this for over a decade. I worked at an "Infomesa" startup that was doing this in Santa Fe in 2000.

Re:Sources of improvements? (5, Informative)

Black Parrot (19622) | about 2 years ago | (#42085439)

I wonder how much of these improvements in accuracy are due to fundamental advances, vs. the capacity of available hardware to implement larger models and (especially?) the availability of vastly larger and better training sets...

I'm sure all of that helped, but the key ingredient is the training mechanisms. Traditionally, networks with multiple layers did not train very well, because the standard training mechanism "backpropagates" an error estimate, and it gets very diffuse as it goes backwards. So most of the training happened in the last layer or two.
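A toy illustration of that diffusion (my own sketch, assuming numpy; not from the article): with sigmoid units, each layer multiplies the backpropagated error by the local derivative, which is at most 0.25, so the signal reaching the early layers shrinks geometrically.

    import numpy as np

    rng = np.random.default_rng(0)
    layers, width = 10, 32
    grad = np.ones(width)                      # error signal at the output
    for i in range(layers):
        W = rng.normal(scale=0.1, size=(width, width))
        a = rng.uniform(size=width)            # stand-in sigmoid activations in (0, 1)
        grad = (W.T @ grad) * a * (1 - a)      # backprop through one sigmoid layer
        print(f"after {i + 1} layers back: |grad| = {np.linalg.norm(grad):.2e}")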

This changed in 2006 with Hinton's invention of the Restricted Boltzmann Machine, and someone else's insight that you can train one layer at a time using auto-associative methods.

"Deep Learning" / "Deep Architectures" has been around since then, so this article doesn't seem like much news. (However, it may be that someone is just now getting the kind of results that they've been expecting for years. Haven't read up on it very much.)

These methods may be giving ANNs a third lease on life. Minsky & Papert almost killed them off with their book on perceptrons in 1969[*], then Support Vector Machines nearly killed them again in the 1990s.

They keep coming back from the grave, presumably because of their phenomenal computational power and function-approximation capabilities.[**]

[*] FWIW, M&P's book shouldn't have done anything, since it was already known that networks of perceptrons don't have the limitations of a single perceptron.

[**] Siegelmann and Sontag put out a couple of papers, in the 1990s I think, showing that (a) you can construct a Turing Machine with an ANN that uses rational numbers for the weights, and (b) using real numbers (real, not floating-point) would give a trans-Turing capability.

Re:Sources of improvements? (2)

phantomfive (622387) | about 2 years ago | (#42085539)

using real numbers (real, not floating-point) would give a trans-Turing capability.

What on earth is trans-Turing capability?

Re:Sources of improvements? (2)

Black Parrot (19622) | about 2 years ago | (#42085557)

using real numbers (real, not floating-point) would give a trans-Turing capability.

What on earth is trans-Turing capability?

Can compute things that a TM can't.

I think the paper was controversial when it first came out, but I'm not aware that anyone has ever refuted their proof.

Re:Sources of improvements? (2)

HalfFlat (121672) | about 2 years ago | (#42085669)

[...] using real numbers (real, not floating-point) would give a trans-Turing capability.

Given that almost every real number encodes an uncountable number of bits of information, I guess this isn't especially surprising in retrospect. The result though should make us suspicious of the assumption that the physical constants and properties in our physical theories can indeed take any real number value.

Re:Sources of improvements? (5, Informative)

maxwell demon (590494) | about 2 years ago | (#42086223)

Given that almost every real number encodes an uncountable number of bits of information, I guess this isn't especially surprising in retrospect. The result though should make us suspicious of the assumption that the physical constants and properties in our physical theories can indeed take any real number value.

The number of bits needed to represent an arbitrary real number exactly is infinite, but not uncountable.

Re:Sources of improvements? (2)

HalfFlat (121672) | about 2 years ago | (#42086311)

Indeed you are right.

Re:Sources of improvements? (1)

TheTurtlesMoves (1442727) | about 2 years ago | (#42086255)

In reality, or at least in the physical world, it gets quantum at some point. So even with zero noise, any real parameter needs only finitely many bits for a "perfect" representation. Then there is the noise issue. Real systems don't match perfect math.

Re:Sources of improvements? (1)

Black Parrot (19622) | about 2 years ago | (#42086693)

[...] using real numbers (real, not floating-point) would give a trans-Turing capability.

Given that almost every real number encodes an uncountable number of bits of information, I guess this isn't especially surprising in retrospect. The result though should make us suspicious of the assumption that the physical constants and properties in our physical theories can indeed take any real number value.

My intuition is that the difference between the TM's finite set of discrete symbols and the infinite/continuous nature of real numbers is exactly the reason.

I'm not aware of any theory of continuous-state computing along the lines of the Chomsky hierarchy, but maybe there's one out there.

Re:Sources of improvements? Mod parent up plz (1)

kanweg (771128) | about 2 years ago | (#42085617)

Thanks.
Back in the early nineties I bought a neural network program to play with. I couldn't get it to learn anything (except for the XOR etc. examples), even when the task was easy (the boiling points of hydrocarbons as a function of the number of carbon atoms: predict the boiling point of the next one). So when I read about advances in computing power, I knew that wasn't the reason. Your remark on backpropagation could be the explanation, because that was what this network did.

Bert
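For what it's worth, that task is comfortably within reach of even a tiny net today. A minimal sketch (my own toy, assuming numpy; approximate n-alkane boiling points, not your original data) of a one-hidden-layer network trained with plain backpropagation:

    import numpy as np

    # Approximate boiling points (deg C) of the first 7 n-alkanes; try to predict the 8th (octane, ~126)
    carbons = np.arange(1, 8, dtype=float)
    bp = np.array([-161.5, -88.6, -42.1, -0.5, 36.1, 68.7, 98.4])

    # Normalize inputs and targets so vanilla gradient descent behaves
    x = (carbons - carbons.mean()) / carbons.std()
    y = (bp - bp.mean()) / bp.std()
    X, Y = x.reshape(1, -1), y.reshape(1, -1)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((8, 1))
    W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros((1, 1))
    lr, n = 0.05, X.shape[1]

    for _ in range(20000):
        H = np.tanh(W1 @ X + b1)                 # hidden layer
        out = W2 @ H + b2                        # linear output
        err = out - Y
        dW2 = err @ H.T / n
        db2 = err.mean(axis=1, keepdims=True)
        dH = (W2.T @ err) * (1 - H ** 2)         # backprop through tanh
        dW1 = dH @ X.T / n
        db1 = dH.mean(axis=1, keepdims=True)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    x8 = (8 - carbons.mean()) / carbons.std()
    pred = (W2 @ np.tanh(W1 * x8 + b1) + b2) * bp.std() + bp.mean()
    print(pred.item())                           # rough extrapolation beyond the training range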

Re:Sources of improvements? (1)

snarkh (118018) | about 2 years ago | (#42086007)

> (b) using real numbers (real, not floating-point) would give a trans-Turing capability.

Not sure what it means -- a Turing machine is not even capable of storing a single (arbitrary) real number.

Re:Sources of improvements? (2)

aaaaaaargh! (1150173) | about 2 years ago | (#42086239)

He meant that an ANN with real numbers is a hypercomputer, which is true.

The problem is that like most conceivable hypercomputers neural networks with real numbers would violate natural laws, e.g. the laws of thermodynamics.

Re:Sources of improvements? (1)

TheTurtlesMoves (1442727) | about 2 years ago | (#42086257)

How so? The math of thermodynamics uses real numbers and does not need any "tricks" to make it work.

Re:Sources of improvements? (1)

Black Parrot (19622) | about 2 years ago | (#42086729)

How so? The math of thermodynamics uses real numbers and does not need any "tricks" to make it work.

I think there is a theoretical minimal entropy production for any computation, so there's a limit to the amount of computation you could do if you used the entire observable universe.

Of course, you can't have the infinite tape required by a TM either.

Re:Sources of improvements? (1)

snarkh (118018) | about 2 years ago | (#42086395)

Well, real numbers are inherently very problematic from the computational point of view.

Re:Sources of improvements? (0)

Anonymous Coward | about 2 years ago | (#42086233)

These methods may be giving ANN a third lease on life. Minsky & Papiert almost killed them off with their book on perceptrons in 1969[*], then Support Vector Machines nearly killed them again in the 1990s.

Aren't support vector machines provably more powerful than ANNs?

Re:Sources of improvements? (1)

snarkh (118018) | about 2 years ago | (#42086407)

>Aren't support vector machines provably more powerful than ANNs?

In a sense, yes. Both (non-linear) SVMs and neural nets are universal approximators. However, SVMs can be shown to converge to the ground truth given sufficiently many observations. No such result exists for neural networks.

Re:Sources of improvements? (2)

phantomfive (622387) | about 2 years ago | (#42085449)

I think this quote says it all:

Referring to the rapid deep-learning advances made possible by greater computing power, and especially the rise of graphics processors, he added: “The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There’s no looking back now.”

I'm sure they've come up with a few incremental advances, but it looks primarily like they've just taken advantage of hardware improvements. You can see from the numbers in the article the results are about what you'd expect from improved hardware (as opposed to actually solving the problem):

[some guy] programmed a cluster of 16,000 computers to train itself to automatically recognize images in a library of 14 million pictures of 20,000 different objects. Although the accuracy rate was low — 15.8 percent — the system did 70 percent better than the most advanced previous one.
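For scale (my own arithmetic, assuming "70 percent better" means a relative improvement): 15.8% / 1.7 ≈ 9.3%, so the previous best system was getting fewer than one image in ten right on that 20,000-category set.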

Re:Sources of improvements? (3, Insightful)

timeOday (582209) | about 2 years ago | (#42085575)

You can see from the numbers in the article the results are about what you'd expect from improved hardware (as opposed to actually solving the problem)

"As opposed to actually solving the problem"? You brain has about 86 billion neurons and around 100 trillion synapses. It accounts for 2% of body weight and 20% of energy consumed. Do you think these numbers would be large if they didn't need do be?

I think the emphasis in computer science on focusing so exclusively on polynomial-time algorithms has really stunted it. Maybe most of the essential tasks for staying alive and reproducing don't happen to have efficient solutions, but the constants of proportionality are small enough to brute-force with several trillion neurons.

Re:Sources of improvements? (2, Insightful)

smallfries (601545) | about 2 years ago | (#42085901)

The problem comes when you try larger inputs. Regardless of constant factors, if you are playing with O(2^n) algorithms then n will not increase above about 30. If you start looking at really weird stuff (optimal circuit design and layout) then the core algorithms are O(2^(2^n)), and then if you are really lucky n will reach 5. Back in the 80s it only went to 4, but that's Moore's law for you.
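To put numbers on that (my own arithmetic, not from the post): each +1 in n squares the doubly-exponential cost, which is why decades of Moore's law only moved n from 4 to 5.

    # How quickly O(2^(2^n)) blows up
    for n in range(3, 7):
        print(n, 2 ** 2 ** n)
    # 3 -> 256, 4 -> 65,536, 5 -> ~4.3e9, 6 -> ~1.8e19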

Re:Sources of improvements? (2)

timeOday (582209) | about 2 years ago | (#42086945)

When you talk about O() you're talking about the worst case for finding an exact solution. Brains don't find exact solutions to anything.

Re:Sources of improvements? (1)

ceoyoyo (59147) | about 2 years ago | (#42086935)

No, "deep learning" refers mostly to new training algorithms. More computer power helps of course, but the problem previously was that your training became less efficient the bigger your system got. If that doesn't happen, you can scale things up indefinitely.

Just more of the same (2)

qbitslayer (2567421) | about 2 years ago | (#42085511)

They haven't done anything that wasn't already being done by others. They're just doing more of it. Essentially, the approach consists of using Bayesian statistics and a hierarchy of patterns. Prof. Hinton pretty much pioneered the use of Bayesian statistics in artificial intelligence. With a rare notable exception (e.g. Judea Pearl [cambridge.org] ), the entire AI community has jumped on the Bayesian bandwagon, not unlike the way they jumped on the symbolic bandwagon in the latter half of the 20th century, only to be proven wrong fifty years later.

The Bayesian model essentially assumes that the world is inherently probabilistic and that the job of an intelligent system is to discover the probabilities. A competing model (see links below), by contrast, assumes that the world is perfectly consistent and that the job of an intelligent system is to capture this perfection.

See The Myth of the Bayesian Brain [blogspot.com] and The Second Great AI Red Herring Chase [blogspot.com] if you're interested in an alternative approach to AI.

Re:Just more of the same (1)

martin-boundary (547041) | about 2 years ago | (#42085835)

Sorry, but those blogposts aren't very convincing. Do you have *actual* arguments comparing Bayesian to these hypothetical alternatives, or should we just take the claims on trust?

Re:Just more of the same (0)

Daniel Dvorkin (106857) | about 2 years ago | (#42086043)

Do you have *actual* arguments comparing Bayesian to these hypothetical alternatives, or should we just take the claims on trust?

It's the "Rebel Science" guy. He's a nutcase. So no, he's not going to have any actual arguments, just a bunch of pseudoscientific babble.

hardware (1)

globaljustin (574257) | about 2 years ago | (#42085533)

It's the latter...one could assiduously identify common research buzzwords

From a neuroscience perspective, it's about transmission of signals continuously in a highly complex network...a **hardware limit**

The idea that there will be a 'fundamental advance' that allows for 'artificial intelligence' is really just hype.

All we can ever make is better things to follow our instructions.

Re:hardware (1)

Black Parrot (19622) | about 2 years ago | (#42085583)

All we can ever make is better things to follow our instructions.

What is the basis for that claim?

In 50 years when we can simulate a brain to any arbitrary level of detail, or build a wet-brain one neuron at a time, why wouldn't it be able to do what naturally occurring intelligence can?

Is there some Special Ingredient that cannot be simulated, even in principle? Or that cannot be understood well enough to try?

Re:hardware (0)

Anonymous Coward | about 2 years ago | (#42085743)

http://www.gizmag.com/ibm-supercomputer-simulates-a-human-sized-brain/25093/

It's both (5, Interesting)

Anonymous Coward | about 2 years ago | (#42085551)

In the past few years, a few things happened almost simultaneously:

1. New algorithms were invented for training of what previously was considered nearly impossible to train (biologically inspired recurrent neural networks, large, multilayer networks with tons of parameters, sigmoid belief networks, very large stacked restricted Boltzmann machines, etc).
2. Unlike before, there's now a resurgence of _probabilistic_ neural nets and unsupervised, energy-based models. This means you can have a very large multilayer net (not unlike e.g. visual cortex) figure out the features it needs to use _all on its own_, and then apply discriminative learning on top of those features. This is how Google recognized cats in Youtube videos.
3. Scientists have learned new ways to apply GPUs and large clusters of conventional computers. By "large" here I mean tens of thousands of cores, and week-long training cycles (during which some of the machines will die, without killing the training procedure).
4. These new methods do not require as much data as the old, and have far greater expressive power. Unsurprisingly, they are also, as a rule, far more complex and computationally intensive, especially during training.

As a result of this, HUGE gains were made in such "difficult" areas as object recognition in images, speech recognition, handwritten text (not just digits!) recognition, and in many more. And so far, there's no slowdown in sight. Some of these advances were made in the last month or two, BTW, so we're speaking about very recent events.

That said, a lot of challenges remain. Even today's large nets don't have the expressive power of even a small fraction of the brain, and moreover, the training at "brain" scale would be prohibitively expensive, and it's not even clear if it would work in the end. That said, neural nets (and DBNs) are again an area of very active research right now, with some brilliant minds trying to find answers to the fundamental questions.

If this momentum is maintained, and challenges are overcome, we could see machines getting A LOT smarter than they are today, surpassing human accuracy on a lot more of the tasks. They already do handwritten digit recognition and facial recognition better than humans.
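The unsupervised-features-then-discriminative-classifier recipe (point 2 above), in miniature. This is a sketch under stated assumptions: scikit-learn is available, and plain PCA stands in for the energy-based/RBM-style feature learners actually used. Learn a representation from the data without labels, then fit a classifier on top of it.

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Unsupervised" stage: learn features without looking at the labels
    features = PCA(n_components=32).fit(X_train)

    # Discriminative stage: train a classifier on top of the learned features
    clf = LogisticRegression(max_iter=1000).fit(features.transform(X_train), y_train)
    print(clf.score(features.transform(X_test), y_test))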

Re:Sources of improvements? (3, Interesting)

PhamNguyen (2695929) | about 2 years ago | (#42085665)

I work in this area. It is mainly the latter, that is bigger data sets and faster hardware. At first, people thought (based on fairly reasonable technical arguments) that deep networks could not be trained with backpropagation (which is the way gradient descent is implemented on neural networks). Now it turns out that with enough data, they can.

On the other hand there have been some theoretical advances by Hinton and others where networks can be trained on unsupervised data (e.g. the Google cats thing).

Re:Sources of improvements? (1)

Anonymous Coward | about 2 years ago | (#42086029)

It's everything together: more data, better computer performance and better algorithms.

One idea you can use is to take your training set, add random noise to it and then train again, add different random noise and train again, and so on. You can also exploit symmetries of the task to generate extra data - for image recognition of things that are still recognizable when mirrored and/or rotated, you can mirror and rotate the input to generate extra data. You can also zoom in and out on the input pictures to teach the network to ignore scale.

A more recent advance is to randomly disable nodes in the network while training. The effect of that is to defeat over-training and also it improves performance because it makes the rest of the network more resilient to errors - there will be several redundant ways that the network recognizes something, which means it is now more able to recognize that thing when you use it without disabling any nodes.
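That trick is what the recent literature calls "dropout" (Hinton et al., 2012). A minimal sketch of the idea for a single layer, assuming numpy and made-up shapes:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer_forward(x, W, p_keep=0.5, training=True):
        """One hidden layer with dropout applied to its outputs."""
        h = np.maximum(0, W @ x)                  # any activation works; ReLU here
        if training:
            mask = rng.random(h.shape) < p_keep   # randomly disable nodes during training
            return h * mask
        return h * p_keep                         # at test time keep everything, but scale down

    W = rng.normal(size=(16, 8))
    x = rng.normal(size=8)
    print(layer_forward(x, W).round(2))           # roughly half the units come out zeroed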

Another thing you can do is unsupervised learning with neural nets. Normally you have to know what the correct output is for each input in order to train a neural net. So if you want the neural net to learn to recognize images, you have to give it a lot of pictures annotated with what is in those pictures. However, it is much easier to get terabytes of images than it is to annotate those images. Same thing with speech recognition.

So what you do is train the network on the unannotated data. It will have no idea what anything is, but what you train it to do is get an idea of what images or speech look like in general. After that you can train on a much smaller annotated training set, and now the network will perform better because it already knows what to look for in pictures in general.

More precisely, imagine running the network in reverse, so outputs become inputs and vice versa -- now apply random input to get the network to generate an image. In this way you create a probability distribution over all images -- how likely are they to turn up in this process? For unsupervised learning, you train the network to give a larger probability to pictures in the training set than to random pictures or to random noise.

The outcome is that you can make good use of a huge set of unannotated data as long as it is accompanied by a much smaller set of annotated data. For example, it is now helpful to trawl the net for random pictures without knowing what they are pictures of, while before that was not helpful at all.

Then there are advances in algorithms for setting up a plausible initial set of weights for the network and ideas for how to wire the network up. There are also algorithms that allow training networks on a GPU which is much faster.

This is just what I'm aware of without having read any papers or books on the subject and without using any of this stuff for anything, so I'm sure there is a lot more than just this.

Re:Sources of improvements? (1)

K. S. Kyosuke (729550) | about 2 years ago | (#42086143)

I wonder how much of these improvements in accuracy are due to fundamental advances, vs. the capacity of available hardware to implement larger models and (especially?) the availability of vastly larger and better training sets...

There are limits to what you can achieve with that. I was once surprised to discover how often I actually mishear words (when watching, e.g., episodes of US TV series) and no amount of repeating helps me. After thinking about it for a while, it became apparent to me that I actually interpolate based on the context. This, however, requires understanding what the particular speech is about. The same goes for reading badly printed or (more often) badly scanned text - quite often I reconstruct the word based on actual understanding of the discourse around the gap. I don't think the provisions you've posited here can contribute to that in any way.

Re:Sources of improvements? (1)

mikael (484) | about 2 years ago | (#42086561)

I used to do some transcription work to make a bit of spare cash. At the beginning of the tape, I really wouldn't understand the accent, not recognising some words, but after going through the tape once and replaying it, I would immediately recognise the words. It's almost as if there were a set of mask images for every word, and these didn't quite fit at first, but after 20-30 minutes they were scaled, rotated, and transformed in some way until they made a better match. Each word would also have a limited set of other words that would come after it, so that also narrowed down the set of possibilities.

Re:Sources of improvements? (1)

illaqueate (416118) | about 2 years ago | (#42086569)

Yes, we often interpolate from knowing what is being discussed. We can have algorithms to stand in to some extent but there is a limitation when the inference we make is from a representation of things out there in the world and knowledge about how those things work. We can sometimes get a sense of a conversation from very lossy understanding of what is being said.

Deep learning? (1)

olegalexandrov (534138) | about 2 years ago | (#42085165)

A lot of vague marketing-speak in this article. "Deep learning"? The article basically talks about neural networks, just one of the techniques in machine learning. Neural networks were hyped for a long time, perhaps because of the catchy name.

Deep Belief Networks (5, Informative)

Guppy (12314) | about 2 years ago | (#42085223)

A lot of vague marketing-speak in this article. "Deep learning"? The article basically talks about neural networks, just one of the techniques in machine learning.

It's hard to tell from the article, but they probably are trying to refer to Deep Belief Networks [scholarpedia.org] , which are a more recent and advanced type of neural network that incorporates many layers:

Deep belief nets are probabilistic generative models that are composed of multiple layers of stochastic, latent variables. The latent variables typically have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer represent a data vector.

Re:Deep learning? (1)

Anonymous Coward | about 2 years ago | (#42085257)

While you're right that "deep learning" is mostly excellent marketing by Hinton, there is some substance behind that marketing. For a long time AI folks had more or less abandoned neural architecture inspired algorithms because they did not perform well and there were some no-go results proven about classes of functions which were not learnable with the architectures of the time. Over the last 5-6 years there has been substantial progress made on finding tractable ways of training deeper architectures (more difficult because of the large parameter space). These algorithms are now starting to be competitive with other state of the art learning algorithms, with reason to believe there may be further progress to be made.

Robot Apocalypse it's not, but it is definitely an exciting area of machine learning right now.

Re:Deep learning? (1)

Mr. Mikey (17567) | about 2 years ago | (#42085269)

A lot of vague marketing-speak in this article. "Deep learning"? The article basically talks about neural networks, just one of the techniques in machine learning. Neural networks were hyped for a long time, perhaps because of the catchy name.

You could have answered your own questions with a quick search, rather than assume that that which you are ignorant about is mere "marketing-speak."

deeplearning.net [deeplearning.net]

Deep learning (Wikipedia) [wikipedia.org]

Unsupervised Feature Learning and Deep Learning [stanford.edu]

Re:Deep learning? (3, Insightful)

AthanasiusKircher (1333179) | about 2 years ago | (#42085299)

A lot of vague marketing-speak in this article. "Deep learning"?

Agreed. Why do we need the adjective "deep"? Perhaps it's because a lot of AI jargon uses "learning" when they really just mean "adaptive" (as in, "programmed to respond to novel stimuli in anticipated ways"), whereas normal human "learning" is much more fluid.

The article basically talks about neural networks

Yet another victory for marketing. These things have been around for at least 25-30 years, and the connection to what little we actually have deciphered about how the brain encodes, decodes, and processes information has always been incredibly tenuous. There always seem to be these AI strands of "cognitive science" or "neural modeling," which are often nothing more than somebody's pet algorithm or black box dressed up with words that make it sound like it has some scientific basis in actual neurophysiology or something.

Don't get me wrong -- I'm sure some of the examples in TFA have made great advances, partly due to speed and hardware unthinkable 25-30 years ago. And some of the functionality of the "neural nets" might give significantly better results than previous models.

But I really wish people would lay off the pretend connections to humanity. Why can't we just accept that a machine might just function better with a better program or algorithm or whatever, rather than saying that "our research in cognitive science [i.e., BS philosophy of the mind] has resulted in neural networks [i.e., a mathematical model instantiated into programming constructs] that exhibit deep learning [i.e., work better than the previous crap]."

(Please note: I mean no insult to anyone who works in neuroscience or AI or whatever. But I do question the jargon that seems to make unfounded connections and assumptions that the brain works anything like many algorithmic "models." We may succeed in creating artificial intelligence by developing our own algorithms or we might succeed by imitating the brain, but I don't think we're making progress by pretending that we're imitating the brain when we're really just using marketing jargon for our pet mathematical algorithm.)

Re:Deep learning? (3, Informative)

Black Parrot (19622) | about 2 years ago | (#42085453)

Why do we need the adjective "deep"?

Because the "deep learning" technologies use artificial neural networks with many more layers than traditionally, making them "deep architectures".

It's widely accepted that the first hidden layer of an ANN serves as a feature detector (possibly sub-symbolic features that you can't put a name to), and each successive layer serves as a detector for higher-order features. Thus the deep architectures can be expected to have some utility for any problem that depends on feature analysis.

Re:Deep learning? (2)

AthanasiusKircher (1333179) | about 2 years ago | (#42085499)

I completely agree that you've justified the use of the adjective "deep" in regard to "deep architectures" (and I got that before writing my post). I still don't get how this "deep" has much to do with "learning," though, in the broader world... and even if we equate the jargony connotations of "machine learning" with "learning," it still seems a stretch to use "deep" as an adjective directly applied to that... but perhaps it's just me.

Re:Deep learning? (2)

Black Parrot (19622) | about 2 years ago | (#42085567)

I completely agree that you've justified the use of the adjective "deep" in regard to "deep architectures" (and I got that before writing my post). I still don't get how this "deep" has much to do with "learning," though, in the broader world... and even if we equate the jargony connotations of "machine learning" with "learning," it still seems a stretch to use "deep" as an adjective directly applied to that... but perhaps it's just me.

I have a bigger issue with "learning" than with "deep", since with very few exceptions ANNs don't learn anything autonomously, but rather are adjusted by an external algorithm to perform well on a given problem. "Deep training" would make sense for "deep architectures".

Re:Deep learning? (0)

Anonymous Coward | about 2 years ago | (#42085663)

Come on, this is just age-old scientific naming. Everyone thinks their discovery is the last ever, the newest, shiniest one that will solve all the world's problems, and they name it accordingly.

Treat it like electricity, right, where the electricity flows from negative voltage to positive voltage. Brilliant naming there (perfectly explainable given the perspective of the time it was discovered, though). Or terms like "the modern age", "the new age" and, my favorite, "the newest age" (generally considered to be in our past, naturally). Or the name "atom" (Greek for "indivisible") -- brilliant naming there. Some people also get a bit ahead of themselves, like the "strange" quark flavor in quantum mechanics (yes, really). You can combine the flavors, which goes like this: charm + anti-strange = strange D meson. Elementary, right?

Re:Deep learning? (2)

ceoyoyo (59147) | about 2 years ago | (#42087027)

Actually, it seems your post is the vague one. "normal human "learning" is much more fluid." What does that mean?

Learning: (dictionary.com)
1. knowledge acquired by systematic study in any field of scholarly application.
2. the act or process of acquiring knowledge or skill.
3. Psychology. the modification of behavior through practice, training, or experience.

Many machine learning algorithms "learn" exactly the way you'd teach a child. They see examples, you tell them what the object, word, etc. is, and they remember that answer imperfectly. Repetition improves their accuracy and a breadth of examples improves their generality. After not seeing something for a while, they may forget it.
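As a concrete (if ancient) instance of "see an example, get told the answer, adjust imperfectly, repeat" -- not from the post above, just the classic perceptron update rule, assuming numpy:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)     # the "teacher" supplies the labels

    w, b, lr = np.zeros(2), 0.0, 0.1
    for epoch in range(10):                        # repetition improves accuracy
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:             # got this example wrong: nudge the weights
                w += lr * yi * xi
                b += lr * yi

    print(np.mean(np.sign(X @ w + b) == y))        # fraction of examples now "remembered"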

As the other poster pointed out, "deep" describes algorithms that are better able to teach multi-level systems. The changes associated with learning are better propagated to deeper levels, better utilizing all the capacity of the system.

No, it's not just you. There are a lot of people who see the brain as the last bastion of their identity as some kind of special and privileged creature, therefore it must be magical and any attempts to explain how it works are misguided, childish and silly. Whether that's your actual belief or not, that's what your post sounds like. Modern computational neuroscience has actually come a long way. We're even capable of producing chips that can be implanted and replace some parts of the brain. It's not magic.

deep shit (1)

globaljustin (574257) | about 2 years ago | (#42085599)

Why do we need the adjective "deep"?

Because the "deep learning" technologies use artificial neural networks with many more layers than traditionally, making them "deep architectures".

So, you admit 'deep' is a marketing buzzword...thank you. It's *obviously* not a technical term.

It is a discrete, ordinal description of a quantity...that's ALL the word 'deep' in this context means...which means it's a non-technical word...and non-technical words used to make non-existent distinctions in order to gain attention...

well that's a marketing word...

Re:deep shit (1)

smallfries (601545) | about 2 years ago | (#42085919)

When people write a paper for publication they have to differentiate their approach from previous approaches. You seem to have latched onto deep as an imprecise description of the number of layers. It is not. It is an accurate distinction in comparison to previous approaches. Because previous approaches were limited to about (not exactly) two layers it makes the definition of the label a little fuzzy, but the partition into shallow / deep approaches is crisp.

Re:deep shit (1)

AthanasiusKircher (1333179) | about 2 years ago | (#42086447)

Then how about something like "multilevel" or "multilayer adaptive networks of transfer functions" or something like that (I'm sure someone can improve the precision of that description)... rather than the vague and imprecise "deep learning neural networks", which makes implicit and inaccurate connections to brain processes for no good scientific reason (other than to fool people into giving grant money).

Re:Deep learning? (1)

maxs-pooper-scooper (2528808) | about 2 years ago | (#42086637)

The term "deep" comes from the idea that the algorithm is trying to learn something deeper than previous algorithms. In fact, the usual set of machine learning algorithms are termed shallow learning now. The difference is that deep learning tries to model P(X) whereas shallow learning (SVM, NN, naive Bayes, etc..) try to learn P(X|Y) where X is your input space and Y is the label space.

In deep learning, these neural networks are not your usual NNs. Deep learning isn't just taking advantage of hardware scaling for more nodes and layers, rather it uses convolutional NNs which are slightly different.

Another difference is that deep learning is trying to learn an efficient representation for the inputs, i.e. automatic feature generation. This is not to say it trying to become an automatic unsupervised learning technique, but instead a supervised learning approach that takes care of the most time intensive and critical process (and typically unappreciated and overlooked) of any machine learning process -- feature extraction/generation.

Re:Deep learning? (1)

Anonymous Coward | about 2 years ago | (#42085469)

It looks like you are seeing something that is not there. The majority of neural network research is about developing new and/or improved algorithms to solve problems, not about saying anything about how the human brain works. Some of the terminology might be borrowed from things related to the brain due to past inspirations, but researchers couldn't care less whether the algorithms actually model what goes on in the brain, because that is not the point. Much of the jargon refers to specific things and isn't just a marketing layer on top of the actual math, and it wouldn't be the first time a math-related field has used terminology based on very loose analogies, or even a complete lack of analogy (e.g., don't assume work on happy numbers [wikipedia.org] has anything to do with modeling psychology).

Re:Deep learning? (3, Interesting)

AthanasiusKircher (1333179) | about 2 years ago | (#42085547)

It looks like you are seeing something that is not there. The majority of neural network research is about developing new and/or improved algorithms to solve problems, not to say anything about how the human brain works.

As someone who has read a lot of the founding literature of modern cognitive science and the philosophy of mind in the 1950s through 80s, which was hugely influential in setting up the early approaches to AI (including neural nets), I have to say -- this is where the stuff came from.

And frankly, in a lot of applications in more obscure disciplines, such as AI analysis in the humanities, researchers are still making claims about these models and their relationships to the actual brain. Hell, just a few years ago I heard a leading cognitive scientist claim that he had found evidence for a sort of musical "circle of fifths" neural network in an actual circular physical structure of neurons in the brain... a made-up musical model grafted onto a made-up AI brain model, supported by noisy data... I admit this is an extreme example, but it's not unique.

I understand that modern researchers in "pure" AI may want to avoid recognizing the history or the implications of the terminology -- but there's a reason why the Starship Voyager was equipped with "neural gel-packs" that could get anxious and cause a warp-core breach at a temporal anomaly... words like "neural" actually mean something, and these "neural nets" have about as much connection to the biological function of actual neurons as Voyager's bizarre "neural gel-packs." Yet the implicit metaphor made in continuing to use the term should not be underestimated, not just in a general audience NYT article, but in the way fields are subtly shaped by their nomenclature.

Re:Deep learning? (0)

Anonymous Coward | about 2 years ago | (#42085677)

I don't see how that amounts to refusing to recognize the history. One can acknowledge both that something was inspired by something else and that it no longer has any connection to it. And while "neural" means something specific in biology, it can mean something specific but different in computer science. That is the nature of jargon sometimes. Simulated annealing has gone a long way beyond its roots in thermodynamics, and hill climbing algorithms don't seem to have much to do with actual hills any more... This comes up in so many examples in so many fields; many people move on and just have to live with reminding outsiders to the field that the meanings have diverged, as opposed to inventing new words for things that have already developed a well established meaning one way or another.

Re:Deep learning? (1)

AthanasiusKircher (1333179) | about 2 years ago | (#42086383)

One can both acknowledge that something was inspired by something and no longer has any connection to it. And while "neural" means something specific in biology, it can mean something specific but different in computer science. That is the nature of jargon sometimes.

I completely get your point, and if it were just one or two words ("neural" or whatever), I might agree. But the influence in this case is pervasive, and it has shaped and continues to shape the way we talk about the field. New nomenclature often continues to extend the mind metaphors, when there is no necessary reason to. Why call it "deep learning" when "multilevel" or "multilayered" might better describe the process? Etc. That was the point of my original post. And frankly, the nomenclature seems to continue to generate a lot of confusion among scholars interested in cognitive science, if my pretty thorough familiarity with cognitive models applied to problems in the professional literature of the humanities is any indication.

Re:Deep learning? (0)

Anonymous Coward | about 2 years ago | (#42085317)

You're wrong. These networks have a unique structure and a unique method for learning neuron weights. The nets can be shown to be equivalent to a Bayesian network and the learning technique does a remarkable job of bypassing local minima when learning the parameters of the target probability distribution.

Re:Deep learning? (2)

Prof.Phreak (584152) | about 2 years ago | (#42085417)

Advances are in ways of learning hidden layers that are slightly more clever than backpropagation. For example, let's say you have an image; apply some transform to it (DCT, wavelet, whatever, a neural net layer, etc.) and save all the important features, but in, say, 10x less space. Then do the same to those features, each time reducing the amount of data by 10x. After a few such layers, let's say you're left with 10 bits worth of information---the ``most important'' (according to the benchmark you used) ten bits of the whole image.

The ten bits could be anything, such as `this image is a car' or `this image is a face', or `this face looks angry', etc.

The trick is applying the benchmark to the hidden layers---e.g. how do you pick out which features are important after applying a transform? For that, you train another (inverse) transform that recovers the original data from the features---the one that gets you closest to the original wins (e.g. let's say you feed 1000 bits into a neural net to get 100 bits out, and then via the inverse transform turn those 100 bits back into the *original* 1000 bits... that would mean your 100 bits represented all the information in the input 1000 bits---obviously more often than not you won't get a perfect match, but something close---repeat for any number of layers you want).
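What's described here is essentially an auto-encoder: squeeze the data through a narrow code, then judge the code by how well a decoder can reconstruct the input. A minimal linear sketch (my own toy, assuming numpy; the optimal linear encoder/decoder pair happens to be PCA, standing in for the nonlinear nets used in practice):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))
    X += (X[:, :8] @ rng.normal(size=(8, 64))) * 3   # give the data some low-dimensional structure

    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)

    k = 8
    code = Xc @ Vt[:k].T                 # "encoder": 64 numbers squeezed down to 8
    recon = code @ Vt[:k]                # "decoder": rebuild the original 64 from the code
    err = np.mean((Xc - recon) ** 2) / np.mean(Xc ** 2)
    print(f"relative reconstruction error with an 8-number code: {err:.3f}")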

Re:Deep learning? (0)

Anonymous Coward | about 2 years ago | (#42087009)

People often mistake neural networks for the multilayer perceptron (MLP) algorithm trained using backpropagation, which is what has been around for decades. Neural networks should really be seen as a class of algorithms instead. Things like restricted Boltzmann machines and deep belief networks are very different from the MLP in terms of the theories they were based on as well as their capabilities.

More info plz (1)

Anonymous Coward | about 2 years ago | (#42085173)

Without the success rate it's hard to see why the Merck contest is an impressive example, since "rand()%15", which presumably performs the same as an untrained neural net, will win it sometimes too and is not very interesting.
That being said, the other examples in TFA are better.

Open knowledge (0)

Anonymous Coward | about 2 years ago | (#42085205)

We need to open all the documentation for everyone who want to learn and investigate about IA.

Re:Open knowledge (2)

AthanasiusKircher (1333179) | about 2 years ago | (#42085399)

We need to open all the documentation for everyone who want to learn and investigate about IA.

Absolutely. It's about time we figured out who really won those caucuses -- and what the heck is up with the ethanol subsidies?

A.I. is 82,7% hype (-1)

Anonymous Coward | about 2 years ago | (#42085283)

Typed by a human.

Can their handwriting recognition solve captchas (2)

blue trane (110704) | about 2 years ago | (#42085305)

yet?

Re:Can their handwriting recognition solve captcha (4, Funny)

slashmydots (2189826) | about 2 years ago | (#42085381)

Humans can't even solve those, lol.

Can You Imagine a Beowulf Cluster of These? (1)

jjh37997 (456473) | about 2 years ago | (#42085367)

Can You Imagine a Beowulf Cluster of These?

Re:Can You Imagine a Beowulf Cluster of These? (1)

maxwell demon (590494) | about 2 years ago | (#42085403)

Maybe. Can it learn to run Linux?

Re:Can You Imagine a Beowulf Cluster of These? (0)

Anonymous Coward | about 2 years ago | (#42085431)

Yes, but it has great difficulty making any sense of Unity or Gnome3. The first time it tried, it ran up a virtual tab on Amazon amounting to trillions of dollars.

Re:Can You Imagine a Beowulf Cluster of These? (1)

AthanasiusKircher (1333179) | about 2 years ago | (#42085447)

And then tried to get out of its virtual debt by mining bitcoins.

Re:Can You Imagine a Beowulf Cluster of These? (0)

Anonymous Coward | about 2 years ago | (#42086871)

Wrong question. Can it learn to write Linux?

Neural Network for Machine Learning on Coursera (4, Informative)

Anonymous Coward | about 2 years ago | (#42085443)

I'm doing Prof. Hinton's course on Neural Networks on Coursera this semester. It covers the old school stuff plus the latest and greatest. From what I gather from the lectures, training neural networks with lots of layers hasn't been practical in the past and was plagued with numerical and computational difficulties. Nowadays, we have better algorithms and much faster hardware. As a result we now have the ability to use more complex networks for modelling data. However, they need a lot of computational power thrown at them to learn compared to other machine learning algorithms (e.g. random forests). The lecture quotes training taking days on an Nvidia GTX 295 GPU to learn the MNIST handwritten dataset. Despite this, the big names are already using this technology for applications like speech recognition (Microsoft, Siri) and object recognition (the Google cat video -- okay, that's not a real application yet).

Re:Neural Network for Machine Learning on Coursera (1)

PlusFiveTroll (754249) | about 2 years ago | (#42085507)

The hardware since the 295 days is at least around 3 times as fast, too. It seems just about every publication on neural networks has had something about GPUs in the last few years.

http://www.neuroinformatics2011.org/abstracts/speeding-25-fold-neural-network-simulations-with-gpu-processing [neuroinformatics2011.org]

Re:Neural Network for Machine Learning on Coursera (1)

Sulphur (1548251) | about 2 years ago | (#42086797)

The hardware since the 295 days is at least around 3 times as fast, too. It seems just about every publication on neural networks has had something about GPUs in the last few years.

http://www.neuroinformatics2011.org/abstracts/speeding-25-fold-neural-network-simulations-with-gpu-processing [neuroinformatics2011.org]

From the article : Furthermore, to increase the number of calculated time steps increases exponentially the computation time with the CPU while the computation time increases only linearly with the Graphic Processor Unit.

Eh?

Re:Neural Network for Machine Learning on Coursera (2)

IamTheRealMike (537420) | about 2 years ago | (#42086263)

Actually, Google has already launched neural network based speech recognition [blogspot.ch] . The cat demo was for fun, the underlying technology is already applied to real problems though. I can tell you now based on practical experience as a user that the accuracy boost from it has been amazing. The dictation feature in Android went from being "amusing toy" to "actually useful" almost overnight.

Re:Neural Network for Machine Learning on Coursera (1)

Anonymous Coward | about 2 years ago | (#42086565)

Minor correction after looking at the lecture slide again. It took a few days using a Nvidia GTX 285 (not 295) GPU to train 2 million 32x32 color images on a network with approximately 67,000,000 parameters, not the handwritten database.

Why? (1)

Anonymous Coward | about 2 years ago | (#42085553)

Why do we want to obsolete ourselves with AI?

Old News (1, Interesting)

Dr_Ish (639005) | about 2 years ago | (#42085701)

While there have been advances since the 1980s, as best I can tell most of this report is yet more A.I. vaporware. It is easy to put out a press release. It is much harder to do the science to back it up. How did this even get posted on the /. front page? If this stuff were true, I'd be happy, as most of my career has been spent working with so-called 'neural nets'. However, they are not neural; that is just a terminological ploy to get grants (anyone ever heard of the credit assignment problem with BP?). Also, there have been some compelling proofs that most neural networks are just statistical machines. So, move on. Nothing to see here folks, etc.

Common sense (0)

Anonymous Coward | about 2 years ago | (#42086103)

Another win for common sense. They only figured out to use entity relationships for learning?

Need some good drugs to believe it? (1)

3seas (184403) | about 2 years ago | (#42086315)

... wake up, people... it's the fucking drug industry looking for any excuse it can to sell you another one of their drugs...

And pot remains, for the most part, illegal.....

I think we already have achieved artificial intelligence... in humans...

Referencing (1)

Tempest451 (791438) | about 2 years ago | (#42086341)

Computers are great at storing and retrieving data, but what they lack is the ability to reference that data in a meaningful way. An AI can recognize an eagle, a white star, and red and white stripes, but it can't readily see what connects those objects to the American flag. Everything about how humans see the world is pattern recognition, but it is the way we reference those patterns that expresses our intelligence.

Neural networks have their limitations (1)

Hentes (2461350) | about 2 years ago | (#42086599)

While neural networks do amazingly well on a certain class of problems, they do have their limitations. Neural networks are good for designing reflex machines that react to their current environment. They aren't efficient when they have to learn in the field or plan ahead.

It had to be asked (1)

cellocgw (617879) | about 2 years ago | (#42086661)

From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent."

So the AI is going to turn some molecules into an FBI undercover snitch? That's some serious DNA-FU there!
