
holy_calamity writes "Boston company Lyric Semiconductor has taken the wraps off a microchip designed for statistical calculations that eschews digital logic. It's still made from silicon transistors, but they are arranged in gates that compute with analogue signals representing probabilities, not binary bits. That makes it easier to implement calculations of probabilities, says the company, which has a chip for correcting errors in flash memory claimed to be 30 times smaller than a digital logic-based equivalent."

It would seem that they have reinvented the analog computer, but this time entirely on a chip. And probably (hopefully) with some logic that prevents errors due to natural processes like capacitive coupling.

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

Re:Analog Computers (2, Informative)

Anonymous Coward | more than 4 years ago | (#33286424)

This has nothing to do with analog computers. It has to do with probability of error:

ref1: http://www.hindawi.com/journals/vlsi/2010/460312.html
ref2: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5118445

Re:Analog Computers (5, Informative)

Anonymous Coward | more than 4 years ago | (#33286528)

No, it does. We aren't trying to reduce error in logic operations. We're passing analog values between one and zero into logic circuits. Literally, at the lowest level, the "bits" pumping through the chip are probabilities. It's not analog in the sense that we use op amps, we still use gates, but the inputs and outputs of the gates are probabilities, not hard bits.

Re:Analog Computers (1, Insightful)

Anonymous Coward | more than 4 years ago | (#33287358)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high-speed, high-gain, ultra-high-distortion op-amp.


Forgot your introductory digital design courses already?

Digital circuits are designed to reliably transmit or compute a digital value in the presence of noise. The way this is done is by excluding huge ranges of voltages and making very high gain op-amps that, while fast, do not need to be accurate. Accuracy is thrown out the window in favor of speed and noise immunity. You will (or should) never see a properly operating op-amp in a digital circuit putting out a voltage other than something in the range representing a 0 or 1 (in TTL-compatible circuits, for example, 0 to 0.4 V for a 0 and 2.4 to 5.0 V for a 1... note that I'm quoting output ranges, not input ranges). The acceptable voltage ranges were designed such that a valid 0 signal, when combined with inevitable noise, would still be read as a 0 at the next stage; mid-range values are not permitted. See, e.g., http://www.interfacebus.com/voltage_threshold.html [interfacebus.com] .

Op-amps designed for accurate reproduction of analog values are an entirely different creature, one where accuracy is among the primary design requirements. In contrast to digital circuits, a mid-range value is not only permissible, but expected.

So while both digital and analog logic use op-amps, the design requirements and valid signal ranges are vastly different.

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

Sure, you can make them cheap. But QA could be a bitch, I imagine. Simply ensuring that all the gates in use operate linearly within a small error margin should be hard. And how are you going to give error margins for each output calculated? After all, it's analog, not digital.

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

Analog ICs have been around since they put two transistors on a base. There's nothing new about an analog computer, other than maybe putting all the pieces together onto a single piece of silicon, but analog ICs are plentiful. The lowly op-amp is a very common one, and there are often transistorized equivalents for many passive components (because making a transistor is many times easier than making a resistor/capacitor/inductor in silicon, and the transistor version has better stability and specifications than what's possible by building the passive components in silicon directly).

Heck, the 555 is a very common analog IC - it just has a flip-flop on its output as its sole digital component. And nevermind the DACs and ADCs and other mixed-signal ICs out there.

Of course, if they managed to do this using digital IC fab technology (analog ICs are very "big" when you compare to modern digital deep submicron technology), that'll be a huge breakthrough.

Re:Analog Computers (2, Interesting)

Anonymous Coward | more than 4 years ago | (#33286554)

Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

Calling it analog is an insult.

Re:Analog Computers (0)

Anonymous Coward | more than 4 years ago | (#33286640)

I am assuming that, by your statement, you mean that probability computing can be analog or digital, and is not definitively one or the other? I was reading your post, and I first thought you were saying that it is a third category (which makes no sense).

But, that being said, why is calling it analog an insult? If analog (continuous) logic/numbers are being used rather than digital (discrete) logic/numbers, then analog is not an insult, simply accurate - it is describing how it works, not what it is focused on doing.

Based on the reference given above, the idea is to use the possible error rate of a particular assembly of gates to generate a result that represents a probability. Say, by intentionally lowering the voltage level and running a particular logic operation, the probability that the result is wrong (because of the physical limitations of the device) becomes the desired output, rather than having to raise the voltage to ensure the logic is right all the time.

The whole idea is to use fewer gates and less energy to come up with the same statistical result in silicon. Organizing the gates in different structures will give different probabilities of producing errors, so in theory, with enough empirical data, we can safely predict the probability of an error coming from a certain arrangement. That is the beauty of statistics, after all; it does not have to be dead accurate as long as we are within the margin of error. The results will still have the base signals of 0 and 1, except they now represent certain probabilities instead of a hard 0 or 1 bit.

Yes, the theory is new, so it would be hard to validate, but it would certainly be interesting to see how it works out in real-life applications.

If you're referring to the first application on the website[1], yes, it does mention error correction for digital storage. However, TFA refers to an entirely different application. Also, the paper linked to somewhere below suggests that it is some kind of analog computer on chip. Which is amazing, because it's pretty difficult to get high densities and still preserve low noise levels.

It would seem that they have reinvented the analog computer, but this time entirely on a chip.

Actually it would be sweet to have an equivalent to a CPLD or FPGA [wikipedia.org] for analog electronics, where an entire analog sub-system could be reduced to a single chip, reducing the cost and board real estate for use in low-cost electronics and reducing noise levels. Maybe it's my math background, or working in scientific computing, but being able to work natively with continuous numbers, versus discrete representations that are often only approximate (a la floating point numbers in a digital computer), would be nice.

If such a computing device could be scaled up in "logical unit" density and speed like we've seen with digital computers, it could prove useful in a number of applications. Depending on the quality of the noise (undefined or unintentional variation in numeric values) and its management, it might prove fruitful for scientific simulation such as weather forecasting, fluid dynamics, and any other model of physical conditions that is more accurately described in continuous numbers (i.e. the Real [wikipedia.org] number domain).

Re:Analog Computers (0)

Anonymous Coward | more than 4 years ago | (#33289216)

...errors due to natural processes like capacitive coupling.

What are the odds of that?

There are 10 kinds of people in the world.. (5, Funny)

User: Are we on the right road to the beach?
Google Maps: Probably.
User: The fuck?... Is this the beach road or not?
Google Maps: I'd say yes...ish. Most likely....
User: The road is cut! It ends like right here!
Google Maps: Let me change my first answer to "I wouldn't bet on it. Much. I wouldn't bet much on it.... Ok no, it's not likely to be the road. I'm turning off now. Good luck!"

Re:There are 10 kinds of people in the world.. (1)

Now, if you were a snowbound driver, in...let's say, Oregon. Your family is in the car, and you have to get back to your online magazine job in the Bay Area, and Google maps says to take that seasonal road through the woods...

Re:There are 10 kinds of computers in the world.. (0)

Anonymous Coward | more than 4 years ago | (#33287420)

...and out of those, 12.5% understand binary at any given time, and 87.5% do not...

A Computer for Truth Challenged Scientists? (2, Funny)

can I get a simpler explanation of what this can do for you? I understand it says it will be used for probability, but what does that really let you *do*?

well, to be honest, i have absolutely no idea - this is really new, and if it takes off, it'll be a while until it becomes clear what the possibilities and limitations of this technology are. many people have pointed out that it's just an analog computer, but it's not the same - first of all, it's on a microchip, hence much more power. that alone makes things different...

Re:may i just say (0)

Anonymous Coward | more than 4 years ago | (#33286762)

Probabilistic computing is widely used already. For example, what's the PROBABILITY, given past categorizations, that an email is spam? Software can answer that question, but because strictly-digital general purpose processors are designed to say "given these two definite operands, the definite answer is x" they're not especially efficient at answering this type of question. Lyric's stuff is. Instead of a GP processor saying "xor(1,0)=1", the kind of thing that takes a cycle or two, Lyric's technology gives the answer to xor(.5,.5) in the same amount of time.

You are a person who is learning from a machine or....
You are a learning machine who is now referring to itself as a person! You also get excited about probabilities and you are posting on /.


A.I. has gone too far...

On the plus side, it sounds like the robot revolution is going to be stymied for the same reason as my productivity. Destroy all humans! After I refresh /. one more time...

what is the probability (0)

Anonymous Coward | more than 4 years ago | (#33286498)

Been there, done that. Analog computers existed 50 years ago because digital computers were too slow. Even then, they were a niche market. Calibration is a big issue, and even with a perfectly calibrated machine you don't have a lot of accuracy.
With the speed of today's digital computers, this is a (poor) solution in search of a problem.

Re:Analog computers live again!! (1, Informative)

Anonymous Coward | more than 4 years ago | (#33286572)

It's not the same kind of analog. It's analog in the sense that it operates on things between 1 and 0, but it still uses logic.

Re:Analog computers live again!! (1, Interesting)

Anonymous Coward | more than 4 years ago | (#33286606)

Just like nobody needs enough vector float computations and SIMD instructions at once to justify making a card unit that does a @$%#$ ton of them at once. This chip, in a PCIE card could make a lot of sense.

Re: (0)

Anonymous Coward | more than 4 years ago | (#33286704)

This would be ideal for mobile telephones and GPS devices. Signal reception, noise cancellation and error correction can all be done faster and with less energy when done in analog.

Re:Analog computers live again!! (0)

Anonymous Coward | more than 4 years ago | (#33287614)

I agree about the solution in search of a problem bit. This is a lot like the NoSQL school of thought where people comment that if you yank all error correction code, the stuff runs like blue blazes.

Yes, it will run faster. But would a bank or a business want to trust results from chips built to have little to no error correction? I sure as heck wouldn't. Just as I want databases to have basic integrity (which NoSQL based systems toss in return for performance), CPUs need to be able to come up with the same value after an extensive calculation 100% of the time. Not 99%. Not 99.99999%.

An area this technology might be used in could be embedded controllers, which are not general-purpose devices.

If you're building a thrust vectoring system for a plane, and the servos have an accuracy of 1%, then it is more important to deliver more frequent servo updates than to deliver those updates with 0.0001% accuracy. If your device is attached to a sensor that has 5% manufacturing tolerances then you may not need even 8-bit precision on the math.

In the IS discrete-math world we tend to view numbers as precise figures. However, most numbers that computers deal with are actually not discrete quantities, and they can have substantial levels of error. If you can optimize a system to make it faster and cheaper and have it consume less power at the cost of a level of error that is still negligible compared to the error already in the system, then that is a good move.

However, I'm not sure this will ever make sense for GP computing, unless you can make it part of a standardized coprocessor or something like that. In GP computing it is hard to know where error can be accepted, unless this is specified by the programmer.

GCC has -ffast-math, and I see this as something similar.

Probability in computers: it's called a float (4, Insightful)

The article mentions Bayesian calculations. Can these computers really speed up those calculations? Nowadays Bayesian calculations usually involve thousands of iterations of a technique called Markov Chain Monte Carlo [wikipedia.org] (MCMC) unless the distributions in question are conjugate priors [wikipedia.org] . The simulation then converges to the right answer.

The issue I see is that all these techniques are just math. They are either analytic (conjugate priors) or require strict error bounds in order to get sensible answers (MCMC). There's no separate system of math that Bayesians use. Like many others, Bayesians just need quick, reliable floating point mathematics. So anyway, I don't see how this can help Bayesian statisticians, unless it also revolutionizes engineering, physics, etc.
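The conjugate-prior case mentioned above is the one that needs no iteration at all. A minimal sketch (the Beta-Binomial pair is the textbook example; the prior and data values here are invented for illustration):

```python
# Conjugate update: with a Beta(a, b) prior on a coin's bias and k heads
# in n flips, the posterior is Beta(a + k, b + n - k) in closed form --
# no MCMC iterations required.

def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

# Uniform Beta(1, 1) prior, then observe 7 heads in 10 flips:
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
posterior_mean = a_post / (a_post + b_post)   # 8 / 12
```

Everything outside the conjugate families falls back to sampling, which is where the speed question actually bites.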

Re:Probability in computers: it's called a float (1)

I've been dealing with Bayesian methods for a few years, too. I understand the goal of the hardware is not to run everything that is being sold as Bayesian methods. Basically Bayesian calculations mean computing conditional probabilities, which usually gets down to a ton of multiplications and additions. If the analog hardware can produce results for a particular subproblem with sufficient accuracy, then you are saving a lot of power and time. If it can produce estimates that are not entirely accurate but within sufficient bounds, then you can still avoid a whole lot of digital computations by narrowing down on possible solutions.

Re:Probability in computers: it's called a float (0)

Anonymous Coward | more than 4 years ago | (#33287432)

Right: and it ends up calculating estimates with better-than-good-enough error bounds in a matter of a few cycles.

Re:Probability in computers: it's called a float (1)

We were using Bayesian nets on a project back in '89, using them to estimate probabilities on the state of certain installations based on text reports. It was pretty hairy, we were implementing Judea Pearl's algorithm, which was a pain to implement (actually we were re-implementing it from Lisp to C) and not that fast on the old Mac IIfx. It was quite powerful, though, depending on the quality of the knowledge input. I never got into the math, but I understand that other algorithms have been developed that simplify the math to some degree. Neat that they're now making chips for what we did in software.

Re:Probability in computers: it's called a float (1)

This technology seems to be a marriage between analogue computing and forward error correction (FEC) algorithms. FEC algorithms are "nice" in that you can have minor errors in their implementation, and they still work, albeit with slightly lower coding gain. (This also makes them hard to debug, as they tend to correct their own errors!) Generally, errors accumulate in analogue computing, but in FEC algorithms they should get corrected. The savings come from replacing an array of logic gates (as required to implement an operation on an 'n'-bit word) with an amplifier whose voltage has a resolution of 1 part in 2^n.
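That tolerance for small errors shows up even in the simplest FEC scheme. A toy sketch (a 3x repetition code with majority vote, purely generic, not what Lyric uses):

```python
# A 3x repetition code still decodes correctly when any single copy of a
# bit is flipped in transit -- the error is absorbed by the majority vote.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1]
coded = encode(msg)
coded[4] ^= 1                 # flip one transmitted bit
assert decode(coded) == msg   # still decodes to the original message
```

Real codes (LDPC, turbo codes) get vastly better coding gain, but the self-correcting property is the same.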

Re:Probability in computers: it's called a float (1)

I suspect the variable you're leaving out is, in a binary computer context "floating point math" usually means IEEE floating point, which is a very different animal than the abstract concept that comes to mind when you say "floating point math". Even when it doesn't mean IEEE floating point, every binary floating point implementation is a compromise with some combination of limited performance, limited range, and limited precision.

Consider that IEEE floating point has no exact representation of numbers like 0.1; this may not matter if it's the final answer, because you're likely to limit the significant figures displayed sufficiently that the computer will appear to have reached exactly the right answer; but if that value is a variable in the middle of a long, complex chain of calculations, errors may accumulate and you may not be so happy with the result.
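The 0.1 example is easy to check directly:

```python
# 0.1 has no finite base-2 expansion, so ten repeated additions of it
# drift away from 1.0 -- the accumulated-error effect described above.
total = 0.0
for _ in range(10):
    total += 0.1

drifted = (total != 1.0)   # True: total is 0.9999999999999999
```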

For some problems, analog computers are and always have been better at performing the required calculations. That, in and of itself, is nothing new.

Re:Probability in computers: it's called a float (1)

Sounds like someone has been living in a perfect reality (aka drugs).
I dare you to make a voltage regulator with better precision than an IEEE 64-bit floating point number... or even a 32-bit one.

Then try to implement mixers and actual logic.

Then embed it into a tiny circuit amid an extremely noisy environment.

Of course, the last two are just academic, as you're never even going to manage the voltage regulator without some extreme equipment.

Re:Probability in computers: it's called a float (4, Informative)

[...] Nowadays Bayesian calculations usually involve thousands of iterations[...]. The simulation then converges to the right answer.

The convergence you refer to is asymptotic. In practice it takes about 10000 iterations to get around a 1% bound on a single probability point estimate, and a factor of a hundred for each order of magnitude improvement. On top of that, if you're dealing with multiple distributions the overall expectation is not just a simple function of the component expectations unless the whole system is linear, you need to use convolution to combine results. And on top of that, lots of interesting problems are based on order statistics, not means/expectations. Having hardware that correctly manipulates distributional behavior in a few CPU cycles would blow the doors off of MCMC.
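The scaling behind "about 10,000 iterations for a ~1% bound" is just the standard error formula for a Monte Carlo probability estimate, which shrinks as 1/sqrt(N). A quick sketch:

```python
# Standard error of estimating a probability p from n independent draws:
# sqrt(p(1-p)/n). Each 10x improvement in error costs 100x the samples.
import math

def mc_std_error(p, n):
    return math.sqrt(p * (1.0 - p) / n)

se_10k = mc_std_error(0.5, 10_000)      # 0.005
se_1m  = mc_std_error(0.5, 1_000_000)   # 0.0005: 100x samples, 10x better
```

Hardware that produced the distributional answer directly would sidestep this whole tradeoff.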

Re:Probability in computers: it's called a float (1)

Yeah, fine, we'd all like to compute our likelihoods faster, and you can imagine hardware which does it, but it's not clear from the article why doing this analog is superior. Or even how you can avoid sampling algorithms like MCMC using this approach. How does making probabilities analog get you a joint probability distribution over a parameter space, let you marginalize parameters to a lower dimensional distribution, and all the other things MCMC is for? I have a feeling that this chip is targeted at Bayesian networks, where you're just propagating conditional probability values down a graph, rather than at MCMC.

Analogue Computing (2, Insightful)

Anonymous Coward | more than 4 years ago | (#33286534)

This is potentially a great advance. Everyone knows that analogue computing can greatly outperform digital computing (now each bit has a continuum of states so stores infinitely more data, each operation on 2 'bits'....you get the idea)....but there are many issues to resolve, e.g.:

1) Error correction - every 'bit' is in an erroneous state
2) Writing code for the thing - anyone got analogue algorithm design on their CVs?

Re:Analogue Computing (0)

Anonymous Coward | more than 4 years ago | (#33286778)

It could well create a whole new area of computing development.

Analogue computing is the eventual goal for computing, and I'm hopeful this will be one huge leap towards that goal. Hell, even Quaternary Computing would be better than crappy Binary. The only reason people won't make the switch is because it would mean changing everything to handle 4-bit. And for the most part, all of that crap could be handled in direct hardware emulation like 64 bit processors emulating 32 bit. God forbid you mention 3D motherboards to some of them, they'd go insane and probably start punching walls.

Analogue hard drives would have been a possibility at this point in time if it weren't for SSDs coming out. HDDs are reaching hard limits that are making them much more expensive to build and more prone to error due to more compact sizes. Moving small things at high speed just isn't a good combination. But SSDs came out because the methods to make analogue HDDs are quite frustrating to build. Even fixed to something like 4 bits can be a challenge (10, 5, -5, -10). And let's not even go near Ternary when it comes to magnets... This method would be entirely possible, but it requires rethinking the firmware layer a bit. Too much work apparently, so let's make stupidly expensive hard drives with complicated and easier-to-break methods! HUZZAH ANALOGUE BRAIN LOGIC! Oh wait...

"Hell, even Quaternary Computing would be better than crappy Binary."

It would make no difference - the algorithms would be identical. All you'd gain would be saved storage space as each "bit" could represent 4 values instead of 2. You'd still be dealing with a system that could only handle discrete values.

I've been waiting.... (0)

Anonymous Coward | more than 4 years ago | (#33286566)

This is the first step in creating an infinite improbability drive, you know...

If 0.8 AND 0.6 = 0.7 (I assume you're taking the average here), then 1 AND 0 would be 0.5, when it's supposed to be 0. The only answers I would accept for 0.8 AND 0.6 are 0.6 (min) and 0.48 (multiplication). An OR gate is constructed by attaching NOT (1 - x here) gates to the inputs and output of an AND gate, yielding 0.8 or 0.92 depending on which rule you go with.

Of course, multiplicative is what I should have done
XOR is a bit of a bugger to figure out, so I will cheat and use
this [wikipedia.org] .
That's all the gates covered.
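The multiplicative rule above can be checked directly. A minimal sketch, assuming the two inputs are statistically independent (the article doesn't specify the chip's actual gate semantics, so this is illustrative only):

```python
# Probabilistic gates under the independence assumption: AND is the
# product, NOT is 1 - x, OR follows by De Morgan, and XOR works out
# to p + q - 2pq.

def p_not(x):
    return 1.0 - x

def p_and(p, q):
    return p * q

def p_or(p, q):
    return p_not(p_and(p_not(p), p_not(q)))   # De Morgan construction

def p_xor(p, q):
    return p + q - 2 * p * q

# The figures from the comment: 0.8 AND 0.6 -> 0.48, 0.8 OR 0.6 -> 0.92.
# Hard bits fall out as the degenerate case: p_and(1, 0) == 0.
```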

Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (0)

Anonymous Coward | more than 4 years ago | (#33286830)

The graph of XOR output on the z axis vs inputs 0 and 1 on the x and y respectively looks like a saddle.

I believe you are correct with the multiplication rule. According to the article,

Whereas a conventional NAND gate outputs a "1" if neither of its inputs match, the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.

But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

I think it is more of a probability thing than what you are thinking of. The return is the probability that the two values are the same. So 0.5 AND 0.5 would be 100%, while 0.5 AND 0.6 would be like 80% or something, depending on the allowed error and uncertainty.

Thinking of this reminded me of BugBrain [biologic.com.au] . If you want to play with Bayesian logic it has a pretty good set of examples including building a neural network to perform simple character recognition.

Anonymous Coward | more than 4 years ago | (#33286620)

Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

Calling it analog is an insult that just shows you need to read a new book. ;-)

If it uses analogue signals internally then it's an analog computer, whatever those signals may represent at a higher level, in the same way that a DSP is just as digital as a crypto chip even though the binary data is used for different things.

Sorry, we'll never have one in America. We can't make proper tea, and I don't believe they can run on coffee.

We shall never experience the WHUMP-thunk of a whale and a pot of petunias landing on our shores, unless one of the Brit boffins makes a mistake and as you know that never happens.

One step closer to the Infinite Improbability Drive (http://en.wikipedia.org/wiki/Technology_in_The_Hitchhiker's_Guide_to_the_Galaxy#Infinite_Improbability_Drive)

Indeed, I'd expect it to have quite limited precision (which usually is OK in probability calculations; you generally don't really care if the probability is 0.34654323 or 0.34654324).

Re:Remember Slide Rules? (0)

Anonymous Coward | more than 4 years ago | (#33287364)

The difference between those two numbers is actually accuracy. In general, analog signals have unlimited precision but limited accuracy, in contrast to a digital signal, which has extremely limited precision (a single bit per signal) but nearly 100% accuracy. They're the two extremes.

See slide 41 for the NAND gate they are bragging about.

I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real world noise, especially embedded on some other digital chip? The powerpoint explicitly states that it is adversely affected especially by the sudden spikes caused by digital noise...

I was about to post the slideshow myself, but I see you beat me to it. :)

Analog computers sound so much more natural than d (1)

It's vaguely familiar, but since no two circuits are *truly* identical at the analog layer, *and* change as the temperature changes, people used digital instead where 'mostly 0' is still '0' and 'mostly 1' is still '1' regardless. Otherwise you can't mass produce them.

Of more interest is people using analog-alike bitstreams, where the average number of 1's vs 0's in a random stream is the amplitude of the analog wave. They then blend the input streams together to produce the output stream. I've mostly seen this done by Royal Holloway University to produce neural chips that *don't* need squillions of interconnections - they just blend probability streams. Looks like people are playing with optical ones now too. Why not put a story up about that instead?
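The bitstream idea is easy to sketch: encode a value in [0, 1] as the density of 1s in a random stream, and an AND gate on two independent streams multiplies the encoded values (the stream length and seed below are arbitrary choices for illustration):

```python
# Stochastic-computing sketch: values are the density of 1s in a random
# bit stream; ANDing two independent streams approximates multiplication.
import random

def bitstream(p, n, rng):
    """n random bits, each 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(42)
n = 100_000
a = bitstream(0.8, n, rng)
b = bitstream(0.6, n, rng)

# Density of 1s in the ANDed stream approximates 0.8 * 0.6 = 0.48.
product = sum(x & y for x, y in zip(a, b)) / n
```

The appeal for neural hardware is exactly what the parent describes: a single wire carries a whole analog-ish value, so you don't need squillions of interconnects.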

Am I really the only person left that hates this construction? I know that it has become (very) common usage, but we, as nerds, should understand that details matter.

If one says that something is 50% smaller, we understand that to mean half the size. And if one says that something is 3000% smaller, or 30 times smaller, should we not understand that as not only taking no space, but actually giving us 29 times the original space back?

Unless we are making a three-part comparison, which has new perils. If B is half the size of A, and C is smaller than B by thirty times as much as B is smaller than A, then we may understand the size of C as 0.5^30 times the size of A. However, if B is 99% of the size of A, then having C be 30 times smaller than B can mean that C is 70% of the size of A, or maybe that C is 0.99^30 times the size of A.

Perhaps we should stick to saying what we mean, with things like "a chip for correcting errors in flash memory claimed to be one thirtieth the size of a digital logic-based equivalent"

While I agree with you that it is unclear or at least not intuitively obvious, the plain fact is that it has been in use for a long time and is very common. It's not a recent development (Jonathan Swift is known to have used the construction in the early 1700s) nor is it rare (almost 500,000,000 Google results for 'times less than'). For better or worse, language is not tied directly to math, nor is the meaning of a phrase necessarily tied to the meaning of the individual words that make it up.

10x smaller means: one tenth of the original size. Hence, 50% smaller means: double the size. Simple, eh?

Not the only one... but wrong and a bit silly (0)

Anonymous Coward | more than 4 years ago | (#33288382)

You know what 30 times smaller means. In fact, you instinctively know it. Manufacturing the chip doesn't expand the universe by 29 times the size of a regular chip. We don't know ways to create more space so there isn't really any other way to interpret that expression. It conveys no misinformation... It is just silly to nitpick about that.

It is different than nitpicking about whether 10 is ten times larger than 1 or ten times as large as 1 because there is one correct one and another that conveys misinformation (though the difference isn't that relevant: those aren't usually meant to be exact statements to begin with). Here... It is just worthless.

Re:Not the only one... but wrong and a bit silly (1)

and if I start. New sentence's. In completely inappropriate. Location's within my text. You can still understand. What I mean. Adding apostrophe's to my plural's also leaves my meaning clear.
It's still not correct.

Sounds like this hardware would be useful for Fuzzy Logic based AI applications. Fuzzy logic is useful for decision making and automating processes where multiple variables affect a range of possible reactions. Like when the cup you grab with your hand turns out to be very light because your girlfriend drank all your juice. When you initially grab the cup you start off with too much muscle activation and then adjust quickly at first then more slowly based on new sensory data. From common experience we know our grip strength isn't a function of one or zero but a range of activation that changes based on the ranges of other inputs. This is something Fuzzy Logic is good at and possibly something this chip would be good for too.
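A minimal sketch of that fuzzy-logic intuition: grip strength as a continuous function of a continuous "heaviness" membership, rather than a 0/1 rule. The membership function and all the numbers here are invented purely for illustration:

```python
# Fuzzy-logic sketch: a ramp membership function maps a continuous input
# (cup weight) to a degree of "heaviness" in [0, 1], which then scales
# grip strength continuously instead of switching it on or off.

def membership_heavy(weight_g, lo=100.0, hi=500.0):
    """0 below lo grams, 1 above hi grams, linear ramp between."""
    if weight_g <= lo:
        return 0.0
    if weight_g >= hi:
        return 1.0
    return (weight_g - lo) / (hi - lo)

def grip_strength(weight_g, min_grip=0.2, max_grip=1.0):
    m = membership_heavy(weight_g)
    return min_grip + m * (max_grip - min_grip)
```

A real fuzzy controller would combine several such memberships with fuzzy AND/OR rules and then defuzzify, but the continuous-activation idea is the same.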

## Analog Computers (4, Insightful)

## timgoh0 (781057) | more than 4 years ago | (#33286370)

It would seem that they have reinvented the analog computer, but this time entirely on a chip. And probably (hopefully) with some logic that prevents errors due to natural processes like capacitive coupling.

## Re:Analog Computers (2, Insightful)

## Sockatume (732728) | more than 4 years ago | (#33286410)

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

## Re:Analog Computers (2, Informative)

## Anonymous Coward | more than 4 years ago | (#33286424)

This has nothing to do with analog computers. It has to do with probability of error:

ref1: http://www.hindawi.com/journals/vlsi/2010/460312.html

ref2: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5118445

## Re:Analog Computers (5, Informative)

## Anonymous Coward | more than 4 years ago | (#33286528)

No, it does. We aren't trying to reduce error in logic operations. We're passing analog values between one and zero into logic circuits. Literally, at the lowest level, the "bits" pumping through the chip are probabilities. It's not analog in the sense that we use op amps; we still use gates, but the inputs and outputs of the gates are probabilities, not hard bits.

## Re:Analog Computers (1, Insightful)

## Anonymous Coward | more than 4 years ago | (#33287358)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

## Re:Analog Computers (2, Informative)

## pz (113803) | more than 4 years ago | (#33288354)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

Forgot your introductory digital design courses already?

Digital circuits are designed to reliably transmit or compute a digital value in the presence of noise. The way this is done is by excluding huge ranges of voltages and making very high gain op-amps that, while fast, do not need to be accurate. Accuracy is thrown out the window in favor of speed and noise immunity. You will (or should) never see a properly operating op-amp in a digital circuit putting out a voltage other than something in the range representing a 0 or 1 (in TTL-compatible circuits, for example, 0 to 0.2 V for a 0 and 4.7 to 5.0 V for a 1; note that I'm quoting output ranges, not input ranges). The acceptable voltage ranges were designed such that a valid 0 signal, when combined with inevitable noise, would still be read as a 0 at the next stage; mid-range values are not permitted. See, e.g., http://www.interfacebus.com/voltage_threshold.html [interfacebus.com].

Op-amps designed for accurate reproduction of analog values are an entirely different creature, one where accuracy is among the primary design requirements. In contrast to digital circuits, a mid-range value is not only permissible, but expected.

So while both digital and analog logic use op-amps, the design requirements and valid signal ranges are vastly different.

## Re:Analog Computers (1)

## TD-Linux (1295697) | more than 4 years ago | (#33288442)

It's not analog in the sense that we use op amps, we still use gates

What's the difference? A gate is just a high speed high gain ultra high distortion opamp.

And worse, in this application neither high gain nor high distortion are desired properties.

## Re:Analog Computers (2, Interesting)

## tenco (773732) | more than 4 years ago | (#33286926)

Being able to do it on silicon should mean they can make them cheaply and quickly with existing fab gear. I could see these being a lot of fun for tinkerers.

Sure, you can make them cheap. But QA could be a bitch, I imagine. Simply ensuring that all the gates used operate linearly within a small error margin should be hard. And how are you going to give error margins for each output it calculates? After all, it's analog, not digital.

## Re:Analog Computers (1)

## tlhIngan (30335) | more than 4 years ago | (#33288488)

Analog ICs have been around since they put two transistors on a base. There's nothing new about an analog computer, other than maybe putting all the pieces together onto a single piece of silicon, but analog ICs are plentiful. The lowly op-amp is a very common one, and there are often transistorized equivalents for many passive components (because making a transistor is many times easier than making a resistor/capacitor/inductor in silicon, and the transistor version has better stability and specifications than what's possible by building the passive components in silicon directly).

Heck, the 555 is a very common analog IC - it just has a flip-flop on its output as its sole digital component. And nevermind the DACs and ADCs and other mixed-signal ICs out there.

Of course, if they managed to do this using digital IC fab technology (analog ICs are very "big" compared to modern digital deep-submicron technology), that'll be a huge breakthrough.

## Re:Analog Computers (2, Interesting)

## Anonymous Coward | more than 4 years ago | (#33286554)

Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

Calling it analog is an insult.

## Re:Analog Computers (0)

## Anonymous Coward | more than 4 years ago | (#33286640)

Exactly. Thank you :)

## Re:Analog Computers (1)

## ByOhTek (1181381) | more than 4 years ago | (#33287258)

I am assuming that, by your statement, you mean that probability computing can be analog or digital, and is not definitively one or the other? I was reading your post, and I first thought you were saying that it is a third category (which makes no sense).

But, that being said, why is calling it analog an insult? If analog (continuous) logic/numbers are being used rather than digital (discrete) logic/numbers, then analog is not an insult, simply accurate - it is describing how it works, not what it is focused on doing.

## Re:Analog Computers (2, Informative)

## denobug (753200) | more than 4 years ago | (#33289522)

The whole idea is to use fewer gates and less energy to come up with the same statistical result in silicon. Organizing the gates in different structures will yield different probabilities of producing errors. So in theory, with enough empirical data, we can safely predict the probabilities of an error coming from a certain arrangement. That is the beauty of statistics, after all: it does not have to be dead accurate as long as we are within the margin of error. The results will still use the base signals of 0 and 1, except they now represent certain probabilities instead of a hard 0 or 1 bit.

Yes, the theory is new, so it will be hard to validate, but it would certainly be interesting to see how it works out in real-life applications.

## It uses analog signals internally (1)

## Viol8 (599362) | more than 4 years ago | (#33287932)

Ergo, it's an analog system. What those signals represent is irrelevant.

## I bet Windows has one of these! (0)

## Anonymous Coward | more than 4 years ago | (#33286582)

I bet it's set to 'probably' blue screen, lol.

## Oh hilarious (1)

## Viol8 (599362) | more than 4 years ago | (#33287984)

Those BSOD jokes were old 10 years ago. Did your time machine take a wrong turn and you ended up in 2010 instead of 1995?

## Mod shit down (1, Interesting)

## Nicolas MONNET (4727) | more than 4 years ago | (#33286674)

It's got absolutely nothing to do with analog computers. At all. The first application cited is even digital storage.

## Re:Mod shit down (1)

## timgoh0 (781057) | more than 4 years ago | (#33287082)

If you're referring to the first application on the website[1], yes, it does mention error correction for digital storage. However, TFA refers to an entirely different application. Also, the paper linked to somewhere below suggests that it is some kind of analog computer on chip. Which is amazing, because it's pretty difficult to get high densities and still preserve low noise levels.

[1] http://www.lyricsemiconductor.com/products.htm [lyricsemiconductor.com]

## Re:Analog Computers (2, Interesting)

## plcurechax (247883) | more than 4 years ago | (#33288738)

It would seem that they have reinvented the analog computer, but this time entirely on a chip.

Actually, it would be sweet to have an equivalent of a CPLD or FPGA [wikipedia.org] for analog electronics, where an entire analog sub-system could be reduced to a single chip, reducing the cost and board real estate for use in low-cost electronics and reducing noise levels. Maybe it's my math background, or working in scientific computing, but being able to work natively with continuous numbers, versus discrete representations that are often only approximate (a la floating point numbers in a digital computer), would be nice.

If such a computing device could be scaled up in "logical unit" density and speed like we've seen with digital computers, they could prove useful in a number of applications. Depending on the quality of noise (undefined or unintentional variation in numeric values), and its management, it might prove fruitful for scientific simulation such as weather forecasting, fluid dynamics, and any other model of physical conditions which are more accurately described in continuous numbers (i.e. the Real [wikipedia.org] number domain).

## Re:Analog Computers (0)

## Anonymous Coward | more than 4 years ago | (#33289216)

...errors due to natural processes like capacitive coupling.

What are the odds of that?

## There are 10 kinds of people in the world.. (5, Funny)

## Deus.1.01 (946808) | more than 4 years ago | (#33286374)

12.5% that understand binary, 87.5% that don't...

## Re:There are 10 kinds of people in the world.. (3, Funny)

## jimicus (737525) | more than 4 years ago | (#33286652)

Probably.

## Re:There are 10 kinds of people in the world.. (4, Funny)

## Thanshin (1188877) | more than 4 years ago | (#33286790)

Probably.

User: Are we in the right road to the beach?

Google maps: Probably.

User: The fuck?... Is this the beach road or not.

Google maps: I'd say yes...ish. Most likely.

User: The road is cut! It ends like right here!

Google maps: Let me change my first answer to "I wouldn't bet on it. Much. I wouldn't bet much on it. Ok no, it's not likely to be the road. I'm turning off now. Good luck!"

## Re:There are 10 kinds of people in the world.. (1)

## WED Fan (911325) | more than 4 years ago | (#33287320)

## Re:There are 10 kinds of computers in the world.. (0)

## Anonymous Coward | more than 4 years ago | (#33287420)

...and out of those, 12.5% understand binary at any given time, and 87.5% do not...

## A Computer for Truth Challenged Scientists? (2, Funny)

## Handbrewer (817519) | more than 4 years ago | (#33286392)

## Re:A Computer for Truth Challenged Scientists? (1)

## Sockatume (732728) | more than 4 years ago | (#33286404)

No.

## Re:A Computer for Truth Challenged Scientists? (0)

## Anonymous Coward | more than 4 years ago | (#33286548)

Actually, no. It corrects the models to fit the results - on the fly.

## may i just say (0, Offtopic)

## martas (1439879) | more than 4 years ago | (#33286448)

## Re:may i just say (1)

## Deus.1.01 (946808) | more than 4 years ago | (#33286500)

Did the programmers who got their first microprocessors with built-in divide and multiply instructions feel the same as you? ;)

## Re:may i just say (1)

## poetmatt (793785) | more than 4 years ago | (#33286560)

can I get a simpler explanation of what this can do for you? I understand it says it will be used for probability, but what does that really let you *do*?

## Re:may i just say (1)

## martas (1439879) | more than 4 years ago | (#33286608)

## Re:may i just say (0)

## Anonymous Coward | more than 4 years ago | (#33286762)

Probabilistic computing is widely used already. For example, what's the PROBABILITY, given past categorizations, that an email is spam? Software can answer that question, but because strictly-digital general purpose processors are designed to say "given these two definite operands, the definite answer is x" they're not especially efficient at answering this type of question. Lyric's stuff is. Instead of a GP processor saying "xor(1,0)=1", the kind of thing that takes a cycle or two, Lyric's technology gives the answer to xor(.5,.5) in the same amount of time.
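The xor(.5,.5) example generalizes: treat each input as the probability that a bit is 1, and (assuming the inputs are independent) each gate outputs the probability that its digital counterpart would emit a 1. A minimal sketch of that idea, not Lyric's actual circuit design:

```python
# "Soft" logic gates over probabilities instead of hard bits.
# Assumption: inputs are independent Bernoulli probabilities; each
# gate then outputs the probability that its digital counterpart
# would output 1. A sketch of the concept, not Lyric's design.

def p_and(p, q):
    return p * q              # P(A and B)

def p_or(p, q):
    return p + q - p * q      # P(A or B)

def p_xor(p, q):
    return p + q - 2 * p * q  # P(exactly one of A, B)

# Hard bits fall out as the special case where p is 0 or 1:
print(p_xor(1, 0))      # 1
print(p_xor(0.5, 0.5))  # 0.5: maximally uncertain in, uncertain out
```

Note that ordinary digital logic is recovered as the boundary case, which is why such gates can still pass "hard" bits through unchanged.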

## Re:may i just say (3, Funny)

## dominious (1077089) | more than 4 years ago | (#33286638)

as a machine learning person

This either means:

You are a person who is learning from a machine or....

You are a learning machine who is now referring to itself as person! You also get excited about probabilities and you are posting on /.

A.I. has gone too far...

## Re:may i just say (2, Funny)

## Chris Burke (6130) | more than 4 years ago | (#33287956)

You are a learning machine who is now referring to itself as person! You also get excited about probabilities and you are posting on /. A.I. has gone too far...

On the plus side, it sounds like the robot revolution is going to be stymied for the same reason as my productivity. Destroy all humans! After I refresh /. one more time...

## what is the probability (0)

## Anonymous Coward | more than 4 years ago | (#33286498)

that this thing becomes a market hit ?

## Analog computers live again!! (1, Insightful)

## bradley13 (1118935) | more than 4 years ago | (#33286510)

## Re:Analog computers live again!! (1, Informative)

## Anonymous Coward | more than 4 years ago | (#33286572)

It's not the same kind of analog. It's analog in the sense that it operates on things between 1 and 0, but it still uses logic.

## Re:Analog computers live again!! (1, Interesting)

## Anonymous Coward | more than 4 years ago | (#33286606)

Just like nobody needs enough vector float computations and SIMD instructions at once to justify making a card unit that does a @$%#$ ton of them at once. This chip, in a PCIE card could make a lot of sense.

## Re: (0)

## Anonymous Coward | more than 4 years ago | (#33286704)

This would be ideal for mobile telephones and GPS devices. Signal reception, noise cancellation and error correction can all be done faster and with less energy when done in analog.

## Re:Analog computers live again!! (0)

## Anonymous Coward | more than 4 years ago | (#33287614)

I agree about the solution in search of a problem bit. This is a lot like the NoSQL school of thought where people comment that if you yank all error correction code, the stuff runs like blue blazes.

Yes, it will run faster. But would a bank or a business want to trust results from chips built to have little to no error correction? I sure as heck wouldn't. Just as I want databases to have basic integrity (which NoSQL based systems toss in return for performance), CPUs need to be able to come up with the same value after an extensive calculation 100% of the time. Not 99%. Not 99.99999%.

## Re:Analog computers live again!! (1)

## Dishevel (1105119) | more than 4 years ago | (#33288322)

CPUs need to be able to come up with the same value after an extensive calculation 100% of the time. Not 99%. Not 99.99999%.

But errors still happen no matter what you think. I haven't seen a system that really, truly runs at 100%. Have you?

## Re:Analog computers live again!! (1)

## Rich0 (548339) | more than 4 years ago | (#33289002)

I think the concept is a good one.

An area this technology might be used in could be embedded controllers, which are not general-purpose devices.

If you're building a thrust vectoring system for a plane, and the servos have an accuracy of 1%, then it is more important to deliver more frequent servo updates than to deliver those updates with 0.0001% accuracy. If your device is attached to a sensor that has 5% manufacturing tolerances then you may not need even 8-bit precision on the math.

In the IS discrete-math world we tend to view numbers as precise figures. However, most numbers that computers deal with are actually not discrete quantities, and they can have substantial levels of error. If you can optimize a system to make it faster and cheaper and have it consume less power at the cost of a level of error that is still negligible compared to the error already in the system, then that is a good move.

However, I'm not sure this will ever make sense for GP computing, unless you can make it part of a standardized coprocessor or something like that. In GP computing it is hard to know where error can be accepted, unless this is specified by the programmer.

GCC has -ffast-math, and I see this as something similar.

## Probability in computers: it's called a float (4, Insightful)

## Z8 (1602647) | more than 4 years ago | (#33286526)

The article mentions Bayesian calculations. Can these computers really speed up those calculations? Nowadays Bayesian calculations usually involve thousands of iterations of a technique called Markov Chain Monte Carlo [wikipedia.org] (MCMC) unless the distributions in question are conjugate priors [wikipedia.org] . The simulation then converges to the right answer.

The issue I see is that all these techniques are just math. They are either analytic (conjugate priors) or require strict error bounds in order to get sensible answers (MCMC). There's no separate system of math that Bayesians use. Like many others, Bayesians just need quick, reliable floating point mathematics. So anyway, I don't see how this can help Bayesian statisticians, unless it also revolutionizes engineering, physics, etc.

## Re:Probability in computers: it's called a float (1)

## ModelX (182441) | more than 4 years ago | (#33287270)

I've been dealing with Bayesian methods for a few years, too. I understand the goal of the hardware is not to run everything that is being sold as Bayesian methods. Basically Bayesian calculations mean computing conditional probabilities, which usually gets down to a ton of multiplications and additions. If the analog hardware can produce results for a particular subproblem with sufficient accuracy, then you are saving a lot of power and time. If it can produce estimates that are not entirely accurate but within sufficient bounds, then you can still avoid a whole lot of digital computations by narrowing down on possible solutions.

## Re:Probability in computers: it's called a float (0)

## Anonymous Coward | more than 4 years ago | (#33287432)

Right: and it ends up calculating estimates with better-than-good-enough error bounds in a matter of a few cycles.

## Re:Probability in computers: it's called a float (1)

## CptNerd (455084) | more than 4 years ago | (#33287444)

## Re:Probability in computers: it's called a float (1)

## femto (459605) | more than 4 years ago | (#33287768)

## Re:Probability in computers: it's called a float (1)

## mea37 (1201159) | more than 4 years ago | (#33288068)

I suspect the variable you're leaving out is that, in a binary computer context, "floating point math" usually means IEEE floating point, which is a very different animal from the abstract concept that comes to mind when you say "floating point math". Even when it doesn't mean IEEE floating point, every binary floating point implementation is a compromise with some combination of limited performance, limited range, and limited precision.

Consider that IEEE floating point has no exact representation of numbers like 0.1; this may not matter if it's the final answer, because you're likely to limit the significant figures displayed sufficiently that the computer will appear to have reached exactly the right answer; but if that value is a variable in the middle of a long, complex chain of calculations, errors may accumulate and you may not be so happy with the result.

For some problems, analog computers are and always have been better at performing the required calculations. That, in and of itself, is nothing new.
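The accumulation effect the parent describes is easy to check from a Python prompt: after only a thousand additions the representation error of 0.1 is already visible.

```python
# 0.1 has no exact binary floating point representation, so the tiny
# representation error compounds as additions accumulate.

total = 0.0
for _ in range(1000):
    total += 0.1

print(total == 100.0)      # False: the accumulated error is visible
print(abs(total - 100.0))  # small but nonzero

# Display rounding is what usually hides the error:
print(0.1 + 0.2)           # 0.30000000000000004
```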

## Re:Probability in computers: it's called a float (1)

## TD-Linux (1295697) | more than 4 years ago | (#33288402)

Then try to implement mixers and actual logic.

Then embed it into a tiny circuit amid an extremely noisy environment.

Of course, the last two are just academic, as you're never even going to manage the voltage regulator without some extreme equipment.

## Re:Probability in computers: it's called a float (4, Informative)

## Frequency Domain (601421) | more than 4 years ago | (#33288096)

[...] Nowadays Bayesian calculations usually involve thousands of iterations[...]. The simulation then converges to the right answer.

The convergence you refer to is asymptotic. In practice it takes about 10000 iterations to get around a 1% bound on a single probability point estimate, and a factor of a hundred for each order of magnitude improvement. On top of that, if you're dealing with multiple distributions the overall expectation is not just a simple function of the component expectations unless the whole system is linear, you need to use convolution to combine results. And on top of that, lots of interesting problems are based on order statistics, not means/expectations. Having hardware that correctly manipulates distributional behavior in a few CPU cycles would blow the doors off of MCMC.
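The scaling is easy to see with a plain Monte Carlo estimate (simpler than MCMC, but the convergence argument is the same): the standard error of a probability estimate from N independent samples is sqrt(p(1-p)/N), so 100x the samples buys only 10x the accuracy.

```python
# Standard error of a Monte Carlo probability estimate is
# sqrt(p * (1 - p) / N): each 100x increase in sample count buys
# only a 10x reduction in error. Plain Monte Carlo shown here for
# simplicity; MCMC has the same asymptotic scaling.
import math
import random

def estimate_p(p_true, n, rng):
    """Estimate P(event) = p_true from n simulated Bernoulli draws."""
    hits = sum(1 for _ in range(n) if rng.random() < p_true)
    return hits / n

rng = random.Random(42)
p = 0.3
for n in (100, 10_000, 1_000_000):
    se = math.sqrt(p * (1 - p) / n)  # theoretical standard error
    print(n, estimate_p(p, n, rng), round(se, 5))
```

Running this shows the estimate tightening by roughly one decimal digit per 100x samples, which is the cost a distribution-native chip would sidestep.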

## Re:Probability in computers: it's called a float (1)

## Ambitwistor (1041236) | more than 4 years ago | (#33288530)

Yeah, fine, we'd all like to compute our likelihoods faster, and you can imagine hardware which does it, but it's not clear from the article why doing this in analog is superior. Or even how you can avoid sampling algorithms like MCMC using this approach. How does making probabilities analog get you a joint probability distribution over a parameter space, let you marginalize parameters to a lower dimensional distribution, and all the other things MCMC is for? I have a feeling that this chip is targeted at Bayesian networks, where you're just propagating conditional probability values down a graph, rather than at MCMC.

## Analogue Computing (2, Insightful)

## Anonymous Coward | more than 4 years ago | (#33286534)

This is potentially a great advance. Everyone knows that analogue computing can greatly outperform digital computing (now each bit has a continuum of states so stores infinitely more data, each operation on 2 'bits'....you get the idea)....but there are many issues to resolve i.e.

1) Error correction - every 'bit' is in an erroneous state

2) Writing code for the thing - anyone got analogue algorithm design on their CVs?

## Re:Analogue Computing (0)

## Anonymous Coward | more than 4 years ago | (#33286778)

It could well create a whole new area of computing development.

Analogue computing is the eventual goal for computing, and I'm hopeful this will be one huge leap towards that goal.

Hell, even Quaternary Computing would be better than crappy Binary.

The only reason people won't make the switch is because it would mean changing everything to handle 4-bit.

And for the most part, all of that crap could be handled in direct hardware emulation like 64 bit processors emulating 32 bit.

God forbid you mention 3D motherboards to some of them, they'd go insane and probably start punching walls.

Analogue hard drives would have been a possibility at this point in time if it wasn't for SSDs coming out. HDDs are reaching hard limits that are making them much more expensive to build and more prone to error due to more compact sizes. Moving small things at high speed just isn't a good combination.

But SSDs came out because the methods to make analogue HDDs are quite frustrating to build.

Even fixed to something like 4 bits can be a challenge (10, 5, -5, -10). And let's not even go near Ternary when it comes to magnets...

This method would be entirely possible, but it requires rethinking the firmware layer a bit. Too much work apparently, so let's make stupidly expensive hard drives with complicated and easier-to-break methods! HUZZAH ANALOGUE BRAIN LOGIC! Oh wait...

## Re:Analogue Computing (1)

## Viol8 (599362) | more than 4 years ago | (#33287810)

"Hell, even Quaternary Computing would be better than crappy Binary."

It would make no difference - the algorithms would be identical. All you'd gain would be saved storage space as each "bit" could represent 4 values instead of 2. You'd still be dealing with a system that could only handle discrete values.

## I've been waiting.... (0)

## Anonymous Coward | more than 4 years ago | (#33286566)

This is the first step in creating an infinite improbability drive, you know...

## 1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## EdgeyEdgey (1172665) | more than 4 years ago | (#33286586)

Anyone want to guess how the others function?

Or am I on completely the wrong track here.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## EdgeyEdgey (1172665) | more than 4 years ago | (#33286626)

0.7 NOT = 0.3

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (4, Insightful)

## selven (1556643) | more than 4 years ago | (#33286664)

If 0.8 AND 0.6 = 0.7 (I assume you're taking the average here), then 1 AND 0 would be 0.5, when it's supposed to be 0. The only answers I would accept for 0.8 AND 0.6 are 0.6 (min) and 0.48 (multiplication). An OR gate is constructed by attaching NOT (1 - x here) gates to the inputs and output of an AND gate, yielding 0.8 or 0.92 depending on which rule you go with.
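Both AND conventions, plus the OR-via-NOT-and-AND construction, fit in a few lines. This is a sketch of the two rules being discussed, not a claim about what Lyric's gates actually compute:

```python
# Two common "soft AND" semantics over values in [0, 1]:
# product (independent-probability reading) and min (fuzzy-logic
# reading). NOT is 1 - x under both, so OR follows from De Morgan:
#   OR(a, b) = NOT(AND(NOT(a), NOT(b)))

def or_from_and(a, b, and_fn):
    return 1 - and_fn(1 - a, 1 - b)

def prod_and(a, b):
    return a * b

min_and = min

print(round(prod_and(0.8, 0.6), 2))               # 0.48
print(min_and(0.8, 0.6))                          # 0.6
print(round(or_from_and(0.8, 0.6, prod_and), 2))  # 0.92 (= a + b - ab)
print(round(or_from_and(0.8, 0.6, min_and), 2))   # 0.8  (= max(a, b))
```

Under the product rule the probabilistic identities hold only if the inputs are independent, which is the usual caveat with this style of gate.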

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## EdgeyEdgey (1172665) | more than 4 years ago | (#33286760)

XOR is a bit of a bugger to figure out, so I will cheat and use this [wikipedia.org] .

That's all the gates covered.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (0)

## Anonymous Coward | more than 4 years ago | (#33286830)

The graph of XOR output on the z axis vs inputs 0 and 1 on the x and y respectively looks like a saddle.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## bondsbw (888959) | more than 4 years ago | (#33286854)

I believe you are correct with the multiplication rule. According to the article,

Whereas a conventional NAND gate outputs a "1" if neither of its inputs match, the output of a Bayesian NAND gate represents the odds that the two input probabilities match. This makes it possible to perform calculations that use probabilities as their input and output.

But I'm not clear as to what "the odds that the two input probabilities match" means... that implies, to me, that it returns a 1 if the inputs are identical and 0 if not. I'm thinking it instead means, "Given events A and B with inputs p(A) and p(B), Bayesian NAND represents p(A and B)." Or perhaps p(A nand B)... I don't know.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## John Hasler (414242) | more than 4 years ago | (#33287834)

It means that the reporter hasn't the foggiest idea how it works but had to write some sort of balderdash anyway.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (3, Informative)

## Anonymous Coward | more than 4 years ago | (#33287234)

It's called fuzzy logic [http://en.wikipedia.org/wiki/Fuzzy_logic].

One way to define it is NAND(x,y) = 1-MIN(x,y)

and the rest follows using usual logic rules.

I have no idea if that's what they do though.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (0)

## Anonymous Coward | more than 4 years ago | (#33288550)

thus the key code that will unlock my cyber overloards and smite you all with your own laserbeams

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (0)

## Anonymous Coward | more than 4 years ago | (#33286748)

Actually:

0.8 AND 0.6 = 0.48

0.8 OR 0.6 = 0.92 (a + b - ab)

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (2, Informative)

## ikkonoishi (674762) | more than 4 years ago | (#33286984)

I think it is more of a probability thing than what you are thinking of. The return is the probability that the two values are the same. So 0.5 AND 0.5 would be 100%, while 0.5 AND 0.6 would be like 80% or something, depending on the allowed error and uncertainty.

Thinking of this reminded me of BugBrain [biologic.com.au] . If you want to play with Bayesian logic it has a pretty good set of examples including building a neural network to perform simple character recognition.

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## tenco (773732) | more than 4 years ago | (#33287164)

## Re:1 AND 1 = 1 : 0.8 AND 0.6 = 0.7 (1)

## tenco (773732) | more than 4 years ago | (#33287186)

## Not Analog, and Not Digital (0)

## Anonymous Coward | more than 4 years ago | (#33286620)

Probability computing is not analog computing. Nor is it digital. Nor is it limited to error correction and search engines. It's a new implementation of a mathematical concept that allows arbitrary logic to be implemented smaller and faster than traditional digital chips.

Calling it analog is an insult that just shows you need to read a new book. ;-)

Yay Moore's Law! :-)

## Yes it is analogue. (1)

## Viol8 (599362) | more than 4 years ago | (#33287882)

If it uses analogue signals internally then it's an analog computer, whatever those signals may represent at a higher level, in the same way that a DSP is just as digital as a crypto chip even though the binary data is used for different things.

## Awesome! (3, Funny)

## UID30 (176734) | more than 4 years ago | (#33286622)

## Re:Awesome! (1)

## natehoy (1608657) | more than 4 years ago | (#33286654)

Sorry, we'll never have one in America. We can't make proper tea, and I don't believe they can run on coffee.

We shall never experience the WHUMP-thunk of a whale and a pot of petunias landing on our shores, unless one of the Brit boffins makes a mistake and as you know that never happens.

## Re:Awesome! (1)

## imakemusic (1164993) | more than 4 years ago | (#33287030)

That's not likely to happen any time soon.

So, next week.

## Re:Awesome! (0)

## Anonymous Coward | more than 4 years ago | (#33288904)

I would hazard a guess at something like 42 weeks

## Re:Awesome! (1)

## dangitman (862676) | more than 4 years ago | (#33289012)

How much longer before we get the "infinite improbability machine"?

As soon as someone hooks it up to a nice, hot cup of tea.

## Douglas Adams would be proud. (2, Insightful)

## nielsenj (313987) | more than 4 years ago | (#33286632)

## Re:Douglas Adams would be proud. (2, Funny)

## RivenAleem (1590553) | more than 4 years ago | (#33286782)

My Bistromathic drive makes that look like an electric pram

## Remember Slide Rules? (1)

## uncleroot (735321) | more than 4 years ago | (#33286672)

## Re:Remember Slide Rules? (0)

## Anonymous Coward | more than 4 years ago | (#33286950)

The difference is in speed and power consumption. Lyric does it for less of both. Much less of both. Shitloads much less of both.

## Re:Remember Slide Rules? (1)

## maxwell demon (590494) | more than 4 years ago | (#33287090)

Speed. And die size, i.e. cost.

Indeed, I'd expect it to have quite limited precision (which usually is OK in probability calculations; you generally don't really care if the probability is 0.34654323 or 0.34654324).

## Re:Remember Slide Rules? (0)

## Anonymous Coward | more than 4 years ago | (#33287364)

The difference between those two numbers is actually accuracy. In general, analog signals have unlimited precision, but limited accuracy. In contrast to a digital signal, which has extremely limited precision (one significant figure), but nearly 100% accuracy. They're the two extremes.

## The actual thesis (4, Informative)

## Mathiasdm (803983) | more than 4 years ago | (#33286810)

## Re:The actual thesis (4, Funny)

## Born2bwire (977760) | more than 4 years ago | (#33286892)

By Ben Vigoda, Co-Founder and CEO: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf [mit.edu]

Huh, I thought he was dead.

## Re:The actual thesis (1)

## John Hasler (414242) | more than 4 years ago | (#33287892)

Some actual facts. Thank you.

## Re:The actual thesis (1)

## TD-Linux (1295697) | more than 4 years ago | (#33288190)

I'm a bit worried about them being completely fabless. I'm sure all their circuits work in SPICE, but how is this going to deal with real-world noise, especially when embedded on some other digital chip? The PowerPoint explicitly states that it is adversely affected by noise, especially the sudden spikes caused by digital switching...

I was about to post the slideshow myself, but I see you beat me to it

## Analog computers sound so much more natural than d (1)

## trevc (1471197) | more than 4 years ago | (#33286960)

## Re:Analog computers sound so much more natural tha (1)

## StripedCow (776465) | more than 4 years ago | (#33288226)

why? can you elaborate?

## Sounds like a bitstream chip, but with more issues (2, Interesting)

## the Haldanian (700979) | more than 4 years ago | (#33287200)

It's vaguely familiar, but since no two circuits are *truly* identical at the analog layer, *and* they change as the temperature changes, people used digital instead, where 'mostly 0' is still '0' and 'mostly 1' is still '1' regardless. Otherwise you can't mass produce them.

Of more interest is people using analog-alike bitstreams, where the average number of 1's vs 0's in a random stream is the amplitude of the analog wave. They then blend the input streams together to produce the output stream. I've mostly seen this done by Royal Holloway University to produce neural chips that *don't* need squillions of interconnections - they just blend probability streams. Looks like people are playing with optical ones now too. Why not put a story up about that instead?
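The bitstream scheme the parent describes is sometimes called stochastic computing: each value is encoded as the density of 1s in a random bit stream, and multiplication falls out of a plain AND gate applied to two independent streams. A minimal sketch in Python (the function names and stream length here are illustrative, not taken from any of the chips mentioned):

```python
import random

def bitstream(p, n, rng):
    """Generate n random bits, each 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a = bitstream(0.6, n, rng)  # encodes 0.6
b = bitstream(0.5, n, rng)  # encodes 0.5

# ANDing two independent streams yields a stream whose density of 1s is
# the product of the input probabilities: 0.6 * 0.5 = 0.3 (approximately).
product = [x & y for x, y in zip(a, b)]
print(stream_value(product))
```

The appeal is exactly what the parent notes: a single gate per operation and no wide buses, at the cost of precision that only improves with the square root of the stream length.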

## step closer to Probability Drive? (0, Redundant)

## lampsie (830980) | more than 4 years ago | (#33287224)

## Whats next then? (1)

## kurt555gs (309278) | more than 4 years ago | (#33287418)

First probability on a chip, next an improbability drive!

## Re:Whats next then? (0)

## Anonymous Coward | more than 4 years ago | (#33287544)

## 30 times smaller? (1)

## Orgasmatron (8103) | more than 4 years ago | (#33287472)

If one says that something is 50% smaller, we understand that to mean half the size. And if one says that something is 3000% smaller, or 30 times smaller, should we not understand that as not only taking no space, but actually giving us 29 times the original space back?

Unless we are making a three-part comparison, which has new perils. If B is half the size of A, and C shrinks relative to B by the same step that B shrinks relative to A, repeated 30 times over, then we may understand the size of C as 0.5^30 times the size of A. However, if B is 99% of the size of A, then having C be 30 times smaller than B can mean that C is 70% of the size of A, or maybe that C is 0.99^30 times the size of A.
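The gap between those two readings of the 99% case is easy to make concrete (a quick sketch, using only the numbers from the paragraph above):

```python
# Two readings of "C is 30 times smaller than B", where B is 99% the size of A.
b = 0.99  # B as a fraction of A

# Reading 1: subtract B's 1% shortfall 30 times in total.
linear = 1.0 - 30 * (1.0 - b)

# Reading 2: apply the 0.99 shrink factor 30 times over.
compound = b ** 30

print(round(linear, 2), round(compound, 2))  # roughly 0.7 vs 0.74
```

Even in this mild case the two readings disagree by several percent, and the disagreement grows quickly as the ratios get larger.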

Perhaps we should stick to saying what we mean, with things like "a chip for correcting errors in flash memory claimed to be one thirtieth the size of a digital logic-based equivalent".

## Re:30 times smaller? (1)

## MozeeToby (1163751) | more than 4 years ago | (#33288240)

While I agree with you that it is unclear or at least not intuitively obvious, the plain fact is that it has been in use for a long time and is very common. It's not a recent development (Jonathan Swift is known to have used the construction in the early 1700s) nor is it rare (almost 500,000,000 Google results for 'times less than'). For better or worse, language is not tied directly to math, nor is the meaning of a phrase necessarily tied to the meaning of the individual words that make it up.

## Re:30 times smaller? (1)

## StripedCow (776465) | more than 4 years ago | (#33288286)

10x smaller means: one tenth of the original size.

Hence, 50% smaller means: double the size.

Simple, eh?

## Not the only one... but wrong and a bit silly (0)

## Anonymous Coward | more than 4 years ago | (#33288382)

You know what 30 times smaller means. In fact, you instinctively know it. Manufacturing the chip doesn't expand the universe by 29 times the size of a regular chip. We don't know of any way to create more space, so there isn't really any other way to interpret that expression. It conveys no misinformation... It is just silly to nitpick about that.

It is different from nitpicking about whether 10 is "ten times larger than" 1 or "ten times as large as" 1, because there one reading is correct and the other conveys misinformation (though the difference isn't that relevant: those aren't usually meant to be exact statements to begin with). Here it is just worthless.

## Re:Not the only one... but wrong and a bit silly (1)

## geckipede (1261408) | more than 4 years ago | (#33289536)

## Heart of Gold? (1)

## vortechs (604271) | more than 4 years ago | (#33287752)

## Fuzzy Logic (1)

## steam_cannon (1881500) | more than 4 years ago | (#33288538)

## Re:Fuzzy Logic (2, Funny)

## maharg (182366) | more than 4 years ago | (#33289480)

If you're into the concept of fuzzy logic, then I strongly suggest reading Aldiss' Barefoot in the Head if you've not already done so.

I also recommend not reading it.