
'Approximate Computing' Saves Energy

Soulskill posted about 4 months ago | from the 1+1=3-for-sufficiently-large-values-of-1 dept.

Programming 154

hessian writes "According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."

154 comments

meanwhile... (3)

i kan reed (749298) | about 4 months ago | (#45729633)

The majority of CPU cycles in data centers goes to looking up and filtering specific records in databases (or maybe parsing files, if you're into that). They can possibly save energy on a few specific kinds of scientific computing.

Numerical computation is pervasive (4, Informative)

l2718 (514756) | about 4 months ago | (#45729711)

This is not about data centers and databases. This is about scientific computation -- video and audio playback, physics simulation, and the like.

The idea of doing a computation approximately first, and then refining the results only in the parts where more accuracy is useful, is an old one; one manifestation is multigrid [wikipedia.org] algorithms.

Re:Numerical computation is pervasive (2)

DutchUncle (826473) | about 4 months ago | (#45729973)

Isn't Newton-Raphson an "approximation"?
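
(It is: Newton-Raphson refines a guess iteratively, and a looser tolerance means fewer iterations. A minimal, purely illustrative Python sketch, not from TFA:)

    def newton_sqrt(a, tol=1e-6):
        """Approximate sqrt(a) by Newton-Raphson; a looser tol means less work."""
        x = a if a > 1 else 1.0            # crude starting guess
        while abs(x * x - a) > tol * a:
            x = 0.5 * (x + a / x)          # Newton step for f(x) = x^2 - a
        return x

    print(newton_sqrt(2, tol=1e-2))   # rough answer, a couple of iterations
    print(newton_sqrt(2, tol=1e-12))  # accurate answer, a few more iterations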

Re:Numerical computation is pervasive (5, Interesting)

raddan (519638) | about 4 months ago | (#45730195)

Not to mention floating-point computation [wikipedia.org], numerical analysis [wikipedia.org], anytime algorithms [wikipedia.org], and classic randomized algorithms like Monte Carlo algorithms [wikipedia.org]. Approximate computing has been around for ages. The typical scenario is to save computation, nowadays expressed in terms of asymptotic complexity ("Big O"). Sometimes (as is the case with floating point), this tradeoff is necessary to make the problem tractable (e.g., numerical integration is much cheaper than symbolic integration).

The only new idea here is applying approximate computing specifically to trade precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic ones.
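
As a concrete illustration of the classic precision-for-computation trade mentioned above, here is a small Monte Carlo sketch (illustrative only, not from TFA): fewer samples means less work and a rougher estimate of pi.

    import random

    def approx_pi(samples):
        """Estimate pi by sampling the unit square; error shrinks roughly as 1/sqrt(samples)."""
        inside = sum(1 for _ in range(samples)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * inside / samples

    print(approx_pi(1_000))      # cheap, rough
    print(approx_pi(1_000_000))  # ~1000x the work for only ~30x less error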

Re:Numerical computation is pervasive (1)

tedgyz (515156) | about 4 months ago | (#45730617)

Holy crap dude - you hit the nail on the head, but my brain went primal when you brought up the "Big O".

Re:Numerical computation is pervasive (1)

raddan (519638) | about 4 months ago | (#45730881)

My mind inevitably goes to this [youtube.com] when someone says "Big O". Makes being a computer scientist somewhat difficult.

Re:Numerical computation is pervasive (-1)

Anonymous Coward | about 4 months ago | (#45730299)

This is another "good enough" trade-off. We saw it with NoSQL, and look how well NoSQL worked for the ACA backend. In theory, this type of computing will be used for things that are "good enough", similar to using a genetic algorithm for the travelling salesman problem. However, in real life, because it is cheaper, we will see this used on things where it shouldn't be, such as financial transactions.

I'm tired of halfway measures becoming the norm in computing, and how they creep in, just because it is cheaper.

Re:Numerical computation is pervasive (1)

egcagrac0 (1410377) | about 4 months ago | (#45730567)

we will see this used on things where it shouldn't be, such as financial transactions.

I, for one, am OK with 1/10000th of a dollar accuracy.

Heck, within 1/640th of a cent ought to be good enough for anybody.

Re:Numerical computation is pervasive (1)

parkinglot777 (2563877) | about 4 months ago | (#45730713)

Then you're creating a problem if this goes into financial transactions. If each transaction is truncated by up to 1/10000 of a dollar, how much would it be off after 1 million transactions? And even a one-cent difference in the accounts has to be reconciled in multiple places just to prove where it came from. You're thinking too narrowly.

Back to the GP: I do NOT see how this would end up in financial transactions anywhere. It's also a rule of thumb to keep the fractional part separate from the whole-currency value. Simple transaction arithmetic wouldn't need the kind of approximate scientific computation described in the article.

Re:Numerical computation is pervasive (1)

egcagrac0 (1410377) | about 4 months ago | (#45730913)

how much would it be off after 1 million transactions?

That depends entirely on the dataset.

If all the transactions are dealing in units no smaller than $.01, you should see no error truncating to the nearest $.0001, no matter the number of transactions.

If you're worried, just create an account called "Salami" and post the rounding errors there.
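
(A quick sketch of the claim above that whole-cent amounts survive truncation to the nearest $0.0001 unchanged; the amounts are hypothetical, illustration only:)

    from decimal import Decimal, ROUND_DOWN

    amounts = [Decimal("19.99"), Decimal("0.01"), Decimal("1234.56")]   # whole-cent inputs
    truncated = [a.quantize(Decimal("0.0001"), rounding=ROUND_DOWN) for a in amounts]

    # Truncating to $0.0001 cannot change a value that is already a whole number of cents.
    assert sum(truncated) == sum(amounts)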

Re:Numerical computation is pervasive (0)

Anonymous Coward | about 4 months ago | (#45730863)

This won't work for currencies like Bitcoin, where an amount down in the 10^-8 range could have solid value in the near future as the currency matures and increases in value.

Re:meanwhile... (1)

K. S. Kyosuke (729550) | about 4 months ago | (#45729739)

Even OLAP could probably profit from this. Sometimes it doesn't matter whether the response to the question "does my profit increase correlate strongly with my sales behavior X" is "definitely yes, by 0.87" or "definitely yes, by 0.86"; the important thing is that it isn't "most likely no, by 0.03".

Also, in the era of heterogeneous machines, you ought to have a choice in that.

Re:meanwhile... (4, Informative)

ron_ivi (607351) | about 4 months ago | (#45730035)

The majority of CPU cycles in data centers goes to looking up and filtering specific records in databases

Approximate Computing is especially interesting in databases. One of the coolest projects in this space is Berkeley AMPLab's BlinkDB [blinkdb.org]. Their canonical example

SELECT avg(sessionTime) FROM Table WHERE city='San Francisco' ERROR 0.1 CONFIDENCE 95%

should give you a good idea of how/why it's useful.

Their benchmarks show that approximate computing to within 1% error is about 100X faster than Hive on Hadoop.
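
(For the curious, the underlying idea is plain sampling plus a confidence interval. A rough stand-alone sketch, not BlinkDB's actual code; the "sessionTime" column is fake, generated data:)

    import random, statistics

    session_times = [random.expovariate(1 / 300) for _ in range(1_000_000)]  # fake column

    sample = random.sample(session_times, 10_000)              # scan only 1% of the rows
    mean = statistics.mean(sample)
    stderr = statistics.stdev(sample) / len(sample) ** 0.5
    print(f"avg ~= {mean:.1f} +/- {1.96 * stderr:.1f} (95% confidence)")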

Re:meanwhile... (5, Funny)

lgw (121541) | about 4 months ago | (#45730071)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

Re:meanwhile... (1)

camperdave (969942) | about 4 months ago | (#45730127)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

We've had approximate computing since the earliest days of the Pentium CPU.

Re:meanwhile... (1)

FatdogHaiku (978357) | about 4 months ago | (#45730723)

We've had approximate computing since the earliest days of the Pentium CPU.

My favorite joke of that era was
I am Pentium of Borg.
Arithmetic is irrelevant.
Division is futile.
You will be approximated!

Re:meanwhile... (1)

nospam007 (722110) | about 4 months ago | (#45730181)

" Perhaps "approximate computing" is farther along than I imagined!"

Indeed, Excel has been doing it for 20 years.

Re:meanwhile... (0)

Anonymous Coward | about 4 months ago | (#45730751)

The reason Excel is a large program is that the CPU is imperfect. And Excel would give more correct calculations than some programming languages!

Re:meanwhile... (5, Funny)

formfeed (703859) | about 4 months ago | (#45730599)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

Sorry, that was my fault. I didn't have my ad-block disabled. They must have sent them to you instead.
Just send them to me and I will look at it.

It's a nice thought (0)

Anonymous Coward | about 4 months ago | (#45729659)

But it's ultimately impossible to build a computer that calculates with arbitrary precision. The closest approximation would be to have a pair of FPUs, one for lower precision and one for higher precision. Many GPUs already function this way.

Re:It's a nice thought (1)

Drethon (1445051) | about 4 months ago | (#45729719)

Makes me think of 8-bit calculations representing large numbers instead of 64-bit ones. The 64-bit result might be 4503599627370495, whereas the 8-bit value would be 15, with each unit of the 8-bit value representing 281474976710656. Might work, but my brain is too fried at the end of the day to think of applications...
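
(In other words, keep only the top 8 bits and remember the scale factor. A tiny sketch of that idea using the numbers from the comment; illustrative only:)

    x = 4503599627370495          # the 64-bit-ish value above (2**52 - 1)
    SCALE = 2 ** 48               # = 281474976710656, the weight of one 8-bit unit

    coarse = x >> 48              # 8-bit approximation: 15
    approx = coarse * SCALE       # reconstruct: 4222124650659840, within about 6% of x
    print(coarse, approx, (x - approx) / x)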

Re:It's a nice thought (3, Funny)

K. S. Kyosuke (729550) | about 4 months ago | (#45729751)

Actually, computers are already capable of computing with arbitrary precision - they're just incapable of computing with infinite precision.

Re:It's a nice thought (1)

viperidaenz (2515578) | about 4 months ago | (#45729837)

Except that's what these researchers are doing. They're building new instructions that perform faster but produce lower precision results.

Re:It's a nice thought (1)

Anonymous Coward | about 4 months ago | (#45730291)

Pshaw, I had one in the 4th grade. It was called a "slide rule" and I used it because I suck at memorization. Who needs multiplication tables when you have a handy tool the teacher doesn't know how to use or what it actually does?

Re:It's a nice thought (2)

russbutton (675993) | about 4 months ago | (#45730935)

Slide rule. Good to three places. Good enough to design moon rockets, the SR-71, B-52, the Golden Gate Bridge, Hoover Dam...

Re:It's a nice thought (1)

bobbied (2522392) | about 4 months ago | (#45730333)

But it's ultimately impossible to build a computer that calculates with arbitrary precision.

Excuse me, but not quite. Assuming you don't mean absolute precision, we already use multiple precisions based on need, speed, or memory footprint. We have floating-point representations of several sizes, as well as integers of varying sizes. Plus, there is nothing that prevents you from doing X-bit floating-point calculations if you want to.
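
(A quick illustration of trading precision per value, using only the standard library: round-tripping a 64-bit double through a 32-bit float. Illustrative sketch only.)

    import struct

    x = 1.0 / 3.0                                       # stored as a 64-bit double
    x32 = struct.unpack("f", struct.pack("f", x))[0]    # the same value squeezed into 32 bits

    print(x)             # 0.3333333333333333  (~16 significant digits)
    print(x32)           # 0.3333333432674408  (~7 significant digits)
    print(abs(x - x32))  # the precision you gave up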

Analog (5, Interesting)

Nerdfest (867930) | about 4 months ago | (#45729675)

This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.

My Dad targeted naval antiaircraft missiles with- (0)

Anonymous Coward | about 4 months ago | (#45730021)

- analog computers.

A Talos missile (two stages: a solid first-stage booster and an air-breathing ramjet second stage) could take out a wildly evasive, supersonic, North Vietnamese-piloted Soviet MiG with an analog computer calculating the missile trajectory.

Re:My Dad targeted naval antiaircraft missiles wit (1)

wisnoskij (1206448) | about 4 months ago | (#45730295)

Seems like a totally impractical system.

You are probably already using an algorithm that produces approximate results; add on top of that a computer that routinely makes mistakes in the name of speed, and you would think that sometimes the stars would just align and you would get a result that is completely wrong.

Re:My Dad targeted naval antiaircraft missiles wit (0)

Anonymous Coward | about 4 months ago | (#45731309)

This was during Vietnam

Digital computers would have been too slow.

Re:Analog (2)

wavedeform (561378) | about 4 months ago | (#45730261)

This was my immediate reaction as well. Analog computers do some things extremely well, and faster than could be done digitally. Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems. Back in the 1970s I worked for a small company as their chief digital/software guy. The owner of the company was a wizard at analog electronics, and instilled in me a solid respect for what can be done with analog computing.

Re:Analog (1)

egcagrac0 (1410377) | about 4 months ago | (#45730669)

Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems.

The same can be said of digital computers.

Re:Analog (1)

ezdiy (2717051) | about 4 months ago | (#45730373)

They live on, reincarnated in MLC NAND flash cells [wikipedia.org], precisely because flash was the first thing to reach lithography cost limits. It's not actually true analog, but it's close enough to keep precision.

Re:Analog (0)

Anonymous Coward | about 4 months ago | (#45730417)

Analogue computers are a nice idea.

In the closed position a transistor carries zero current and therefore doesn't burn off heat.
In the open position a transistor has zero voltage across it and therefore also doesn't burn off heat.
In a partially open position a transistor has non-zero current and non-zero voltage across it and therefore burns off heat.

So using a transistor digitally is very energy efficient: the transistor is either fully closed or fully open, and energy is wasted as heat only during the transitions.

In an analogue computer a transistor spends most of its time in a partially open position, wasting energy as heat.

Cooling an analogue computer would be a significant issue. On the other hand, switching frequency would not add any more heat to the system; power consumption is already as bad as it gets at idle.

Re:Analog (0)

Anonymous Coward | about 4 months ago | (#45730605)

Note that ECL logic also uses non-saturating transistors and burns power independently of frequency.

All receivers on differential buses (PCIe) are fundamentally analog parts, since they amplify a small differential signal up to internal logic levels. But analog electronics is much more fun than digital (I do both), and you can't do anything digital at 60 GHz and higher.

Re:Analog (3, Informative)

bobbied (2522392) | about 4 months ago | (#45730493)

This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.

Analog is not imprecise. Analog computing can be very precise and very fast for complex transfer functions. The problem with analog is that it is hard to change the output function, and the derived output is subject to drift from things like temperature changes or induced noise. So the issue is not precision.

Re:Analog (1)

wisnoskij (1206448) | about 4 months ago | (#45730503)

Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

And since every decimal is already stored in approximate form, in a normal computer, I cannot imagine it being that different.

Re:Analog (1)

wonkey_monkey (2592601) | about 4 months ago | (#45730893)

Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

What is this I don't even.

Re:Analog (1)

wisnoskij (1206448) | about 4 months ago | (#45731027)

Binary: 1 bit can be 2 values, and contains the absolute minimal amount of information possible (true or false).
Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit. So information is sent and computed far faster.
Analogue: 1 bit can be an infinite number of values, so an infinite amount more information can be sent in a single bit. So information is sent and computed far far faster.

Re:Analog (1)

Ferrofluid (2979761) | about 4 months ago | (#45731449)

Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.

No, that's not what a bit is. 'Bit' is short for 'binary digit'. A bit can, by definition, only hold one of two possible states. It is a fundamental unit of information. A decimal digit comprises multiple bits. Somewhere between 3 and 4 bits per decimal digit.
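
(For reference, the exact figure is log2(10); a one-liner:)

    import math
    print(math.log2(10))   # 3.3219...: bits of information per decimal digit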

Re:Analog (1)

DerekLyons (302214) | about 4 months ago | (#45730999)

This is also how analog computers work. They're extremely fast and efficient, but imprecise.

On the contrary - they can be extremely precise. Analog computing elements were part of both the Saturn V and Apollo CSM stack guidance & navigation systems for example. Analog systems were replaced by digital systems for a wide variety of reasons, but accuracy was not among them.

Accuracy isn't important anymore (4, Insightful)

EmagGeek (574360) | about 4 months ago | (#45729681)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

Re:Accuracy isn't important anymore (1)

l2718 (514756) | about 4 months ago | (#45729785)

I don't think you appreciate the point. In most cases, rather than multiplying 152343 x 1534324, you might as well just multiply 15x10^4 x 15x10^5 = 225x10^9. And to understand this you need to be very comfortable with what 2+2 equals exactly.
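
(For scale, a quick check of how rough that estimate actually is:)

    exact = 152343 * 1534324            # 233,743,521,132
    approx = 15 * 10**4 * 15 * 10**5    # 225,000,000,000
    print((exact - approx) / exact)     # ~0.037: under 4% off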

Re:Accuracy isn't important anymore (1)

Em Adespoton (792954) | about 4 months ago | (#45729851)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

Re:Accuracy isn't important anymore (0)

Anonymous Coward | about 4 months ago | (#45729929)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

No, it's zero.

Re:Accuracy isn't important anymore (0)

Anonymous Coward | about 4 months ago | (#45730477)

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

Not trolling, clueless. What do people say that's equal to, other than 1. I can only see it equaling something else with some imprecise math.

Re:Accuracy isn't important anymore (0)

Anonymous Coward | about 4 months ago | (#45730715)

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

Not trolling, clueless. What do people say that's equal to, other than 1. I can only see it equaling something else with some imprecise math.

.3 repeating * 3 = .9 repeating. There are a significant number of people in the world who think that .9 repeating != 1.

Re:Accuracy isn't important anymore (2)

Em Adespoton (792954) | about 4 months ago | (#45731397)

3/9 is 0.3* in decimal, which is an infinitely repeating 3. Add 3 of those together, you get an infinitely repeating 9, which, while it approaches 1 using concrete values, is not precisely 1, for the standard definition of 1. However, using approximate computing or general notation, they're the same for all intents and purposes.

This gets even more interesting when you use a different base such as binary, that doesn't have the same issues with notational conversion as base 10. Base 12 is also useful here.

In my original comment, I was pointing out that we're already teaching partial answers, and we're also already doing approximate computing. Doing both intentionally though is a different matter altogether.

Time for a few mathematicians to completely refute what I said; it's mostly a thought experiment after all -- hence the "do we care?"

Re:Accuracy isn't important anymore (1)

Em Adespoton (792954) | about 4 months ago | (#45731433)

Oh yes, and an alternative is to argue that 3/9 is in fact equivalent to 0.4 -- but 0.4 * 3 = 1.2, not 1. Or, you could argue that 3/9 is always 1/3 and has no decimal representation, as infinite sequences aren't actually representable (at which point sequences like pi become a bit of an issue, as they have no known finite representation in any number base -- that we know of).

Re:Accuracy isn't important anymore (2, Insightful)

Anonymous Coward | about 4 months ago | (#45729959)

Where the hell did you get that from? Oh yeah, it's the talking points about the Common Core approach. Too bad that is nothing like what the Common Core says. Find a single place where any proponent of the Common Core said something like that and I'll show you a quote mine where they really said "it is understanding the process that is important, of which the final answer is just a small part, because computation errors can be corrected."

Future AIs (1)

Anonymous Coward | about 4 months ago | (#45729699)

I find it interesting that most science fiction portrays an AI as having all of the advantages of sentience (creativity, adaptability, intuition) while also retaining the advantages of a modern computer (perfect recall, high computational accuracy, etc.). This kind of suggests that with a future AI, maybe that would not be the case; maybe the requirements for adaptability and creativity place sufficient demands on a system's resources (biological or electronic) that you couldn't have such a perfect combination.

Also, I'm really bored at work today, so speculation like this is my cure.

Heard this before (1, Interesting)

Animats (122034) | about 4 months ago | (#45729705)

Heard this one before. On Slashdot, even. Yes, you can do it. No, you don't want to. Remember when LCDs came with a few dead pixels? There used to be a market for DRAM with bad bits for phone answering machines and buffers in low-end CD players. That's essentially over.

Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in exchange for a big gain in headaches.

Re:Heard this before (0)

Anonymous Coward | about 4 months ago | (#45730003)

Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in exchange for a big gain in headaches.

The point here is that you use approximate computing even when error correction is not needed. For example, when your browser retrieves an image and scales it down according to the stylesheet, minor errors in the scaling algorithm can make it faster with minimally visible loss of quality.
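
(A toy sketch of that trade-off, illustrative only: a 2x downscale that just drops pixels versus one that averages each 2x2 block. The former does a quarter of the arithmetic and looks "close enough" for many images; both functions are hypothetical, not any browser's actual code.)

    def downscale_nearest(img):
        """Keep every other pixel: cheap, slightly wrong."""
        return [row[::2] for row in img[::2]]

    def downscale_box(img):
        """Average each 2x2 block: more work, more faithful."""
        return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) // 4
                 for x in range(0, len(img[0]) - 1, 2)]
                for y in range(0, len(img) - 1, 2)]

    img = [[(x * y) % 256 for x in range(8)] for y in range(8)]   # fake 8x8 grayscale image
    print(downscale_nearest(img))
    print(downscale_box(img))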

Re:Heard this before (0)

Anonymous Coward | about 4 months ago | (#45730101)

minor errors in the scaling algorithm can make it faster with minimally visible loss of quality.

Ah so jpeg then :)

Scuse me while I duck out of here...

Re: Heard this before (1)

Robin Ingenthron (3442105) | about 4 months ago | (#45730945)

As I recall, the greatest energy savings were achieved when the answer did not matter at all and the PC was unplugged from the wall, right?

Finally some better 'Ai' (0)

Anonymous Coward | about 4 months ago | (#45729723)

FPS game enemys who are more random, unpredictable, close enough, and maybe just say 'fuck this' after they see you mow down 50 of their buddies.

Re:Finally some better 'Ai' (0)

Anonymous Coward | about 4 months ago | (#45729845)

"enemys"...

There is no such word.

Re:Finally some better 'Ai' (3, Funny)

Diss Champ (934796) | about 4 months ago | (#45730097)

It's just another example of the 'Approximate Spelling' technique. The parent poster is illustrating significant savings in mental energy.

Been there (4, Funny)

frovingslosh (582462) | about 4 months ago | (#45729781)

I remember Intel doing something like this back in the days of the 386, except without the energy savings.

Re:Been there (0)

Anonymous Coward | about 4 months ago | (#45730121)

Actually it was Pentium [wikipedia.org] which was a precursor for these processors.

Fuzzy Logic anyone? (4, Informative)

kbdd (823155) | about 4 months ago | (#45729787)

Fuzzy logic was also supposed to save energy (in the form of requiring less advanced processors) by replacing computation-intensive closed-loop systems with table-driven approximate logic.

While the concept was interesting, it never really caught on. Progress in silicon devices simply made it unnecessary. It ended up being used as a buzzword for a few years and quietly died away.

I wonder if this is going to follow the same trend.

Didn't Intel already try that with the P5? (0)

Anonymous Coward | about 4 months ago | (#45729795)

nuff said

I Use This (1)

DexterIsADog (2954149) | about 4 months ago | (#45729841)

I do this all the time. People are sometimes surprised when I can calculate an answer in a couple of seconds that takes other people half a minute or more, and my answer is within a few integers (or
Saves me energy, too.

Maybe now I can get respect here when I say (0, Funny)

Anonymous Coward | about 4 months ago | (#45729847)

FSRT POST!!!

Stop using floats (-1)

Anonymous Coward | about 4 months ago | (#45729957)

You don't need floating point precision unless you're doing science. Not even in the realm of computer graphics, because sub pixel precision is hilariously unnecessary.

It's not hard (1)

Red Jesus (962106) | about 4 months ago | (#45730043)

"If I asked you to divide 500 by 21 and I asked you whether the answer is greater than one, you would say yes right away," Raghunathan said. "You are doing division but not to the full accuracy. If I asked you whether it is greater than 30, you would probably take a little longer, but if I ask you if it's greater than 23, you might have to think even harder. The application context dictates different levels of effort, and humans are capable of this scalable approach, but computer software and hardware are not like that. They often compute to the same level of accuracy all the time."

To determine whether a/b is greater than 1, it is sufficient to check whether a > b (for positive a and b).

To determine whether a/b is greater than c, it is sufficient to check whether a > bc (again assuming b > 0).

Multiplication already consumes less time and energy than division on modern computers, so I do not see why they needed to modify their instruction set to realize such gains.
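
(In code form, a trivial sketch; note the b > 0 assumption:)

    def ratio_greater_than(a, b, c):
        """True iff a / b > c, computed without a division (assumes b > 0)."""
        return a > b * c

    print(ratio_greater_than(500, 21, 1))    # True
    print(ratio_greater_than(500, 21, 23))   # True  (500/21 is about 23.8)
    print(ratio_greater_than(500, 21, 30))   # False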

Re:It's not hard (1)

femtobyte (710429) | about 4 months ago | (#45730217)

However, multiplying "simpler" numbers might be faster. For example, I can multiply 20*30 in my head faster than 21.3625*29.7482 (YMMV). Rounding 21.3625*29.7482 to 20*30 might be "good enough" for many purposes, and you can even go back and keep more digits for a small number of cases where it's too close to call with the approximation.

Re:It's not hard (0)

Anonymous Coward | about 4 months ago | (#45730619)

He's not talking about computers, he's talking about *you*, a person, doing multiplication. It's a fucking example!

This is just a basic demonstration that a less precise question can be answered faster than a more precise one.

Computers would be doing much more complicated calculations using more sophisticated algorithms but the larger point stands. If an approximate answer is good enough, you can spend less time figuring it out.

Re:It's not hard (1)

wonkey_monkey (2592601) | about 4 months ago | (#45730927)

I do not see why they needed to modify their instruction set to realize such gains.

It was just a generic example to give the casual reader a basic grasp of the idea, not a specific scenario they'll be applying their process to.

Physics doesn't care about complete precision (0)

Anonymous Coward | about 4 months ago | (#45730069)

It cares about knowing just how precise you are.

That is, measurements are reported 1.2+/-0.1.

Re:Physics doesn't care about complete precision (1)

camperdave (969942) | about 4 months ago | (#45730421)

Physics doesn't care about complete precision

But if I don't know how precisely I know a particle's momentum, how can I tell how vague I have to be about its position?

Great idea! What could possiblity go wrong... (0)

Anonymous Coward | about 4 months ago | (#45730139)

Computer:
  - Let me approximate this cipher while I encrypt your bitcoin wallet private key....

MPEG, JPEG, MP3, etc (0)

Anonymous Coward | about 4 months ago | (#45730143)

Isn't this the idea behind advanced video and audio compression? Or any other "lossy" technique. You throw away data (precision) that isn't necessary to achieve an acceptable experience.

Could be cool if you could arbitrarily turn down a processor's precision to save power.

Half-precision (3, Interesting)

michaelmalak (91262) | about 4 months ago | (#45730219)

GPUs have already introduced half-precision [wikipedia.org] -- 16-bit floats. An earlier 2011 paper [ieee.org] by the same author as the one in this Slashdot summary cites a power savings of 60% for an "approximate computing" adder, which isn't that much better than just going with 16-bit floats. I suppose both could be combined for even greater power savings, but my gut feeling is that I would have expected even more savings once the severe constraint of exact results is discarded.
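
(For a feel of what 16-bit floats give up, the standard library can round-trip a value through IEEE half precision; "e" is the half-precision struct format in Python 3.6+. Illustrative sketch only.)

    import struct

    x = 3.14159265358979
    x16 = struct.unpack("e", struct.pack("e", x))[0]   # nearest half-precision value
    print(x16)               # 3.140625: roughly 3 decimal digits survive
    print(abs(x - x16) / x)  # relative error around 3e-4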

Is that not what Approximation Algorithms are for? (1)

wisnoskij (1206448) | about 4 months ago | (#45730243)

And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximate algorithm.

I do not think you will ever do any better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.

Re:Is that not what Approximation Algorithms are f (1)

tlhIngan (30335) | about 4 months ago | (#45731203)

And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximate algorithm.

I do not think you will ever do any better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.

No, it's not. Approximation algorithms use exact computations to approximate a model. The problem is the exact computations themselves: they cost a lot of power.

If instead you just need an approximation, you can enable an "approximate" mode for the calculation and the system gets you an approximate answer, which costs about 50% of the energy it takes to do an exact one.

For calculations like video and audio, that means the GPU consumes much less power, as those applications are far more tolerant of approximate answers and the results are discarded shortly afterwards anyway.

If you don't care for the exact value, then you enable approximate calculations and save the energy of having to do an exact calculation. This is different from using an approximation algorithm on a normal computer where you calculate everything exactly and then fake approximation.

And yes, even when you're doing approximate calculations, there are times you need to do exact calculations - e.g., if you're iterating over lines of video, your iterator needs to be exact while the actual data may only need to be approximate. The proper CPU architecture has to allow for this.
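
(A purely hypothetical sketch of that split; there is no real hardware "approximate mode" here, the lossy arithmetic is just simulated by zeroing low-order bits of the pixel math while the loop index stays exact:)

    def approx_add(a, b, drop_bits=4):
        """Simulate a cheap, lossy adder by zeroing the low-order bits of the result."""
        mask = ~((1 << drop_bits) - 1)
        return (a + b) & mask

    row = [17, 200, 133, 90, 255, 3]
    brightened = []
    for i in range(len(row)):                                 # the iterator stays exact
        brightened.append(min(255, approx_add(row[i], 10)))   # the pixel data may be approximate
    print(brightened)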

Re:Is that not what Approximation Algorithms are f (1)

wisnoskij (1206448) | about 4 months ago | (#45731383)

If you don't care about the exact value, you can use a specific algorithmic approximation, which normally gives you many orders of magnitude less computation time.

Drones (1)

srussia (884021) | about 4 months ago | (#45730257)

From TFS:

'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption.'

I am, for one, welcoming our new approximately accurate, longer-range drone overlords.

Overclocked GPUs, ASIC, analog? (1)

ezdiy (2717051) | about 4 months ago | (#45730307)

SHA-256 double-hash applications were probably the first to use this on a massive scale. It's actually OK to ramp the clock/voltage up 50% and get a 30% higher hash rate at the cost of 5% wrong answers (and a halved MTBF). An ASIC miner chip giving wrong answers now and then because of an imperfect mask process (even before overclocking) is common too.

However, the numbers for standard-cell ASIC designs don't look very favourable, certainly not "doubling", much less an energy saving (on the contrary, at a ballpark 10-30% overclock you reach the point of diminishing returns, and only if you don't care much about MTBF).

Now, what would be interesting is actual "analog" computers, i.e. anywhere between 4 and infinitely many states per cell; there is literally too much wasted "potential" nowadays. NAND flash chips do it already because they are about to hit the limits of cost-effective lithography (10nm?).
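
(The reason mining tolerates a flaky, overclocked chip is that a wrong answer is cheap to catch: any candidate the hardware reports can be re-verified exactly with one double SHA-256. A rough sketch; the header/nonce/target setup is made up and is not real block serialization.)

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def verify_candidate(header: bytes, nonce: int, target: int) -> bool:
        """Exact re-check of a result an error-prone miner claims to have found."""
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        return int.from_bytes(digest, "little") < target

    # A bogus answer from a glitchy chip simply fails verification and gets discarded.
    print(verify_candidate(b"example-block-header", 12345, 2**252))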

Glad this is happening (1)

Anonymous Coward | about 4 months ago | (#45730361)

Fuzzy logic and all that jazz really should be used more in computing.
It could save considerable power, or allow for far more computation in a smaller space.

So much stuff in computing only requires decently accurate results. Some requires even less accurate results.
If something was off by one pixel during one frame when it was moving, big deal, no loss.

Not to mention how great it would be for the sake of procedural noise.
You want something that isn't too random, but is just one value messed up a little, throw it through a fuzzy command and out it comes with a random offset.
That'd now be two commands compared to the usual few it'd take to set a value to itself + a random value, then set the possible offsets for the random command.
Or how about procedural generation in games, it could be used in so many areas of animation, texturing and the like.
Or how about AI? It would work wonders for AI; it'd massively simplify the logic required to implement a simple expert machine.
It'd even make a real AI even easier to do, more so if you made these processors massively parallel.

Imagine a GPU of these, or even a set area of a GPU dedicated to fuzzy calculations. Might happen in the next 10 years, I sure hope so. (I'd think APUs might be a bigger thing by then though, or early 3D processors, who knows, so many routes it might take soon)
All I know is the future of processing is going to be FUCKING AWESOME in the coming few decades, it is going to transition so much that our computers will look like toasters.
Of course, not those smart ones. Does Anyone Want Any Toast [youtube.com]

Computation is not the big energy drain (4, Interesting)

Ottibus (753944) | about 4 months ago | (#45730367)

The problem with this approach is that the energy used for computation is a relatively small part of the whole. Much more energy is spent on fetching instructions, decoding instructions, fetching data, predicting branches, managing caches and many other processes. And the addition of approximate arithmetic increases the area and leakage of the processor, which increases energy consumption for all programs.

Approximate computation is already widely used in media and numerical applications, but it is far from clear that it is a good idea to put approximate arithmetic circuits in a standard processor.

Approximately once a month.. (2)

DigitAl56K (805623) | about 4 months ago | (#45730533)

.. this story or a slight variant gets reposted to Slashdot in one form or another.

Re:Approximately once a month.. (0)

Anonymous Coward | about 4 months ago | (#45730675)

One of these days it will be the year of Linux on the desktop, just as soon as games start being available on Linux, I think.

Well, yeah (0)

Anonymous Coward | about 4 months ago | (#45730555)

It's a fairly common thing when perfect accuracy is not required. It's easier to check that the distance from coord A to coord B is less than X on each axis than to do the full Pythagorean calculation. It may seem a small increase in efficiency, but when it's being done pairwise for a few hundred entities every 100ms it adds up fast.
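
(A tiny sketch of that kind of shortcut, plus the related trick of comparing squared distances to skip the sqrt entirely; illustrative only:)

    import math

    def near_exact(ax, ay, bx, by, r):
        return math.hypot(ax - bx, ay - by) < r        # full Pythagoras, with a sqrt

    def near_cheap(ax, ay, bx, by, r):
        return abs(ax - bx) < r and abs(ay - by) < r   # per-axis box test: no sqrt, slightly generous

    def near_squared(ax, ay, bx, by, r):
        dx, dy = ax - bx, ay - by
        return dx * dx + dy * dy < r * r               # exact result, still no sqrt

    print(near_exact(0, 0, 3, 4, 5), near_cheap(0, 0, 3, 4, 5), near_squared(0, 0, 3, 4, 5))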

DUPE- and it's nonsense anyway (0)

Anonymous Coward | about 4 months ago | (#45730639)

ALL maths generally done in the 'floating point' domain is calculated to some APPROXIMATE accuracy. If this worthless clown-shoe excuse of a professor had the first clue, he'd understand this fundamental fact of applied computer engineering.

32-bit floating point calculates with less power than 64-bit at the same throughput, with the same type of electronic solution.

Markov Chains and the like already handle the statistical concept of "maybe this" or "maybe that" at known statistical probabilities.

The biggest MOUTHS at University are, sadly, all too frequently self-promoting morons. They do not seek to impress their associates in the same field, but seek to seem 'clever' to a more credulous general academic audience, like their bosses.

And to you who are reading this but not understanding a word I say: try reading any decent primer on NUMERICAL ANALYSIS. India is famous for its mathematicians, and its cultural respect for the field of maths, so sadly plenty of Indian conmen use their Indian heritage to pass themselves off as some form of maths genius to unsuspecting fools. What was that con Slashdot promoted a little while ago? The new 'Indian' method for super compression of data, or was it the new 'Indian' method of storing extraordinary amounts of data in a pattern printed by an inkjet printer? I think both cons got serious time here.

Computing already approximate (0)

Anonymous Coward | about 4 months ago | (#45730951)

I'm sure it's a matter of degree, but as it is, computing is already approximate due to the finite precision of computer arithmetic. There are only 2^(bits) numbers that can be exactly represented on a computer when you've allocated "bits" bits to representing numbers. When you solve for the square root of 2 (call it sqrt(2)) on a computer, the answer you get back is not sqrt(2) but sqrt(2) + epsilon, where epsilon has some known bound on its magnitude. When you use an ODE solver to numerically evaluate a differential equation, part of the settings (even if they're just the defaults) is the error tolerance. Similar statements apply to all kinds of numerical algorithms: solving nonlinear equations, optimization routines, etc. What are some of the key differences in this approximate-computing approach that distinguish it from just cranking down the tolerance on standard algorithms? Higher robustness to errors, randomness, etc.?

Nah, I didn't RTFA.

I see what they did here (1)

ihtoit (3393327) | about 4 months ago | (#45731003)

1. collect museum-piece Pentium systems
2. exploit FDIV bug
3. submit blurb to Slashdot
4. ...
5. Profit!

Approximate computer (1)

Iniamyen (2440798) | about 4 months ago | (#45731579)

This post was going to contain something insightful and funny, but because I'm using an approximate computer, it contains neither.