
DARPA Targets Computing's Achilles Heel: Power

timothy posted more than 2 years ago | from the never-a-good-time-to-buy-a-computer dept.


coondoggie writes "The power required to increase computing performance, especially in embedded or sensor systems, has become a serious constraint and is restricting the potential of future systems. Technologists from the Defense Advanced Research Projects Agency are looking for an ambitious answer to the problem and will next month detail a new program they expect will develop power technologies that could boost system power efficiency from today's 1 GFLOPS/watt to 75 GFLOPS/watt."
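To put the summary's numbers in concrete terms, here is a rough back-of-the-envelope conversion (a minimal Python sketch; the two workload sizes are illustrative assumptions, not DARPA figures):

    # What a GFLOPS/watt figure means for the power draw of a fixed workload.
    # The 1 and 75 GFLOPS/watt values come from the summary above; the two
    # example performance targets are assumptions chosen for illustration.

    def watts_needed(perf_gflops, gflops_per_watt):
        """Sustained power (watts) for a given performance at a given efficiency."""
        return perf_gflops / gflops_per_watt

    for label, perf_gflops in [("1 TFLOPS sensor/embedded node", 1e3),
                               ("1 PFLOPS rack", 1e6)]:
        today = watts_needed(perf_gflops, 1.0)    # roughly today, per the summary
        goal = watts_needed(perf_gflops, 75.0)    # the program's stated goal
        print(f"{label}: {today:,.0f} W -> {goal:,.1f} W")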


100 comments


let me answer that with a question (2)

FrozenFood (2515360) | more than 2 years ago | (#38858699)

Does the government know of an upcoming energy crysis?

Re:let me answer that with a question (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38858719)

No, the problem is getting hold of raw materials for batteries. Mobile computing is on the rise and the west doesn't want to be too dependent on foreign mineral deposits. More efficient computers = smaller batteries = smaller amounts of lithium etc needed.

Re:let me answer that with a question (1)

FrozenFood (2515360) | more than 2 years ago | (#38858739)

I think another fundamental question is: why does the entire western world need such large amounts of processing power on the move?

Re:let me answer that with a question (-1)

Anonymous Coward | more than 2 years ago | (#38858763)

Do you realize how much CPU is required to decode h.264 1080p pr0n? What's the use of a laptop without it?

Re:let me answer that with a question (2)

0123456 (636235) | more than 2 years ago | (#38858859)

Do you realize how much CPU is required to decode h.264 1080p pr0n? What's the use of a laptop without it?

We used to do that on a 200MHz dual-core ARM with some hardware decoding assist. If I remember correctly the whole system used less than 1.5W and much of that was the video encoder for the TV (we were using analogue component at the time, not HDMI).

And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38858951)

we were using analogue component at the time, not HDMI

In other words, you were outputting 480i. And your source was probably not better than 480p.

And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.

Actually that depends on the bitrate of the encoding far more than whether it's "1080p" or not. I've seen plenty of "1080p H.264 video" that's got lousy quality the moment there's any action.

Re:let me answer that with a question (1)

0123456 (636235) | more than 2 years ago | (#38859473)

In other words, you were outputting 480i. And your source was probably not better than 480p.

Uh, no. Amongst other things I was writing the drivers to control the video output, so I think I know what we were displaying.

Re:let me answer that with a question (4, Informative)

SeaFox (739806) | more than 2 years ago | (#38861001)

And with GPU assist an Atom with a low-end GPU can happily play 1080P H.264.

Actually that depends on the bitrate of the encoding far more than whether it's "1080p" or not. I've seen plenty of "1080p H.264 video" that's got lousy quality the moment there's any action.

Not to mention what profile of h.264 was being used. High Profile requires much more computational power than Main. We're also assuming the video can be GPU accelerated. You can't just take any h.264 video and get hardware acceleration; the video has to be encoded following certain rules about bitrate, b-frames, etc., otherwise it will all be decoded in software.

Ten Years Ago ... (0)

Anonymous Coward | more than 2 years ago | (#38861363)

Ten years ago IBM developed the Cell processor, which was capable of over 200 GFLOPS while using less than 80 watts of power (2.5 GFLOPS/watt), and which was manufactured on a 90nm process. So it's a natural assumption that using a 32nm or a 22nm process IBM could achieve greater than 65 GFLOPS/watt using PowerPC coupled with vector cores.

Re:Ten Years Ago ... (1)

Chris Burke (6130) | about 2 years ago | (#38868919)

Sorry, but the natural assumption that power consumption decreases with decreasing transistor size went out the window pretty much right around the 90nm node. That was the inflection point where leakage went from being a minor nuisance to a major contributor comparable to switching power. Leakage goes up with decreasing transistor size, and so now it's a struggle to make sure that a new generation of parts uses merely the same amount of power as the previous generation.

Lowering the power of devices today is more about driving your design with reducing power in mind, rather than counting on process technology to do it for you.

Re:let me answer that with a question (4, Interesting)

FooAtWFU (699187) | more than 2 years ago | (#38858823)

It occurred to me the other day that, while I have been programming and working with network monitoring tools and the like for a while, and I can get an email alert (or text message) whenever a piece of equipment goes down, the rest of the world doesn't have that sort of capability. A big chunk of California Highway 1 could fall into the ocean, and people could drive off after it, and no one would notice until someone called it in. If my hard disk is on fire, I can get a message, but if the woods are on fire, you need to wait for someone to see the smoke.

Sensors and the like are pretty awesome to have.

Re:let me answer that with a question (1)

postbigbang (761081) | more than 2 years ago | (#38859311)

The problem is connectivity with someone who cares. The last mile is notoriously expensive, even with wireless. You could put lots of sensors along Hwy 1, but you'd need something to say where it started and stopped sliding into the ocean. You can actually run a piece of wire, calibrate it, and use two of them to work out where the slide starts and stops by using time domain reflectometry, the technique used to find data cable faults. Somewhere, that wire needs to be connected so that a computer will cough up an alert when conditions merit. Every other sensor has that same kind of cost.
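For anyone curious, the TDR arithmetic for locating a fault on a calibrated wire is tiny (a sketch; the cable velocity factor and the example delay are assumed values, not measurements):

    # Time domain reflectometry: a pulse reflects off a fault in the wire, and
    # distance = (propagation velocity * round-trip time) / 2.
    C = 299_792_458          # speed of light in vacuum, m/s
    VELOCITY_FACTOR = 0.66   # assumed; calibrate for the actual wire

    def fault_distance_m(round_trip_seconds):
        return VELOCITY_FACTOR * C * round_trip_seconds / 2

    # Example: a reflection arriving 10 microseconds after the pulse was sent
    print(f"fault at roughly {fault_distance_m(10e-6):.0f} m")   # ~989 m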

Re:let me answer that with a question (1)

XrayJunkie (2437814) | more than 2 years ago | (#38863121)

Regarding the smoke in the forest: there are many research projects on smart grids of sensors. These sensors communicate with each other, creating communication lanes to save energy. They are very small and can be dropped from a plane. These sensor networks can then monitor humidity and temperature. As a result, they can immediately notify the operators about fires.
Ah, and they are cheap as well!

1984 called. (1)

mosel-saar-ruwer (732341) | more than 2 years ago | (#38863935)


Sensors and the like are pretty awesome to have.

Indeed.

- BIG BROTHER

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38858865)

Perhaps because some of us in the "western world" figured out that by buying a fast computer that you spend less time waiting for, you are essentially buying time. Of course, most people can barely get past solitaire, but don't act like it's any different in some other hemisphere.

Re:let me answer that with a question (0)

Curunir_wolf (588405) | more than 2 years ago | (#38859005)

How else will we keep track of you, Citizen? We need the drones to be able to find you when you step out of line.

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38859049)

User tracking?

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38859325)

This. Damn dynamic webpages and all the user tracking crap mean you can hardly use the internet on old computers. I still use the internet in a "pull" rather than "push" fashion, and if I open a few windows and tabs the Javascript will eat me alive. Turn it off and most of the page is now broken but readable.

Re:let me answer that with a question (1)

marcosdumay (620877) | more than 2 years ago | (#38859075)

Why do we need so much energy? Why do we need so much processing power? Why do we need so much stuff?

All those questions are along the same lines. They all have the same answer. To the best of my knowledge the answer is either a deterministic one based on Darwinism, or "people are addicted to power".

Re:let me answer that with a question (1)

marcovje (205102) | more than 2 years ago | (#38859159)

Indeed. There's an implicit assumption that the utility of an embedded device scales linearly with its processing power.

But of course, a new media format needs to be introduced sooner or later, since the prices of Blu-ray are already starting to lose their "premium" justification and become just plain ordinary.

Re:let me answer that with a question (1)

Anonymous Coward | more than 2 years ago | (#38858885)

I don't know if DARPA has other things in mind, but the main reason most research into the power efficiency of computing is done is that energy consumption is becoming more and more of both a cost and a logistical problem for supercomputing clusters. In fact, in the gov't-sponsored research on what it'll take for us to develop an exaflop computer (two years old now, I grant you), significantly increased power efficiency is considered absolutely necessary. Mobile computing is more of an inspiration for efficiency than a matter of national policy; I know of no effort on the government's part to pressure mobile manufacturers into reducing battery size rather than increasing the life of a charge.

Re:let me answer that with a question (1)

stanlyb (1839382) | more than 2 years ago | (#38859323)

Like the lithium ore in Afghanistan? But, but, we won the war, now everything belongs to us... I mean the USA.

Re:let me answer that with a question (3, Interesting)

Luckyo (1726890) | more than 2 years ago | (#38859905)

The problem with lithium is that it isn't mushrooms and berries. You can't just walk in there and pick it up. It's also not oil. You can't just put a hole in the ground, connect it to the pumping machinery and have oil. You need actual ore mines, with huge, easy-to-sabotage, hard-to-fix machinery.

And finally, it's solid and heavy. It's a total bitch to move from the center of a war-torn nation that has the world's best specialists in asymmetric warfare fighting against you, both economically and in terms of general feasibility.

Re:let me answer that with a question (5, Informative)

Anonymous Coward | more than 2 years ago | (#38860191)

In a pinch you can extract lithium from sea water. That's basically what a lithium deposit is... an old sea that dried up and left the salts. Lithium isn't a big fraction of a battery's cost, weight or volume. Please everyone stop being silly. The cobalt that is often used in lithium batteries is far more expensive, rare and used in larger proportions. We just don't call them cobalt batteries so no one knows about that part.

And my mod points just expired. (1)

UpnAtom (551727) | more than 2 years ago | (#38861119)

Hope someone else bumps you.

Re:let me answer that with a question (1)

Luckyo (1726890) | more than 2 years ago | (#38864211)

In a pinch, you can extract gold from sea water as well. That doesn't make it viable to do so either.

Re:let me answer that with a question (5, Informative)

unts (754160) | more than 2 years ago | (#38858745)

The problem is not just generating the power... it's delivering it and consuming it without breaking/melting. And that's what they're getting at here - getting more FLOPS per watt... not finding out how to push more watts into a system. A silly amount of the energy going into a supercomputer comes out as heat... and a silly amount of energy is then used to remove that heat. Hopefully, by significantly improving the energy efficiency of chips and systems, we can make them a lot more powerful without them needing a whole lot more power. And I haven't even mentioned the mobile/embedded side of the spectrum, where it's about battery life and comfortable operating temperatures... the same energy efficiency goals apply.

This is the sort of thing we over the pond are very interested in too. Like for example *cough* the Microelectronics Research Group [bris.ac.uk] that I'm a part of.

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38860651)

Damn, I was hoping that you'd be one of my lecturers that I could start stalking, but according to your homepage you're just a lowly research student :(

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38860697)

What's even crazier, it turns out you were a CSE student! Perhaps you're worth stalking after all! (I, too, do CSE)

Re:let me answer that with a question (2)

ultramk (470198) | more than 2 years ago | (#38862513)

Erm, not to be overly pedantic, isn't *all* of the energy consumed by a supercomputer (or any other device) eventually converted into heat? First law of thermodynamics and all that?

Re:let me answer that with a question (4, Interesting)

stevelinton (4044) | more than 2 years ago | (#38858895)

In a sense. There is a widespread view that we will need 1 exaflop supercomputers by roughly 2019 or 2020 for a whole range of applications, from aircraft design and biochemistry to processing data from new instruments like the Square Kilometre Array. On current trends, such a computer will need gigawatts of power (literally), which amongst other things would force it to be located right next to a large power station that wasn't needed for other purposes. This is felt to be a bit of a problem, and this DARPA initiative is just one small part of the effort to tackle it and get the exaflop machine down to 50MW or so, which is the most that can be routinely supplied by standard infrastructure.
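The arithmetic behind the "gigawatts" and "50MW" figures is worth spelling out (a quick sketch using only the numbers already in the summary and this comment):

    # Power draw of a 1 exaflop (1e9 GFLOPS) machine at different efficiencies.
    EXAFLOP_IN_GFLOPS = 1e9

    def power_watts(gflops_per_watt):
        return EXAFLOP_IN_GFLOPS / gflops_per_watt

    print(power_watts(1.0) / 1e9, "GW at ~1 GFLOPS/watt (roughly today)")    # 1.0 GW
    print(power_watts(75.0) / 1e6, "MW at 75 GFLOPS/watt (the DARPA goal)")  # ~13.3 MW
    print(EXAFLOP_IN_GFLOPS / 50e6, "GFLOPS/watt needed to fit in 50 MW")    # 20.0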

Re:let me answer that with a question (1)

Wierdy1024 (902573) | more than 2 years ago | (#38859189)

With an exaflop computer, simulating the human brain is looking like it might be possible. If we can get a simulated brain working as well as a real brain, there's a good chance we can make it better too, because our simulated brain won't have the constraints that real brains have (i.e. not limited by power/food/oxygen supply, not limited by relatively slow neurones, and doesn't have to deal with cell repair and disease).

Basically, if current models of the brain are anywhere near correct, and current estimates of computation growth are close, it seems there is a real possibility of a fully simulated skynet in 30-40 years.

Re:let me answer that with a question (1)

Johann Lau (1040920) | more than 2 years ago | (#38859229)

what makes a brain "better"? thinking faster, or thinking better thoughts?

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38859871)

And in turn - what criteria do you use to judge how "good" a thought is?

Re:let me answer that with a question (1)

Johann Lau (1040920) | more than 2 years ago | (#38859937)

What's your point, if you even have one? Just pouting? If you wanna be all relativistic, faster computing doesn't really help with the heat death of the universe so it's an exercise in futility, as are "good thoughts" no matter how they're defined. My point is, we're already derping with our current "hardware", why would supercomputers be put to any better use?

who we white man (0)

Anonymous Coward | more than 2 years ago | (#38861455)

I've written code to calculate optical forces, using some nice, freely available, high-quality libs/compilers, i.e. Intel's noncommercial compiler suite and MKL, HDF5, etc. Now I'm looking into how easily I can replace the LAPACK calls with something that uses CUDA or OpenCL for the heavy lifting, which is mostly solving large linear systems.

I've estimated that I can find the EM field scattered from a fairly general cubic array of 20*20*20 spheres, each with individual radii, complex permittivities, complex permeabilities, positive/negative refractive indices, etc., each at arbitrary positions in space, in double precision...

Anyway, I've estimated the working set should fit in ~10 GB of memory, and so it seems viable in 16 GB. One issue is that I've only got a gig of memory near the GPU.

These exascale computers would also be used by scientists, etc., not (just) for Rage III.

Re:let me answer that with a question (2)

Rockoon (1252108) | more than 2 years ago | (#38860785)

With an exaflop computer, simulating the human brain is looking like it might be possible.

Take a moment, relax, and then try to answer this question: What does computational speed have to do with it?

The point is that simulations are not linked to computational speed. Some simulations that we do today are performed thousands of times faster than "reality" while most others that we do today are performed thousands, or even millions of times slower. The speed of the simulation is irrelevant to their existence, so stop pretending that speed has any sort of importance to simulating something like a brain. A Turing machine is a Turing machine. Period.

Pretty much every day of your recent life you have observed the results of simulations that are trillions of times slower than reality, namely the simulation of the propagation of light. We don't need faster computers to simulate a brain... we just need to figure out how to simulate a brain.

tl;dr: The parent shouldn't talk about things that he doesn't understand.

Re:let me answer that with a question (1)

Anonymous Coward | more than 2 years ago | (#38862669)

Fair enough, although it shouldn't be forgotten that just the memory requirement for a simulation on the scale of an entire human brain is huge (the specific order of magnitude necessary for such a computation is unknown as it is unclear at precisely what level the human brain does its computation). A modern supercomputer can't simulate a human brain even at a trillion+ times slowdown due to simply not having the memory for the computation.

Furthermore, for medical use, a million times slowdown on a simulation is often acceptable (I saw a talk a few months ago about developing an ultra-fast supercomputer with chips specialized for simulating protein interactions which could do some useful but very small simulations with as little as a thousand times slowdown), but for using the simulated brain as an intelligence, more than an ~10x slowdown is probably worthless.

Re:let me answer that with a question (1)

Wierdy1024 (902573) | more than 2 years ago | (#38863249)

If you simulate a human brain a trillion times slower than realtime, and want to spend 10 simulated years teaching it stuff, you're going to be a very old man by the time your experiments complete...

Speed is important...

Re:let me answer that with a question (1)

somersault (912633) | more than 2 years ago | (#38863961)

I think the point is that we already have human brains that we can teach. There's no point having a computer pulling down a whole power station's worth of power just to simulate what is in the end only another human brain.

I am interested in AI and physics simulations myself so I'm not trying to say that simulating a brain isn't an interesting goal that might have something to teach us - but IMO if your end goal is useful intelligence for using in everyday life, there is no point in it. We already have billions of other human intelligences that we can interact with.

We have created robots and AI that are far better than humans at certain physical and mental tasks, and we're starting to see progress with programs that are good at more general knowledge (IBM's Watson), which is a good step towards being able to actually hold a conversation of a sort with a computer. It's obviously not the same as human intelligence, but human intelligence is quite poorly suited to certain tasks. Humans can get bored, tired, their minds can wander, they can fall prey to emotional problems or full-blown mental disorders... having our computers subject to these things would be rather counter-productive outside of academic research into these individual phenomena... and there are all sorts of ethical questions to take into account when actually simulating a whole brain in that level of detail too.

Re:let me answer that with a question (1)

petermgreen (876956) | more than 2 years ago | (#38863645)

Some simulations that we do today are performed thousands of times faster than "reality" while most others that we do today are performed thousands, or even millions of times slower. The speed of the simulation is irrelevant to their existence

For a simulation to be useful it must reach desired results in a reasonable amount of time. If you are simulating something that only takes a few milliseconds in real life, then a simulation that runs a thousand times slower than reality will still feel basically instant, and one a million times slower will be done in around an hour. OTOH if you are simulating something that takes years in real life, then with a 1000x slowdown your simulation will be running for millennia.
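The same point as a two-line calculation (wall-clock time is just simulated duration times the slowdown factor; the example durations are illustrative):

    # Wall-clock time = simulated duration * slowdown factor.
    def wall_clock_seconds(simulated_seconds, slowdown):
        return simulated_seconds * slowdown

    HOUR, YEAR = 3600, 3600 * 24 * 365

    # A 5 ms event at a million-times slowdown still finishes within the hour:
    print(wall_clock_seconds(5e-3, 1e6) / HOUR, "hours")       # ~1.4 hours

    # Ten simulated years at a 1000x slowdown takes ten millennia:
    print(wall_clock_seconds(10 * YEAR, 1e3) / YEAR, "years")  # 10,000 years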

Re:let me answer that with a question (1)

JasterBobaMereel (1102861) | more than 2 years ago | (#38862951)

Exaflop computer

    - limited by power constraints (as per this article)
    - limited by connectivity (nowhere near as many connections as a neurone)
    - limited by lack of unit repair (has downtime when repair needed)
    - limited by possibility of rogue programs, and damage

Slow neurones and the slow links between them don't actually seem to be an issue ...?

Seems more limited than a brain to me ...?

Re:let me answer that with a question (1)

Wierdy1024 (902573) | more than 2 years ago | (#38863261)

But each of those limitations improves approximately with Moore's law. The brain hasn't changed much this century. At some point one will surpass the other.

Re:let me answer that with a question (1)

JasterBobaMereel (1102861) | more than 2 years ago | (#38876405)

Computers have been faster than brains for most of their history, the thing that seems to matter is not speed but connections ...

Computers still have relatively limited numbers of these (compared to brains)

More of what we have now does not seem to be the solution; we are just getting power-hungry behemoths that are very good at hyper-complex tasks but still no good at what we think is simple...

Moore's Law has a limit: we are nearly at the atomic scale and quantum effects are becoming more and more of an issue ... more of the same is not an option

We also have AI researchers who are using relatively simple but highly connected machines that can outperform most supercomputers at some tasks .....

Re:let me answer that with a question (1)

dkf (304284) | more than 2 years ago | (#38863063)

With an exaflop computer, simulating the human brain is looking like it might be possible.

It's looking like it's going to be rather more complex than that. Human brains use lots of power (for a biological system) and they do that not by being able to switch circuits very rapidly, but rather by being massively parallel. How to map that into silicon is going to be really challenging because it will require a totally different approach to the current ones; dealing with failures of individual components will be really a large part of the problem. To what extent will the power consumption itself prove an issue? Nobody really knows until it happens; we have no idea if we'll think of a way to get around the problem. There's also the matter of the correct level of simulation. Do we need to model chemical reactions? (That would be computationally expensive.) Can we model using discrete logic synapse-equivalents? A low-enough level of simulation will assuredly allow us to model a brain with enough processing, but can we do it with enough less that we can do real-time processing without stupid levels of power consumption? I have absolutely no idea there. (It wouldn't be a perfect simulation, but it would be nice if we didn't need a perfect sim as that would be far more practical.)

Another issue is that it seems that embodiment is crucial for getting the kind of intelligence we have. Minds seem to need bodies, at least as far as we understand it from biological systems (our working examples). I find this a fascinating development of neuroscience, and wonder whether substituting a robot body would work as well (I guess we could use wireless to keep the two parts physically separate, which would reduce power management complexity). But would that interaction help or hinder individual research efforts? Damned if I know the answer there.

Re:let me answer that with a question (1)

Rockoon (1252108) | more than 2 years ago | (#38863257)

dealing with failures of individual components will be really a large part of the problem.

Highly doubtful, since the brain itself is very sloppy about the whole process. Neurons don't fire at exact thresholds, frequent permanent damage events plague them as we go through life, and even diet can have measurable effects on brain chemistry that affect how signals propagate as well as cause damage.

What I'm saying is that there is clearly an extremely high degree of redundancy built into brains because of the reality of physical randomness, and that there is no reason to believe that any small part of the system is essential in such a way that component failures are an obstacle.

Re:let me answer that with a question (1)

Chris Burke (6130) | about 2 years ago | (#38869027)

With an exaflop computer, simulating the human brain is looking like it might be possible.

The main problem of simulating a brain isn't the computational power required.

Re:let me answer that with a question (4, Insightful)

Teun (17872) | more than 2 years ago | (#38858919)

Why do you need an energy crisis to make something work more efficiently?
Considering energy does not come cheap, there is a very good commercial reason to save on one of the larger costs in computing (or any other activity).

And even though the US hosts the world leaders in denial of CO2-related climate change, it is still an ever more important consideration for many people, even in the US.

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38859529)

Necessity is the mother of invention. People don't think about problems until they are the next biggest thing to overcome in their path (and they have ideas on how to solve them).

Re:let me answer that with a question (1)

AHuxley (892839) | more than 2 years ago | (#38861255)

More a "Battle of Stalingrad" fuel supply convoy, fly in vs huge demand on the ground math problem.
Every aspect of fuel use and cooling is been looked at. From HQ servers, air conditioning, servers in a tank to sensor networks.
They all need lots of electrical power that comes from very long fuel supply networks.

Re:let me answer that with a question (1)

somersault (912633) | more than 2 years ago | (#38863855)

The word you are looking for is "crisis".

Crysis is a pun based on the Crytek company name and the aforementioned word.

Re:let me answer that with a question (0)

Anonymous Coward | more than 2 years ago | (#38864681)

Does the government know of an upcoming energy crysis?

Don't let the other commenters get you down. I got the subtle "does it run Crysis?" pun...

In b4 bitcoins (-1)

Anonymous Coward | more than 2 years ago | (#38858707)

Yes you can get 75x more bitcoins for your watts until the difficulty adjusts. Bitcoin, destroying the planet for a couple of measly coins.

One small caveat: (0)

Anonymous Coward | more than 2 years ago | (#38858731)

You need to run your processors just a hair above T_c.

Fear not, I have the answer (0)

Anonymous Coward | more than 2 years ago | (#38858935)

Hamsters, wheels and dynamos.

Turing Tax (5, Interesting)

Wierdy1024 (902573) | more than 2 years ago | (#38858969)

The amount of computation done per unit energy isn't really the issue. Instead the problem is the amount of _USEFUL_ computation done per unit energy.

The majority of power in a modern system goes into moving data around, and other tasks which are not the actual desired computation. Examples of this are incrementing the program counter, figuring out instruction dependencies, and moving data between levels of caches. The actual computation on the data is tiny in comparison.

Why do we do this then? Most of the power goes to what is informally called the "Turing Tax" - the extra things required to allow a given processor to be general purpose, i.e. to compute anything. A single-purpose piece of hardware can only do one thing, but is vastly more efficient, because all the logic for figuring out which bits of data need to go where can be left out. Consider it like the difference between a road network that lets you go anywhere and a road with no junctions in a straight line between your house and your work. One is general purpose (you can go anywhere), the other is only good for one thing, but much quicker and more efficient.

To get nearer our goal, computers are getting components that are less flexible. Less flexibility means less Turing Tax. For example video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else. For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.

The future of low power computing is to find clever ways of making special purpose hardware to do the most computationally heavy stuff such that the power hungry general purpose processors have less stuff left to do.
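Rough per-operation energy figures make the point vivid. The numbers below are assumed, commonly quoted order-of-magnitude estimates for roughly 45nm-era silicon (not values from this article), but the ratios are what matter:

    # Order-of-magnitude energy per operation (assumed illustrative figures).
    ENERGY_PJ = {
        "64-bit floating point op":        20,     # picojoules, assumed
        "64-bit read from on-chip cache":  50,     # assumed
        "64-bit read from off-chip DRAM":  2000,   # assumed
    }

    flop_pj = ENERGY_PJ["64-bit floating point op"]
    for op, pj in ENERGY_PJ.items():
        print(f"{op}: {pj:>5} pJ  (~{pj / flop_pj:.0f}x the arithmetic itself)")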

Re:Turing Tax (3, Informative)

Anonymous Coward | more than 2 years ago | (#38859209)

For comparison, an HD video camera can record 1080p video in real time with only a couple of Watts. A PC (without hardware encoder) would take 15 mins or so to encode each minute of HD video, using far more power along the way.

While it makes your point, you're actually off by orders of magnitude on both: a modern PC can easily encode at 2-4x realtime for 1080p... and a good hardware encoder often uses less than 100 milliwatts. A typical rule of thumb is that dedicated hardware is roughly 1000 times more efficient, power-wise, than a CPU performing the same task.
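Turning those rules of thumb into energy per minute of video (every wattage and speed below is an assumed round number, purely for illustration):

    # Energy to encode one minute of 1080p, software CPU vs. dedicated hardware.
    def encode_energy_joules(power_watts, speed_x_realtime, video_seconds=60):
        return power_watts * (video_seconds / speed_x_realtime)

    software = encode_energy_joules(power_watts=40.0, speed_x_realtime=3.0)  # assumed CPU
    hardware = encode_energy_joules(power_watts=0.1, speed_x_realtime=1.0)   # assumed ASIC

    print(f"software: {software:.0f} J, hardware: {hardware:.0f} J, "
          f"ratio ~{software / hardware:.0f}x")   # ~133x with these assumptions

With different assumed wattages the ratio moves around, which is why the "roughly 1000 times" figure above is only a rule of thumb.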

Re:Turing Tax (1)

Wierdy1024 (902573) | more than 2 years ago | (#38859387)

You are indeed correct - it all depends on the codec, desired PSNR, and bits/pixel available. For modern codecs, the motion search is the bit that takes most of the computation, and doing it better is a super-linear-complexity operation - hence both your numbers and mine could be correct, just for different desired output qualities.

The ratio, though, is a good approximate rule of thumb. I wonder how this ratio has changed as time has moved on? I suspect it may have become bigger as software focus has moved away from pure efficiency to higher-level designs, and CPUs have moved to more power-hungry superscalar architectures, but I would like some data to back up my hypothesis.

But you lose flexibility (2)

SmallFurryCreature (593017) | more than 2 years ago | (#38859537)

If you want to talk about encoding, anime fansubbers are at the forefront. The latest is 10-bit encoding. It has a lot of benefits, but its main downside is that there is no hardware for it; you need to run it on the CPU. Someday hardware like a GPU might support it, but that takes far too long to stay current.

That is the reason the general purpose CPU has won out so far, and why mobile phones and tablets come with them as the main computing unit: keeping up in hardware with the latest developments is just too slow.

You could in theory build a supercomputer that can run ONE task very fast. They existed; in fact the earliest computers WERE single-task machines... and they lost out because the next task might be totally different, and building a new machine for each task is slow and expensive.

The person below (Wierdy) talks about one bit of modern codecs... but this might change tomorrow, as indeed it has with 10-bit encoding.

There is a reason DVDs suck donkey balls. Open one up, look at what is inside, and wonder why the fuck any of it was needed when any PC could easily have dealt with a better format (max file size, subtitle format, etc.)... because DVD players were purpose-built devices and had to be designed ahead of current technology to be widely supported. DVD players, being purpose-built single-task hardware, started out obsolete and couldn't change. Of course, the advantage was that they were relatively cheap and became cheaper, BUT do you REALLY want your supercomputing to be this inflexible?

In many ways, the current GPU craze is nothing more than the math co-processor of yesterday, or the Windows chip on early video cards. They are useful but can't keep up with the rapid advances software can make.

The real money is in making generic hardware faster and more efficient because that is where the interesting stuff is happening. Profit-wise as well. What would you rather be selling, DVD players or iPads?

Re:But you lose flexibility (0)

Anonymous Coward | more than 2 years ago | (#38860361)

I think the current GPU craze tells us that the ideal we have (had?) been striving for, a CPU capable of cranking out as many instructions per second in sequence as possible, is not a perfect fit for our needs. There are probably many other components, arising from different trade-offs than those made for CPUs and GPUs, that would be valuable, and perhaps even Turing complete.

Re:Turing Tax (1)

TeknoHog (164938) | more than 2 years ago | (#38859701)

I have four words for you. Field, Programmable, Gate, and Array.

Re:Turing Tax (1)

willy_me (212994) | more than 2 years ago | (#38859995)

A couple more words: Power Hog.

- at least when compared to ASICs. But there are new developments in the area; see Silicon Blue Technologies [wikipedia.org]. It will be interesting to see how things work out in the future. Looks like all the players are trying to create power-efficient FPGAs.

Re:Turing Tax (1)

gtall (79522) | more than 2 years ago | (#38860179)

And some words for you: volume and change. If you have a large enough application, in the sense that you need millions of the things, and the application is set in stone forevermore, then ASICs are fine. If you ever intend to change it, or your run is small, FPGAs are a better choice.

Re:Turing Tax (1)

willy_me (212994) | more than 2 years ago | (#38860571)

or your run is small, FPGAs are a better choice

Yes, but the topic of discussion is power consumption not purchase price. My point was that FPGAs do not solve the problem of power consumption - at least not yet. They are getting better but then so are ASICs.

It appears that, looking forward, the best solution will be a combination of the two techniques. Specialized ASIC components glued together with FPGA elements. Most FPGA manufacturers already do this to a limited extent. It is common to see embedded CPUs in FPGAs - and I'm not referring to soft-CPUs.

Re:Turing Tax (1)

petermgreen (876956) | more than 2 years ago | (#38863925)

Yes, but the topic of discussion is power consumption not purchase price.

Power consumption is part of it but I don't think you can draw reasonable conclusions from power consumption alone. It's important but so are upfront cost and flexibility.

My point was that FPGAs do not solve the problem of power consumption - at least not yet. They are getting better but then so are ASICs.

Yes, ASICs are the most power-efficient way of performing a repetitive computation task because they have neither the data-pushing overhead of CPUs/GPUs nor the reconfigurable wiring overhead of FPGAs. However, putting a design into an ASIC is expensive, so it's only practical if you want a lot of copies of the design, plan to run each copy for a long time, and are bloody sure you have got your design right. For many, many workloads ASICs are out of the question.

So the more interesting question is how do FPGAs compare to CPUs and GPUs and that is going to be workload specific. IIRC the bitcoin miners found that FPGAs could give far more performance per watt than either CPUs or GPUs in their application BUT the upfront costs made it a hard sell. I'd expect crypto cracking to be similar.

It appears that, looking forward, the best solution will be a combination of the two techniques. Specialized ASIC components glued together with FPGA elements.

Mmm, for example, for floating-point-heavy tasks you could have floating point blocks (adders, multipliers, dividers and so on) with FPGA-like routing to move signals between them and some regular FPGA-style register/logic units to control them.

Re:Turing Tax (1)

bill_mcgonigle (4333) | more than 2 years ago | (#38859935)

Less flexibility means less Turing Tax. For example video encoder cores can do massive amounts of computation, yet they can only encode video - nothing else.

And a Turing machine makes sense when transistors are expensive. But what's the actual cost of adding an h.264 encoder to a hardware die today? I bet it's cheaper than the electricity cost for doing much encoding over the ownership time of the part.

I suppose DSPs, VMX, MMX, SSE, etc. can all be seen as ways this has held true over time as transistors have gotten cheaper. Heck, lots of modern CPU functions can approximate this trend to a certain extent.

At this point, it's a matter of deciding what can be done in a big enough volume to make enough of the customers happy enough to pay for it.

Back in the day I could buy an 80387 to make my 80386 better at math. I wonder if any of the existing chip designs allow plug-in logic close enough to cache to make them worthwhile. GPUs are nice, but out on the PCIe bus they have to bring their own computers with them.
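A back-of-the-envelope version of that bet (every number below is an assumption, not a figure from the thread):

    # Electricity cost of doing encodes in software over the life of the part.
    EXTRA_CPU_WATTS = 30       # assumed extra draw during a software encode
    HOURS_OF_ENCODING = 500    # assumed total over the machine's lifetime
    USD_PER_KWH = 0.12         # assumed electricity price

    cost = EXTRA_CPU_WATTS / 1000 * HOURS_OF_ENCODING * USD_PER_KWH
    print(f"~${cost:.2f} of electricity")   # ~$1.80 under these assumptions

Whether that beats the marginal die cost of an encoder block depends entirely on those assumptions and on how much encoding the owner actually does.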

Re:Turing Tax (4, Interesting)

Kjella (173770) | more than 2 years ago | (#38860617)

To get nearer our goal, computers are getting components that are less flexible.

Actually, computers have lost lots of dedicated processing units because it just wasn't worth doing in dedicated hardware, that's where for example softmodems (aka winmodems) came from. And with GPUs going from fixed pipelines to programmable shader units, they too have gone the other way. Dedicated hardware only works if you are doing a large number of exactly defined calculations from a well-established standard, like say AES or H.264. Even in a supercomputer the job just isn't static enough; if the researchers have to tweak the algorithm, are you going to build a new computer? You have parameters, but the moment they say "oh, and we have to add a new correction factor here" you're totally screwed. Not going to happen.

Already happening (0)

Anonymous Coward | more than 2 years ago | (#38861705)

See the Anton Supercomputer.

It's the fastest molecular dynamics supercomputer in the world by a very wide margin. It is also the most power efficient supercomputer in the world in terms of ops/Watt, probably by about a factor of 100. It uses heavily-specialized custom processors.

HPC is not consumer general-purpose computing; specialization makes a lot of sense for HPC. Consider:

1. There are only maybe 5 applications which drive nearly all the $ in HPC (MD/CFD/etc.)
2. A high-end general-purpose supercomputer costs >$100 million. You can design a full custom supercomputer which will be much faster for $100 million (again, see the Anton example, their hardware team is ~30 people).

Re:Already happening (0)

Anonymous Coward | more than 2 years ago | (#38862945)

The application space at the high end is not nearly as constrained as you imply. The fastest system in the US (based on the November 2011 Top 500 rankings) is at Oak Ridge National Labs, which is an open science installation. They have hundreds, if not thousands, of researchers from all over, and they are all doing something different. Yes, there are groups that only do one type of computation, and it may make sense for them to look into a full custom design, but that is the exception, not the rule, at the high end...

Look more closely. (0)

Anonymous Coward | more than 2 years ago | (#38865227)

You'll find that the bulk of the compute time is spent on a small number of unique types of tasks.

Re:Turing Tax (0)

Anonymous Coward | more than 2 years ago | (#38865859)

"Actually, computers have lost lots of dedicated processing units because it just wasn't worth doing in dedicated hardware, that's where for example softmodems (aka winmodems) came from."

Lost "lots"? What else have we lost?

MMX, SSE, floating point?

PS3? Sure it's difficult to program but that's another matter.

Have a look inside a mobile phone: most of the hardware is specialized.

Softmodems exist because a modem's power and performance requirements are low compared to current CPUs. That was not the case when modems first came to market. You won't find many soft ADSL modems, though.

It can be more efficient to do things on general-purpose hardware depending on the application (i.e. no stellar floating point performance on your mobile phone) and the state of development of the technology.

"And with GPUs going from fixed pipelines to programmable shader units, they too have gone the other way."

It is doubtful that nowadays (given quality and performance requirements) GPUs could fulfill their (primary) purpose without programmable shader units. And we sure don't do it on general purpose CPUs.
It's just that it turned out the kinds of calculations required for 3D rendering are also useful for other purposes, and a *relatively small* modification of GPUs allows for that. I'd hardly call that "GPUs going the other way" - GPUs are far from general purpose, even though slightly more general purpose than they used to be.

Once upon a time we started off with only integer calculation.

The long-term trend is obviously toward specialized hardware.

Re:Turing Tax (1)

tlhIngan (30335) | more than 2 years ago | (#38866111)

Actually, computers have lost lots of dedicated processing units because it just wasn't worth doing in dedicated hardware, that's where for example softmodems (aka winmodems) came from. And with GPUs going from fixed pipelines to programmable shader units, they too have gone the other way.

It's cyclical - going from specialized to general to specialized, etc.

Early computers used character generator chips - specialized processors that took ASCII(ish) inputs and generated the onscreen information. This evolved into chips handling video acceleration with sprites and all that.

Then we evolved into putting it back in the processor and using video as a mere framebuffer - so we could have nice higher-resolution graphics. We could scale framebuffer cards to impressive resolutions when we didn't have to worry about dedicated hardware (sprites basically moved to complete software support). Then Windows came out and video specialization happened again with 2D acceleration (usually BitBlt).

Then we added 3D, which we initially did in software and migrated to hardware, and video, which went from postage-stamp-sized QVGA or less to full screen HD.

And today's video card isn't just a mere framebuffer like it was in the 90s; it's a full-fledged specialized vector computer, turning stuff that was once in dedicated hardware blocks into general programs it runs to do what the hardware used to do. So even though we've specialized the video card, the video card itself generalized and now we're writing code for it.

It's a cycle. Specialized blocks are replaced by more general ones, which then spawn more specialized blocks that are replaced ...

Computronium (1)

Curunir_wolf (588405) | more than 2 years ago | (#38858991)

So they're researching how to create computronium? Will we then turn the whole solar system into a Matrioshka brain and all live in a virtual world?

Re:Computronium (1)

marcosdumay (620877) | more than 2 years ago | (#38859109)

First we'll need nuclear fusion and some kind of autonomous robots.

And if we ever want that brain to be ours, we'd better get a deep understanding of neurology and neural implants before we get those autonomous robots...

Re:Computronium (1)

SuricouRaven (1897204) | more than 2 years ago | (#38859199)

Getting the level of detail from a brain you'd need to simulate it might be less a matter of implants than destructive readout. Slice-and-scan.

Re:Computronium (2)

Curunir_wolf (588405) | more than 2 years ago | (#38860213)

Getting the level of detail from a brain you'd need to simulate it might be less a matter of implants than destructive readout. Slice-and-scan.

Well, right. That also eliminates the potential issues from having duplicate persons in virtual space and meat space.

they should talk to TI (3, Funny)

Gravis Zero (934156) | more than 2 years ago | (#38859133)

TI's line of MSP430 chips runs using little solar cells. Hell, they practically run on their own self-esteem. So scale that technology up and bam, you've got a supercomputer that runs on a couple of AA batteries.

Re:they should talk to TI (0)

Anonymous Coward | more than 2 years ago | (#38860893)

You got me! I was absolutely sure your UID would be > 1million. Can't say I was off by much.

Do the math. Flops/watt is what we're talking about here. Find the flops of the TI chip, and see what wattage it uses...you'll be able to see how well the tech would scale.

Low power and low flops is one thing. Low power and high flops is another.

I would be impressed (0)

WindBourne (631190) | more than 2 years ago | (#38859137)

if this was applied to American companies and western manufacturing ONLY. Sadly, the neo-cons will push for this to be applied to everybody, esp. China.

Re:I would be impressed (0)

Anonymous Coward | more than 2 years ago | (#38859451)

neo-cons? really? Think you have the wrong thread...

Re:I would be impressed (1)

WindBourne (631190) | more than 2 years ago | (#38861291)

The ones that have pushed to have tech transferred to China have NOT been the Dems. It was the neo-cons when W was in office. While the neo-cons are out of the administration, they point fingers at O/Dems, but the truth is that much of America's self-destructive industrial policy was put into place by W/neo-cons. The question is, when will O/Dems roll it back? The only way that they can do this is if they will finally balance the budget. At the least, get it below $0.5T.

Re:I would be impressed (1)

garyebickford (222422) | more than 2 years ago | (#38864217)

Haha. Look up Clinton selling the Chinese our missile guidance technology. It enabled the Chinese to build a space program, and provide cheaper launch services, and also gave them the essentials to build accurate ICBMs. Some folks considered it treasonous.

Re:I would be impressed (1)

WindBourne (631190) | more than 2 years ago | (#38865781)

You mean Magnequench? The technology that allows us to fly missiles relatively accurately without using GPS? The tech that China wanted, and that the DOD and Clinton would NOT allow to be transferred? But when W/neo-cons got into office, the DOD approved it. You mean that technology transfer?

Or are you talking about Hughes and Loral, who ILLEGALLY transferred tech to China and were CONVICTED of such?

There IS treason, but it sure as heck was not Clinton. Sadly, the treason continues to this day.

Re:I would be impressed (1)

garyebickford (222422) | more than 2 years ago | (#38866193)

A few quotes:
Some guy with an axe to grind about Obama doing the same thing [onecitizenspeaking.com] :

In 1996, President Bill Clinton personally signed an executive order transferring control of satellite technology to the Department of Commerce; thus releasing restraints on a wide variety of sophisticated space and missile technology which were then exported to China.

CNN, 1998/05/22, [cnn.com]

WASHINGTON (May 22) -- President Bill Clinton on Friday defended a controversial satellite deal with China, even as White House officials delivered documents to the House International Relations Committee about the arrangement.

The president said the deal to launch U.S. satellites on rockets owned by other nations was "correct" and "based on what I thought was in the national interest and supportive of our national security."

Newsmax, 2003/9/29 [newsmax.com] :

Newly declassified documents show that President Bill Clinton personally approved the transfer to China of advanced space technology that can be used for nuclear combat.

The documents show that in 1996 Clinton approved the export of radiation hardened chip sets to China. The specialized chips are necessary for fighting a nuclear war.

"Waivers may be granted upon a national interest determination," states a Commerce Department document titled "U.S. Sanctions on China."

"The President has approved a series of satellite related waivers in recent months, most recently in November, 1996 for export of radiation hardened chip sets for a Chinese meteorological satellite," noted the Commerce Department documents.

These special computer chips are designed to function while being bombarded by intense radiation. Radiation hardened chips are considered critical for atomic warfare and are required by advanced nuclear tipped missiles.

Judicial Watch obtained the documents through the Freedom of Information Act, a Washington-based political watchdog group.

As I recall from the time, a lot of folks in the military and intelligence communities who were 'in the loop' were really vocal about this. It's been a long time so I don't recall too many details, so this will have to do.

Re:I would be impressed (1)

CapOblivious2010 (1731402) | more than 2 years ago | (#38859573)

if this was applied to American companies and western manufacturing ONLY. Sadly, the neo-cons will push for the to be applied to everybody, esp. China.

Yeah, I hate it when those xenophobic racist neo-cons push to share our advances with China (who has the fastest-growing need for power and one of the dirtiest power sources, namely coal). Hell, a move like that just might reduce poverty AND pollution, and no one wants that!

Thank god we liberals know that the only way to make the world a better place is to reflexively oppose not just everything the conservatives do, but everything we imagine they might do!

Re:I would be impressed (1)

WindBourne (631190) | more than 2 years ago | (#38861279)

Of course, I am not a liberal. But the problem is that China is cheating at the WTO/IMF/FTAs. So why pay for tech like this and then allow it to simply transfer there because a bunch of neo-cons say to?

Re:I would be impressed (1)

CapOblivious2010 (1731402) | more than 2 years ago | (#38863479)

OK, maybe China's cheating (by giving us goods/services at below cost - those bastards!)

But how did "neo-cons" get involved? I didn't see them (or any mention of politics at all) anywhere in this story until you conjured them up out of thin air. Let me guess: the reason you're not a liberal is because the liberals are way too far to the right - correct?

Besides, the last time I looked, the neo-cons had been out of power for several years, so I don't think you have anything to worry about.

P.S. Neo-cons have been "neo-cons" for 10+ years now, so that's not very new - does the "neo" prefix ever expire?

Re:I would be impressed (1)

WindBourne (631190) | more than 2 years ago | (#38865633)

First off, the neo-cons are VERY much alive and doing well. They control the Republican party. "Neo-con" was originally applied to Dems who switched to the Republican party; Reagan was the head of that. The big difference is that the group that adopted Reagan's beliefs (changing the Republican party's core beliefs dramatically) called themselves neo-cons. IOW, they declared themselves a group by doing that. That includes not just Reagan, but those that followed him such as W., Cheney, Rove, Rumsfeld, Boehner, Cantor, etc. (though not Republicans like Poppa Bush, Nixon, Ford, etc.; basically, Republicans from before Reagan). Even to this day, those people remain in control of the Republican party.

Secondly, I am a registered Libertarian and support the party. HOWEVER, I am also not so stupid as to ignore TANSTAAFL. Why would China offer us goods/services below cost, especially since the vast majority of Chinese leaders are opposed to the west, and they pour more money into their military than does America [cia.gov]?

China IS cheating. And it is designed to destroy America by denying us the industry that allowed America to maintain a solid economic foundation. The WTO, the IMF, and even our FTA with them say that neither side will cheat the way that China has. And yet we allow it. Worse, the neo-cons CONTINUE to push it. They fight against any punishment of China. Likewise, the 2009 'investment' was originally about buying American goods. Why? Because Germany and China made their investments all about THEIR nation. Over and over it has been the neo-cons who fight to keep offshoring more to China, with a coming disaster for America (and the west).

we already know (0)

Anonymous Coward | more than 2 years ago | (#38859469)

the problem is finding a superconductor that will operate at room temperature

Re:we already know (1)

CapOblivious2010 (1731402) | more than 2 years ago | (#38859609)

the problem is finding a superconductor that will operate at room temperature

That insight, and $4.99, will get you a cup of coffee at Starbucks.

Maybe if we had a bunch of high-power supercomputers (ideally with low power consumption), we could run more atomic- or quantum-level simulations, research, etc, and find such a room-temperature superconductor!

P.S. On a related note, I have a great idea for improving cities, reducing pollution, and eliminating commute times: just invent teleportation! It would be much more effective than wasting time with incremental side projects like hybrids, smart traffic management, public transportation, etc.

Bitcoin difficulty increase time... (0)

Anonymous Coward | more than 2 years ago | (#38859575)

And a bit of extra inflation due to coins being released a bit faster before each retarget of course.

Power Consumption (2)

AnotherAnonymousUser (972204) | more than 2 years ago | (#38859823)

Is there any sort of rule of thumb when measuring power consumption - ie, X amount of processing uses Y blocks of power? Is there a theoretical minimum requirement of energy to perform certain types of calculations?

Re:Power Consumption (3, Informative)

alreaud (2529304) | more than 2 years ago | (#38860689)

Yes, and it's actually very simple (SI units):

P = C*V^2*f where P is power in Watts, C is capacitance in farads, V is voltage in volts, and f is frequency in Hertz. C is kind of hard to measure, and is dynamic depending on processor load. A design value can be determined from processor data sheets.

Power is only consumed in MOS transistors during transitions, to the value I = C*dv/dt, where C is the overall transistor capacitance to the power supply, in this instance. If dv is 0, i.e. at a stable logic level, then I must also be 0, and hence power dissipation must be zero due to P = I*V.

At 1.3V and 2.8GHz, the dv/dt multiplier becomes 4.73*10^9, significant for a 100-million transistor microprocessor even if overall capacitance/transistor ~femtofarads.
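A quick numeric check of that formula (the effective switched capacitance below is an assumed value picked to give a plausible chip-level figure, not a datasheet number):

    # Dynamic (switching) power: P = C * V^2 * f
    def dynamic_power_watts(c_farads, v_volts, f_hertz):
        return c_farads * v_volts ** 2 * f_hertz

    C_EFF = 15e-9   # 15 nF effective switched capacitance, assumed
    V = 1.3         # volts, as in the comment above
    F = 2.8e9       # hertz, as in the comment above

    print(f"{dynamic_power_watts(C_EFF, V, F):.0f} W of switching power")  # ~71 W

As the replies note, leakage current adds to this on modern processes, so switching power is only part of the budget.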

Re:Power Consumption (0)

Anonymous Coward | more than 2 years ago | (#38864985)

You forgot leakage current, which in modern designs is comparable to the switching power.

But, to answer the original question about a theoretical limit, yes there is. Turning disorder into order takes energy, so you can approach it from a thermodynamics standpoint. You compare the amount of entropy in the data against what the amount would be if the data were totally random, etc.
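The standard way to put a number on that thermodynamic floor is Landauer's principle: erasing one bit dissipates at least kT·ln 2. A worked example (room temperature is the only assumption):

    # Landauer limit: minimum energy to erase one bit is k_B * T * ln(2).
    import math

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # kelvin, roughly room temperature (assumed)

    e_bit = K_B * T * math.log(2)
    print(f"{e_bit:.2e} J per bit erased")                # ~2.87e-21 J
    print(f"{e_bit * 1e18:.4f} J for 1e18 bit erasures")  # ~0.0029 J

Real logic today dissipates many orders of magnitude more than this per bit, which is why the limit is a floor rather than a near-term constraint.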

Re:Power Consumption (1)

Chris Burke (6130) | about 2 years ago | (#38868855)

You forgot leakage current, which in modern designs is comparable to the switching power.

Yeah, chip makers really wish power was only consumed on the transitions.

But, to answer the original question about a theoretical limit, yes there is. Turning disorder into order takes energy, so you can approach it from a thermodynamics standpoint.

If your computer solely makes use of reversible calculations, you can reduce the power consumed to arbitrarily low levels using adiabatic circuits. Unfortunately "arbitrarily low power" comes commensurate with "arbitrarily long computation time", so it's not necessarily a way to get TFLOPS/watt.

Re:Power Consumption (1)

alreaud (2529304) | more than 2 years ago | (#38877825)

Thanks, it's been a while, LOL...

New transistors tech (0)

Anonymous Coward | more than 2 years ago | (#38860897)

I would bet they are going to help fund things like tri-gate or FinFET transistors, which are already known. This will be one of those projects that's really just helping industry make the next (known) move forward. If money flows to Intel, IBM, AMD, Freescale, etc., this will be the case. Or it could result in something interesting...

Sustainability=reliability (1)

neurosine (549673) | more than 2 years ago | (#38863289)

The real concern is how to maintain mission-critical applications when the power grid fails. The only fallback outside of tons of fuel (and even that won't last for decades) is a sustainable solution.

Truth is stranger than fiction (1)

biodata (1981610) | more than 2 years ago | (#38864409)

The machines already solved this problem in the fictional world.

the matrix is coming (1)

bwanaaa (653461) | more than 2 years ago | (#38865779)

Everyone who checks in at a hospital should have their electrical energy harvested! That way we can pay for healthcare, and all the fancy computers, electronic medical records, etc. that are needed in hospitals these days. Even though mortality has not been reduced by any of these measures.