
Impressive GPU Numbers From Folding@Home

kdawson posted more than 7 years ago | from the that's-fast dept.


ludd1t3 writes, "The Folding@Home project has put forth some impressive performance numbers with the GPU client that's designed to work with the ATI X1900. According to the client statistics, there are 448 registered GPUs that produce 29 TFLOPS. Those 448 GPUs outperform the combined 25,050 CPUs registered by the Linux and Mac OS clients. Ouch! Are ASICs really that much better than general-purpose circuits? If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?"


201 comments

Lopsided Alright.. (2, Interesting)

ackthpt (218170) | more than 7 years ago | (#16431135)

Those 448 GPUs outperform the combined 282,111 CPUs registered by the Linux and Mac OS clients. Ouch! Are ASICs really that much better than general-purpose circuits? If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?"

That's pretty lopsided, but I suppose some of it could be explained away by GPUs not chewing through OS code and having to play nice for memory, so they'd be a bit more efficient. Could be most of those Linux and Mac OS systems are long in the tooth, but I suspect someone's missed a few decimal places somewhere. I do love how quickly a theory is posed and the OP starts to run with it. E.g. I look at the balance of my checking account, see there's $1,000 more in there than I expect, and immediately form the hypothesis that it's money to spend, without considering whether my rent check has gone through yet. Could be a rough time ahead if I went shopping with it. Either that or the GPU computers are on more than the others.

Whoops, used an old pentium for the math, never mind.

Re:Lopsided Alright.. (3, Insightful)

blahplusplus (757119) | more than 7 years ago | (#16431183)

I have a feeling this is memory bandwidth related; modern GPUs have insane amounts of memory bandwidth compared to the wide range of desktops out there. Not to mention the parallelism.

Re:Lopsided Alright.. (1)

apoc.famine (621563) | more than 7 years ago | (#16431287)

And I have a feeling that this is also use-related. There are a lot more things which demand processor power than GPU power. I bet there are tons more spare cycles on a GPU than on a CPU. I mean really - what maxes out a GPU? I'm guessing just a handful of games, while many other things rely on the CPU quite heavily.

Re:Lopsided Alright.. (1)

yabos (719499) | more than 7 years ago | (#16431489)

I don't really think that's the reason. GPUs have very fast RAM, lots of it, and are dedicated to specific tasks they're very fast at. CPUs carry lots of circuitry that has nothing to do with what the GPU is designed for, general-purpose logic and whatnot. Dedicated hardware is always way better than general-purpose hardware at its dedicated task.

Re:Lopsided Alright.. (4, Informative)

throx (42621) | more than 7 years ago | (#16431541)

It has nothing to do with memory bandwidth or use. The ASIC is about 1000 times faster than the CPU because it uses dedicated hardware designed to run 3D image processing very fast and in parallel, which is almost exactly the same problem as folding proteins.

Unless you are saying all CPUs are pegged at 99.9% utilization, or the GPU has memory three orders of magnitude faster, you're just looking at effects that make a few percent difference here and there. The simple fact is the GPU is insanely faster at solving specific problems (3D processing) while it simply cannot ever run an operating system.

Re:Lopsided Alright.. (1)

Pollardito (781263) | more than 7 years ago | (#16431587)

I'm sure load difference is part of the picture, but it can't be a dominating factor: it doesn't seem likely that the load difference even approaches the 50x performance gap (if a non-Folding CPU averaged only 2% of its cycles free, the GPU would have to be completely idle to get a 50x difference).

I'm guessing that the better parallelization in the GPU, together with the fact that the average GPU participating in Folding is much more modern than the average CPU, makes up the bulk of the difference.
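A quick back-of-envelope sketch (Python) of why spare cycles alone can't plausibly account for a 50x gap; the utilization numbers below are illustrative assumptions, not measurements.

# How much of a 50x throughput gap could spare cycles alone explain?

def spare_cycle_ratio(cpu_busy_fraction, gpu_busy_fraction):
    """Ratio of idle capacity available to Folding on a GPU vs. a CPU."""
    cpu_spare = 1.0 - cpu_busy_fraction
    gpu_spare = 1.0 - gpu_busy_fraction
    return gpu_spare / cpu_spare

# Only the extreme case (a CPU 98% busy with other work vs. a completely
# idle GPU) reaches a 50x factor:
print(spare_cycle_ratio(cpu_busy_fraction=0.98, gpu_busy_fraction=0.0))   # 50.0

# A more realistic desktop (say 20% busy) vs. a mostly idle GPU explains
# only a small part of the gap:
print(spare_cycle_ratio(cpu_busy_fraction=0.20, gpu_busy_fraction=0.05))  # ~1.19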

Lopsided Alright..ASIC and yea shall receive. (0)

Anonymous Coward | more than 7 years ago | (#16431189)

I'm waiting for the clients that use all the other ASICs in modern computers, e.g. the sound card.

Re:Lopsided Alright..ASIC and yea shall receive. (1)

ackthpt (218170) | more than 7 years ago | (#16431315)

I'm waiting for the clients that use all the other ASICs in modern computers, e.g. the sound card.

IDE controllers, HDD controllers, modem...

Last night I saw it make a move for my iPod and Nikon D70s! I'm drawin' the line there!

Re:Lopsided Alright..ASIC and yea shall receive. (1)

Al Dimond (792444) | more than 7 years ago | (#16432437)

The reason they use GPUs is that they're very powerful and better-suited to this type of computation than the CPU. The other specialized chips aren't. Maybe some of those new physics accelerators could be used, though I don't know enough about them to know if they'd be useful or not.

Re:Lopsided Alright.. (3, Interesting)

oringo (848629) | more than 7 years ago | (#16431711)

You can look at the statistics many ways. Here's the GFLOPS per active CPU, categorized by OS:

1. GPU: 65.463
2. Linux: 1.219
3. Windows: 0.948
4. Mac: 0.511

Of course, the GPU beating the hell out of the CPU in such tests is no surprise. It's pretty much a massively parallel vector engine. I'm more interested in seeing how the PS3 holds up against all the other guys when it comes out. They have a Folding client for the PS3 already.
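For a rough sense of where per-client figures like these come from, here is a small Python sketch dividing aggregate throughput by active client counts. The totals are assumptions pulled from numbers quoted elsewhere in this discussion (29 TFLOPS from 448 GPUs, 149 TFLOPS from roughly 157,000 Windows clients), not the live stats page.

# Rough reconstruction of per-client throughput (GFLOPS per active client)
# from aggregate numbers quoted elsewhere in this thread (assumed figures).

totals_tflops  = {"GPU": 29.0, "Windows": 149.0}   # aggregate TFLOPS (assumed)
active_clients = {"GPU": 448,  "Windows": 157101}  # active client counts (assumed)

for platform in totals_tflops:
    gflops_per_client = totals_tflops[platform] * 1000 / active_clients[platform]
    print(f"{platform}: {gflops_per_client:.3f} GFLOPS per client")

# GPU: ~64.7 GFLOPS per client, Windows: ~0.95 -- in line with the
# 65.463 and 0.948 figures listed above.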

Re:Lopsided Alright.. (0)

Anonymous Coward | more than 7 years ago | (#16431741)

From what I heard, there are problems getting threading working properly, so the PS3 client only runs at about 30% of its eventual speed (they're expecting it to about double the Windows performance).

Re:Lopsided Alright.. (1)

Firehed (942385) | more than 7 years ago | (#16431939)

It'll probably put out some pretty impressive numbers, except for two problems. One: nobody will be able to buy one (or more importantly, nobody who would run F@H will, since so many are the Sony-hating Slashdotter type). Two: most people have their console off when it's not being used for gaming. And I don't expect it would do much number crunching beyond the game if it lives up to the hype.

Re:Lopsided Alright.. (1)

sbaker (47485) | more than 7 years ago | (#16432433)

A modern GPU might have as many as 48 "fragment shader" processors inside it - so there are really around 21,500 processors versus 282,000 CPUs. Then each GPU processor works with full four-way arithmetic parallelism - it can do arithmetic and data-move operations on four numbers just as fast as on one. So with the right mapping of algorithm to processor, you have 86,000 floating-point arithmetic units... they only need to be about 3x faster than the CPUs.

But these are not general-purpose processors - in some respects they are horribly, horribly limited.

So if you have an algorithm that is enough like rendering polygons with textures, the GPU is just insanely fast... if your algorithm isn't enough like that, then you may not be able to run it on the GPU at all.
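As a sanity check on that arithmetic, a tiny Python sketch using the comment's own assumptions (448 GPUs, 48 shader units each, four-wide math, 282,000 CPU clients):

# Treat each GPU as many narrow processors rather than one.
gpus            = 448
shaders_per_gpu = 48       # fragment shader units on an X1900-class part
simd_width      = 4        # each shader does 4-wide float math per op
cpus            = 282_000  # CPU count quoted in the grandparent post

gpu_fp_units = gpus * shaders_per_gpu * simd_width
print(gpu_fp_units)        # 86016 floating-point lanes
print(cpus / gpu_fp_units) # ~3.3 -- each lane only needs to be a few times
                           # faster than a CPU to explain the headline numbers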

Re:Lopsided Alright.. (1)

Jeffrey Baker (6191) | more than 7 years ago | (#16432491)

This only makes sense if you are going to count all the ALUs and SIMD units separately for the CPUs, too. Your basic CPU can issue at least two floating point calculations in parallel and/or use SIMD units to operate on vectors as large as 128 bits. So the capabilities you ascribe to a GPU are not uncommon in a CPU.

The differences come in the quantity, not the kind. A CPU gives over a lot of transistors to caches and complex logic units. A GPU does not care much about logic and lacks caches.

Re:Oh for gods sake (0)

Anonymous Coward | more than 7 years ago | (#16432487)

A hammer is optimised for the task of hammering nails into boards; it will do this task significantly better than, say, a screwdriver.

Same with a GPU: a GPU is designed for one specific task (or a number of specific tasks), and among them is folding. Not because of Folding@Home, mind you, but because folding is one of the "operators" in image processing.

Now someone is using a GPU for the task for which it was designed; frankly I'm actually surprised they didn't get more out of it.

So obvious... (1)

creimer (824291) | more than 7 years ago | (#16431141)

Everyone knows that ASCII text is faster than binary.

Re:So obvious... (5, Funny)

Iron Condor (964856) | more than 7 years ago | (#16431167)

ASCII silly question, get a silly ANSI.

Re:So obvious... (2, Funny)

Anonymous Coward | more than 7 years ago | (#16431381)

- Cool, can I play too?
- No, that's what consoles are for.

Re:So obvious... (4, Funny)

Jesus_666 (702802) | more than 7 years ago | (#16432193)

UTF are you talking about? I'm quite sure the mods are not latin-1 post like this go unmoderated.

Are ASICs really that much better? (5, Funny)

Anonymous Coward | more than 7 years ago | (#16431159)

Are ASICs really that much better than general-purpose circuits?

Generally ASICs are much better than general-purpose circuits except in general cases.

Re: Are ASICs really that much better? (1)

/ASCII (86998) | more than 7 years ago | (#16431537)

Exactly. The original question is stupid. An ASIC can be orders of magnitude faster for an FFT, but you can't write a word processor for it.

Re: Are ASICs really that much better? (3, Insightful)

Jeffrey Baker (6191) | more than 7 years ago | (#16431621)

Good one ... but I also wonder why anyone is throwing around the term "ASIC" in this article. A GPU is obviously not an application-specific circuit, which is clearly shown by the fact that it can be programmed to process graphics, protein folding, or numerous other tasks. A GPU is a general-purpose processor like a CPU; it just happens to have different numbers and kinds of execution units.

Re: Are ASICs really that much better? (1)

doublebackslash (702979) | more than 7 years ago | (#16432511)

Okay, this far down the page and nobody mentions it.
GPUs are designed to perform floating-point operations on 4x4 matrices of floating-point numbers. This lets them do the math required to scale, rotate, and project 3D vectors onto a 2D surface. Follow so far? These circuits not only have fast memory ties and huge parallelism, they are also hard-wired to perform some of the exact same operations required by FAH in only a few clock cycles instead of 44 (on the P4; 14 on the Opteron).
Being massively parallel means that instead of two FLOPS per clock the GPUs can push out hundreds, and results come back quickly, so any steps waiting on the result of a previous operation can execute sooner.
GPUs are very specific in their power, and it is no surprise to anyone who has played a game in software-only mode vs. hardware-accelerated mode that custom hardware is insanely fast.
Memory bandwidth is a factor, but not nearly the factor that people make it out to be.
It has NOTHING to do with not running the OS; if an OS had that much overhead we would be screwed.
Get a clue, people: custom hardware is good at custom things. Plain. Simple.

Also, research instead of speculating, or wait for someone to research for you. Don't throw assumptions out there; it's bad for the community.
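For readers who haven't written shader code, the operation being described - one 4x4 transform applied to a large batch of 4-component vectors - looks roughly like this NumPy sketch. It only illustrates the math; it is not GPU code, and the matrix is a made-up example.

import numpy as np

# Apply one 4x4 transform (here, a translation by (2, 3, 4)) to a large
# batch of homogeneous points.  A GPU shader unit does each of these
# matrix-vector products in a handful of cycles, many at a time.
transform = np.array([[1, 0, 0, 2],
                      [0, 1, 0, 3],
                      [0, 0, 1, 4],
                      [0, 0, 0, 1]], dtype=np.float32)

points = np.random.rand(1_000_000, 4).astype(np.float32)
points[:, 3] = 1.0                  # homogeneous coordinate

transformed = points @ transform.T  # one matrix-vector product per point
print(transformed.shape)            # (1000000, 4)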

IBM right all along, or obvious? (2, Funny)

Gothmolly (148874) | more than 7 years ago | (#16431165)

Custom app written to run on hardware specifically designed to run apps like it, outperforming general purpose CPUs? Newsflash from Ric Romero!!1!

Distributed amongst home users (4, Funny)

Skevin (16048) | more than 7 years ago | (#16431169)

So, will someone please create a really pretty 3D screensaver representing the folding calculation process? I'd love to see a representation with hi-res lighting and texturing, full transforms, and user-scalable views at 400 million triangles/sec. Thanks.

Solomon

Re:Distributed amongst home users (4, Interesting)

Enderandrew (866215) | more than 7 years ago | (#16431243)

The folding team has done this, and it will be a free download for the PS3 version. The Cell processor runs the Folding application itself, and the graphical representation of the protein folding calculations will be handled by the GPU with a pretty display.

Re:Distributed amongst home users (1)

noidentity (188756) | more than 7 years ago | (#16431785)

Do they run anything useful on the TV itself, or is this a nice way to waste electricity 24/7?

Re:Distributed amongst home users (3, Funny)

holysin (549880) | more than 7 years ago | (#16431857)

There is this lovely feature called a power button. There's also this really handy feature on most TVs since the mid-late '90s called multiple inputs.

Re:Distributed amongst home users (1)

Enderandrew (866215) | more than 7 years ago | (#16431961)

Yes, I said it has a nifty display.

And clearly, attempting to cure cancer and other such diseases is merely a waste of electricity.

Re:Distributed amongst home users (-1)

Anonymous Coward | more than 7 years ago | (#16432099)

What's that smell? The smell of thousands of PS3's overheating.

Re:Distributed amongst home users (0)

Anonymous Coward | more than 7 years ago | (#16432463)

So my Linux CPU will be powering up some umpteen year old's wiz-bang PS3 graphix? No wonder us Linux folks are singing the blues...

Uhhh... (0)

Anonymous Coward | more than 7 years ago | (#16431197)

Stats page shows Windows clients putting up 149 TFlops, GPUs only 29. What kind of crack are you smoking?

Re:Uhhh... (0)

Anonymous Coward | more than 7 years ago | (#16431241)

The summary says nothing about Windows, just Linux and MacOS. What are YOU smoking?

Re:Uhhh... (1)

kextyn (961845) | more than 7 years ago | (#16431277)

Uhh...did you read the whole thing where it mentioned ABSOLUTELY NOTHING about Windows.

Re:Uhhh... (1, Informative)

Anonymous Coward | more than 7 years ago | (#16432589)

The stats page shows 157101 active CPUs for Windows and only 442 GPUs. The average GPU is about 70 times more productive ;-).

Obvious? (1, Insightful)

Iron Condor (964856) | more than 7 years ago | (#16431225)

Maybe I'm missing some subtlety in the OP somewhere, but if GPUs weren't better at what they're doing than CPUs, there wouldn't be a point in having a GPU in the first place.

...and if you have a problem that can be expressed in terms of the problem space the GPU is designed to handle, then that problem is going to run faster on the GPU than on the CPU.

Re:Obvious? (1)

Wesley Felter (138342) | more than 7 years ago | (#16431319)

GPUs are designed for graphics, but folding isn't graphics. That's why it's non-obvious.

Re:Obvious? (0)

Anonymous Coward | more than 7 years ago | (#16431913)

Not really. It's been known that GPUs can manipulate matrices really fast (that's the part that makes them fast for graphics), and so any problem involving matrices (I think 3x3, or 4x3?) can be done with GPUs. Once that is recognized, other matrix problems can be offloaded onto it.

Re:Obvious? (0, Offtopic)

m0rph3us0 (549631) | more than 7 years ago | (#16431417)

Awesome sig.
I wanted to mod you up, but then I realized I couldn't tell you why.
So sorry.

Re:Obvious? (1, Offtopic)

SeaFox (739806) | more than 7 years ago | (#16431815)

Awesome sig.
I wanted to mod you up, but then I realized I couldn't tell you why.

Except you can run Windows on a Mac now, so it can play games. So Macintosh hardware can do everything a Mac can do and everything a PC can do, making the Mac the superior hardware choice.

Re:Obvious? (1)

WilliamSChips (793741) | more than 7 years ago | (#16432029)

Until you realize that Asus computers are made in the same Chinese factory as Apple's, are cheaper for the same hardware, and aren't made of ugly white plastic. I can stand white plastic in small things like iPods, but for a whole computer it's plain ugly.

Re:Obvious? (1)

DRJlaw (946416) | more than 7 years ago | (#16432111)

So Macintosh hardware can do everything a Mac can do and everything a PC can do, making the Mac the superior hardware choice.

News flash: PC hardware can do everything a Mac can do and everything a PC can do, making crippled Mac operating software the inferior software choice. Please review the thousands of posts to the OSX86 project immediately after Apple released MacIntel hardware, and before they tightened down the screws on their software interface to TPM authentication.

Re:Obvious? (0)

Anonymous Coward | more than 7 years ago | (#16432391)

> So Macintosh hardware can do everything a Mac can do and everything a PC can do, making the Mac the superior hardware choice. News flash. PC hardware can do everything a Mac can do and everything a PC can do, making crippled Mac operating software the inferior software choice.

BWA HA HA HA HA... you're joking right? No?

Oh, so you're just stupid then. OS X is the second-best OS on the planet (behind Linux), and the easiest to use.

> Please review the thousands of posts to the OSX86 project immediately after Apple released MacIntel hardware, and before they tightened down the screws on their software interface to TPM authentication.

What are you, some sort of retard? You can still run the latest versions of OS X on PCs. See http://wiki.osx86project.org/wiki/index.php/Installation_Guides [osx86project.org] for help.

Re:Obvious? (0)

Anonymous Coward | more than 7 years ago | (#16431695)

If programmers were to spend the same amount of effort optimizing their code in assembly instead of lumping in bloated code & frameworks, maybe we'd see a better comparison.

For which purpose? (1)

Enderandrew (866215) | more than 7 years ago | (#16431271)

The purpose of a general-purpose CPU is to handle all kinds of calculations. For this task, which is very specific, a GPU may well be that much better.

GPUs are Specialized Parallel Computers (4, Insightful)

ThinkFr33ly (902481) | more than 7 years ago | (#16431273)

GPUs are, for the most part, highly specialized parallel computers [wikipedia.org]. Virtually all modern CPUs are serial computers; they do essentially one thing at a time. Because of this, most modern programming languages are tailored to serial processing.

Making a general purpose parallel computer is very, very hard. It just so happens that you can use things like shaders for more than just graphics processing, and so via OpenGL and DirectX you can make GPUs do some nifty things.

In theory, and indeed often in practice, parallel computers are much, much faster than their serial counterparts. Hence the reason a GPU that costs $200 can render incredible 3D scenes that a $1000 CPU wouldn't have a prayer trying to render.

Re:GPUs are Specialized Parallel Computers (4, Informative)

dslauson (914147) | more than 7 years ago | (#16431501)

Yes. That's basically right.

Here's a Wikipedia article [wikipedia.org] on general purpose GPU processing.

Folding is what's known as a ridiculously parallel problem. That is, it can be broken up into small subproblems that can be distributed among many processors with a minimal amount of communication between them. It also benefits from not requiring a lot of branching (if/switch statements and such), which GPUs generally do not handle well.

Many problems (I'd argue MOST problems) do not fit these restrictions well. So, while a GPU is well suited to crunching away on pieces of the folding problem, it's going to be lousy at the day-to-day stuff you do with your computer.
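To make "ridiculously parallel" concrete, here is a minimal Python sketch: independent work units handed to a pool of worker processes, with no communication between them until the results are gathered. simulate_work_unit is a hypothetical stand-in for the real folding kernel, not anything from the actual client.

from multiprocessing import Pool

def simulate_work_unit(seed: int) -> float:
    # Toy computation: a long, branch-free arithmetic loop per work unit.
    x = float(seed)
    for _ in range(100_000):
        x = (x * 1.0000001 + 0.5) % 1000.0
    return x

if __name__ == "__main__":
    work_units = range(32)
    with Pool() as pool:                 # one worker process per core
        results = pool.map(simulate_work_unit, work_units)
    print(len(results), "work units completed")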

I KNEW it! (1, Funny)

Anonymous Coward | more than 7 years ago | (#16431307)

Macs and Linux suck. This is SCIENTIFIC proof.

This is the perfect time... (2, Interesting)

loraksus (171574) | more than 7 years ago | (#16431311)

... to start heating your house with your computers ;)

I actually installed boinc with seti on several of my machines last night and it worked quite well to heat part of the house (us Canadians need to turn the heater on earlier). Took a bit of time to get started, but it was nice and toasty in the morning.

Does anyone know if this method is less efficient in generating heat than using a space heater? Slower perhaps...
If you're going to use energy by turning on the wall heater anyways, why not use it to crunch some numbers?

Re:This is the perfect time... (0)

Anonymous Coward | more than 7 years ago | (#16431421)

I actually installed boinc with seti on several of my machines last night and it worked quite well to heat part of the house

Only on /. will you see comments like this.

Re:This is the perfect time... (2, Funny)

loraksus (171574) | more than 7 years ago | (#16431465)

Well, I really, really hate the "heater that hasn't been used for 6 months" smell, so that was sort of my primary focus.
That makes it a little better... right?
Please?

Re:This is the perfect time... (1)

PatrickThomson (712694) | more than 7 years ago | (#16431441)

Yes, in a heater, everything is designed to generate heat. In a computer, the components are inefficient, resulting in the generation of ... oh ... never mind.

Re:This is the perfect time... (1)

Grishnakh (216268) | more than 7 years ago | (#16431469)

Does anyone know if this method is less efficient in generating heat than using a space heater?

It probably depends on the technology your space heater uses to generate heat. If it's old-fashioned resistive coils, it's probably about as efficient. The newer ceramic-element heaters I'm not sure about.

For the most efficient electric heating, you should be using a whole-house heat pump.

mnb Re:This is the perfect time... (0)

Anonymous Coward | more than 7 years ago | (#16431619)

If he is needing the heater this early in the year, it is a safe bet he lives in a climate where a heat-pump alone does not give enough delta-T to work all winter long.

P.S. - all electric heaters have the same efficiency, assuming no energy is "wasted" as visible light. The difference between them basically comes down to radiant vs. convection heat. Which is more useful depends on your circumstances. Radiant heat has the advantage of heating you and not the air.

Re:mnb Re:This is the perfect time... (1)

Grishnakh (216268) | more than 7 years ago | (#16431697)

P.S. - all electric heaters have the same efficiency, assuming no energy is "wasted" as visible light. The difference between them basically comes down to radiant vs. convection heat. Which is more useful depends on your circumstances. Radiant heat has the advantage of heating you and not the air.

I'm not sure if you meant to exclude heat pumps from this statement, but if not, heat pumps can achieve 3-4 times the efficiency of resistance heaters. Here's a handy link [gsu.edu] that explains it in layman's terms.

But you're right, heat pumps may not work very well in his climate if it's too cold.

Re:This is the perfect time... (0)

Anonymous Coward | more than 7 years ago | (#16431479)

Well, it depends.

Assuming perfect conversion from electricity to heat, it takes 277.8 kWh to equal a gigajoule of natural gas.

Now, take an average home in October, using 7 GJ of gas to heat at 60% efficiency.
You'd need 4.2 GJ of electricity (100% efficiency) to equal that. That comes to 1166.8 kWh.

I live in Calgary, and at the current RRO prices (free market crap), gas is ~$3.75/GJ and electricity ~$0.08/kWh.
Total price for gas: $26.65; total for electricity: $93.34.

The break-even point where electricity costs the same as natural gas is about $13/GJ. Last winter it did reach that level; however, with government energy rebates, not really.

Thus we see why we are still using natural gas to heat homes. With a more efficient furnace (they can get up to ~98% efficient), it becomes even better.
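A worked version of that arithmetic in Python, using the prices and furnace efficiency assumed above (illustrative only; your utility's numbers will differ):

KWH_PER_GJ = 277.8           # 1 GJ expressed in kWh

gas_used_gj      = 7.0       # October gas consumption (assumed)
furnace_eff      = 0.60      # older furnace
gas_price_per_gj = 3.75      # $/GJ
elec_price_kwh   = 0.08      # $/kWh

useful_heat_gj  = gas_used_gj * furnace_eff        # 4.2 GJ actually delivered
elec_needed_kwh = useful_heat_gj * KWH_PER_GJ      # ~1167 kWh for the same heat

print(gas_used_gj * gas_price_per_gj)              # ~$26 of gas
print(elec_needed_kwh * elec_price_kwh)            # ~$93 of electricity

# Break-even gas price: electric heat costs KWH_PER_GJ * elec_price per
# useful GJ, and gas delivers furnace_eff useful GJ per GJ purchased.
print(KWH_PER_GJ * elec_price_kwh * furnace_eff)   # ~$13.3/GJ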

Re:This is the perfect time... (1)

nodrogluap (165820) | more than 7 years ago | (#16431911)

So this begs the question: when are the natural gas powered computers coming out? :-)

Re:This is the perfect time... (4, Insightful)

Murphy Murph (833008) | more than 7 years ago | (#16431505)


I actually installed boinc with seti on several of my machines last night and it worked quite well to heat part of the house (us Canadians need to turn the heater on earlier). Took a bit of time to get started, but it was nice and toasty in the morning. Does anyone know if this method is less efficient in generating heat than using a space heater? Slower perhaps..


Using your CPU as a space heater is not a bad idea. It is 100% efficient. Every watt it consumes gets turned into heat. Before someone says "but the cooling fans are wasteful" let me remind you that the air moved by those cooling fans will eventually come to a stop (inside your house) as a result of friction, releasing its energy as heat in the process.

Depending on what type of space heater you use, and the construction of your house, your computer can be more efficient than many other electric space heaters. Since none of the energy "consumed" by your CPU/GPU is converted to visible light, none of it has the opportunity to leave your house through your window panes (assuming you have IR reflective glass). Contrast this to quartz and halogen space heaters which produce a fair amount of visible light.

In much the same way, incandescent bulbs match the efficiency of compact fluorescents during the winter months. Every watt "wasted" as heat during the summer is now performing useful work heating your house. (Before someone says "you called a quartz/halogen space heater inefficient because of its waste light, and now an incandescent efficient because of its waste heat!" let me say that the space heater's light is not useful light, while the bulb's heat is useful heat during the cool months.)

Re:This is the perfect time... (0)

Anonymous Coward | more than 7 years ago | (#16431669)

Every watt it consumes gets turned into heat.

This is true, but what about the energy lost generating the electricity and getting the electricity from the plant to your house? IOW, a CPU is just as good as an electric space heater, but maybe not as good as an efficient natural gas heater. If you've got gas heat, it's probably not a winning proposition to throw your CPUs into a busy loop if you have no reason other than heat.

Re:This is the perfect time... (1)

drinkypoo (153816) | more than 7 years ago | (#16431735)

In terms of efficiency, it is always worse to burn fuel to produce heat to do work to generate electricity to produce heat. In terms of price, when natural gas prices rise high enough, electrical heat is already better. I mean it comes and goes, but when heating oil and natural gas prices peaked here (California) it was cheaper to heat with electricity and just a bit up from here there's a Canadian talking about the same thing.

Re:This is the perfect time... (1)

ameline (771895) | more than 7 years ago | (#16431813)

Yes, everything you say is true, but (at least here in Toronto) electricity is one of the more expensive ways to produce heat. For the purposes of heating, natural gas is about 1/3 the price on a watt-for-watt basis. So while you're right that those incandescent lights are not making "waste" heat in the winter months, their heat is 3x more expensive than that produced by your furnace. You will still save money by using more efficient ways of producing light.

(And before you tell me that some percentage of my furnace heat goes up the chimney/out the exhaust, yes, some does -- 4% of it, to be exact -- I have a new high-efficiency condensing unit.)

Re:This is the perfect time... (3, Insightful)

ameline (771895) | more than 7 years ago | (#16432109)

OK, I went and did the math (assuming, on average, 1035 BTU per cubic foot of natural gas). Looking at my bills:

Natural gas is (CDN)$0.278287 per cubic metre, and electricity is $0.058/kWh.

At 96% efficiency, natural gas works out to $0.027331/kWh (3413 BTU in 1 kWh), or 47% of the cost of electricity at today's prices in Toronto.

So 1/3 was a bit of hyperbole, but not too much.
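The same arithmetic as a small Python sketch, using the figures quoted above (1035 BTU/ft³, 3413 BTU/kWh, and the poster's Toronto prices; all of them assumptions for illustration):

# Cost of a kWh of heat from natural gas vs. electric resistance heat.

BTU_PER_KWH = 3413.0
BTU_PER_FT3 = 1035.0          # energy content of natural gas (figure above)
FT3_PER_M3  = 35.3147

gas_price_m3 = 0.278287       # CDN$ per cubic metre
elec_price   = 0.058          # CDN$ per kWh
furnace_eff  = 0.96           # high-efficiency condensing furnace

kwh_per_m3 = BTU_PER_FT3 * FT3_PER_M3 / BTU_PER_KWH   # ~10.7 kWh of gas energy per m3
gas_cost_per_useful_kwh = gas_price_m3 / (kwh_per_m3 * furnace_eff)

print(gas_cost_per_useful_kwh)                 # ~$0.027 per kWh of delivered heat
print(gas_cost_per_useful_kwh / elec_price)    # ~0.47, i.e. roughly half the electric price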

Re:This is the perfect time... (0)

Anonymous Coward | more than 7 years ago | (#16432145)

at least here in Toronto) electricity is one of the more expensive ways to produce heat.

Er, why is that? Ontario had abundant hydro and nuclear generation, at least when I left a couple of decades ago.

Just asking, since you seem to be clued in to your local energy. 1/3rd the cost is a heck of a gap.

Re:This is the perfect time... (1)

TClevenger (252206) | more than 7 years ago | (#16432217)

In much the same way, incandescent bulbs match the efficiency of compact fluorescents during the winter months. Every watt "wasted" as heat during the summer is now performing useful work heating your house. (Before someone says "you called a quartz/halogen space heater inefficient because of its waste light, and now an incandescent efficient because of its waste heat!' let me say that the space heater's light is not useful light, while the bulb's heat is useful heat (during the cool months.))

It's for that reason that I replace the CF bulbs in my unheated spaces (my computer room and the laundry room) with incandescents in the winter. Not only do incandescents perform better in the winter, they also provide a little heat to the space. Similarly, my otherwise 14W Linux box (Dell Latitude 1.7GHz) will be run up to full 100W steam in the unheated space with Folding@Home to help warm me up while I'm working.

Re:This is the perfect time... (3, Informative)

evilviper (135110) | more than 7 years ago | (#16432271)

It is 100% efficient.

Not true. You aren't taking Power Factor into account at all... Not that I'm surprised, as most people don't understand it.

With switching power supplies, it's common to see a PF in the range of 0.4, as opposed to fully resistive electric space heaters (and incandescent light bulbs) with a perfect 1.0 PF.

Residential customers are lucky in that they don't get charged for PF losses by the power company, while companies certainly do. However, it's still highly inefficient, even if you aren't paying for it directly.

And besides that, electric resistance heating is almost always more expensive than conventional heating like natural gas, or than electric heat pumps.

Re:This is the perfect time... (1)

OldSpiceAP (888386) | more than 7 years ago | (#16431837)

I live in a 2-story house. One of the rooms on the upper story contains my little computer lab. It has 4 desktops equipped with 21-inch CRTs (yes, CRTs, not LCDs... come on, they were free!) and a small server rack with a few Super Servers (Super is the brand). Most of the summer the upstairs room is stiflingly hot and I sweat away the pounds as I model in Blender. In the winter I don't even open my upstairs vents, partially because heat rises of course, but I'm reasonably certain the PCs are really keeping my upper story fairly warm.

Re:This is the perfect time... (2, Insightful)

olddoc (152678) | more than 7 years ago | (#16432079)

If my computer idles at 150W and runs FAH at 100% CPU at 200W, and I need 20 hours to generate one unit, I am spending roughly $.10 for the extra kWh. In the summer I waste money on AC; in the winter I save gas money on heat. If I put my computer in 4-watt S3 standby for 15 of those 20 hours, I can save a lot more. FAH calculations do not depend on "free" "idle" computer power; they depend on users spending money to generate the results.
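Spelling out that estimate in Python, with the wattages and electricity price assumed in the comment (assumptions, not measurements):

idle_w, folding_w, standby_w = 150, 200, 4   # watts
hours_per_unit = 20
price_per_kwh  = 0.10

# Marginal cost of one work unit on a machine that would be idling anyway:
extra_kwh = (folding_w - idle_w) * hours_per_unit / 1000
print(extra_kwh * price_per_kwh)                           # $0.10 per unit

# Compare folding for all 20 hours against sleeping in S3 for 15 of them:
kwh_folding       = folding_w * hours_per_unit / 1000
kwh_mostly_asleep = (folding_w * 5 + standby_w * 15) / 1000
print((kwh_folding - kwh_mostly_asleep) * price_per_kwh)   # ~$0.29 saved by sleeping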

Re:This is the perfect time... (1)

dlZ (798734) | more than 7 years ago | (#16432537)

... to start heating your house with your computers ;)

I actually installed boinc with seti on several of my machines last night and it worked quite well to heat part of the house (us Canadians need to turn the heater on earlier). Took a bit of time to get started, but it was nice and toasty in the morning.

Does anyone know if this method is less efficient in generating heat than using a space heater? Slower perhaps... If you're going to use energy by turning on the wall heater anyways, why not use it to crunch some numbers?


I'm in the Central NY region, and it's the same here. I own a PC shop, and we used to have old CRTs in the back when we first opened 3 years ago. We basically had whatever we could scavenge. During the winter, we almost never had to turn on the heat in our back room. The heat generated by the CRTs and the computers being serviced was more than enough to keep us toasty warm. Three years and some success later we have LCDs, and the effect just isn't the same.

Answers (1)

Junta (36770) | more than 7 years ago | (#16431313)

Q: Are ASICs really that much better than general-purpose circuits?
A: Yes; that's why anyone would bother.
Q: If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?
A: Yes and no. More relevant is whether Cell will pave the way to good price/performance. The problem with the iSeries line is not so much performance as price/performance. For the cost of one iSeries config you can cluster a bunch of xSeries and beat it through sheer brute force of CPUs. If the QS20 and its follow-ups yield better price/performance, it could be interesting.

Re:Answers (1)

jackb_guppy (204733) | more than 7 years ago | (#16431591)

Not completely true.

I run both hardware types. xSeries cannot compete in the sheer throughput of grunt data processing - billions and billions of records. Yes, 1000 PCs can process billions of records, but then your cost passes that of one iSeries.

Now, when you limit the processing to the type of computer that best handles that type of work:

xSeries is better at single/one-off processes, like a web page request, or finding the lat/long of an address, where all the information is a fresh lookup each time. So scaling by adding processors makes scents.

iSeries is better at bulk processing, like adding billions of records, searching billions of records, generating large complex reports. Here the scaling of 64 processors, IO channels and having 2 terabytes of main memory makes it move!

 

Re:Answers (1)

GeorgeS069 (956679) | more than 7 years ago | (#16432143)

adding CPU's make scents???that's freakin awesome...can't wait for the HDTV with "smellavision" LOL

Better explanation (0)

Anonymous Coward | more than 7 years ago | (#16431349)

Fast ASICs are better than below-average CPUs, though.

Remember, the GPUs required for this are pretty new, while any CPU can run the normal client.

Look at how new high-end graphics cards have more RAM than the computer I bought just a couple years ago, for example. It's not surprising that new high-end GPUs are faster than average CPUs. Consider, also, that some fraction of people who would have otherwise run the normal client, and also have high-end systems (as demonstrated by their graphics card), have removed themselves from the normal CPU pool.

The "CPU" half of this statistic, then, is full of people with relatively wimpy computers. Are we surprised?

Actual Performance Figures? (1)

heli0 (659560) | more than 7 years ago | (#16431353)

Are there actual benchmarks yet comparing average time per WU for GPU vs CPU?

For all we know the majority of those Linux and Mac clients are old P2s and G3s.

No BEEP Sheeeerlock (1)

McNihil (612243) | more than 7 years ago | (#16431375)

"...makes heavy use of ASICs..." only from someone raised in the x86 culture would find it flabbergasting that special purpose ICs are 3 magnitudes faster than a general purpose program.

ePenis comparisons aside... (1)

BoberFett (127537) | more than 7 years ago | (#16431405)

What's really exciting is this: if only 10% of the PCs currently running the CPU version switch to the GPU version, the work output will increase by a factor of about 6. What does that mean for the researchers using this data? Will they get the answers they're looking for in a matter of years instead of decades?
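A quick sanity check on that factor, under the assumption that a GPU client produces roughly 65x the work of an average CPU client (the per-client figures quoted elsewhere in this discussion):

gpu_speedup     = 65.0   # assumed GPU-vs-CPU per-client ratio
switch_fraction = 0.10

# Work output relative to the all-CPU baseline of 1.0:
after = (1 - switch_fraction) + switch_fraction * gpu_speedup
print(after)             # ~7.4x -- the same ballpark as "a factor of 6"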

Quick Statistics... (1)

quakeroatz (242632) | more than 7 years ago | (#16431461)

GPU Speed % vs OS Type
Windows 6902%
Mac OS X 12818%
Linux 5370%
GPU 100%
Total 5889%

An average of 5889% faster than other "OS's" or PU's.
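Those percentages can be reproduced from the per-client GFLOPS figures posted earlier in the thread; a short Python sketch (small differences are rounding):

per_client_gflops = {"Windows": 0.948, "Mac OS X": 0.511, "Linux": 1.219, "GPU": 65.463}
gpu = per_client_gflops["GPU"]

for platform, gflops in per_client_gflops.items():
    print(f"{platform}: {gpu / gflops:.0%}")
# Windows ~6905%, Mac OS X ~12811%, Linux 5370%, GPU 100% -- matching the
# table above to within rounding.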

PLEASE edit story posts (0)

Anonymous Coward | more than 7 years ago | (#16431563)

Not so much to make the poster look less like a moron, e.g. "Are ASICs really that much better than general-purpose circuits? If so, does that mean that IBM was right all along with their AS/400, iSeries product which makes heavy use of ASICs?", but to spare the rest of us from having our eyes roll back in our heads.

I used to have excellent vision, but reading these submissions to Slashdot is giving me eye strain from the frequent and violent eye rolls!

GPU != ASIC (1)

TeknoHog (164938) | more than 7 years ago | (#16431611)

If the same processor can be used to generate eye candy and cure cancer, I wouldn't call it application specific.

Re:GPU != ASIC (1)

msloan (945203) | more than 7 years ago | (#16431757)

Yet it certainly is not general purpose. It'd be essentially impossible to write a text editor for it, for example - at least without having the CPU do a lot of the work.

Re:GPU != ASIC (0)

Anonymous Coward | more than 7 years ago | (#16431851)

"Application-specific" refers to the type of functions/calculations the system can perform, not the actual human applications.

GPU Folding@Home (1)

jjacobs2 (969071) | more than 7 years ago | (#16431657)

I was curious about this so I did a bit of reading on their site. It seems like the GPUs are only useful for certain types of calculations. So while the GPUs can get a huge amount done fast, they still need CPUs to handle the stuff GPUs aren't made for. Another factor is that the specific GPU model determines what types of calculations it can handle; that's the reasoning behind only supporting certain kinds of ATI cards - they can handle enough different types of calculations that they're worth using. As far as the PS3 is concerned for F@H, I would be concerned about overheating if you're running it like that for long periods of time. It's already been a problem to some extent for the Xbox 360, and the Cell processor is even more powerful. PS3s are very expensive machines (especially at launch) to be using for F@H if there's a risk of overheating.

Re:GPU Folding@Home (1)

shd666 (451529) | more than 7 years ago | (#16431873)

As far as the PS3 is concerned for f@h I would be concerned about overheating if you're running it like that for long periods of time. It's already been a problem to some extent for the xbox 360 and the cell processor is even more powerful.

I don't know the specifics of PS3 heating, but the approach they took in the PS3 - using many SPUs, which are simple multi-issue SIMD DSPs - is actually a "low power" approach to great performance. Irony.

sounds like they don't even need us (0)

Anonymous Coward | more than 7 years ago | (#16431687)

If a GPU can be so effective, seems like they might be better off building a cluster of ATI powered pc's and running the calculations in-house. I bet that's a lot cheaper than a supercomputer.

But i guess you can't argue with 'free'

Re:sounds like they don't even need us (1)

shd666 (451529) | more than 7 years ago | (#16431835)

If a GPU can be so effective, seems like they might be better off building a cluster of ATI powered pc's and running the calculations in-house. I bet that's a lot cheaper than a supercomputer.

Maybe ATI bought them a real supercomputer for this nice advertisement:-)

Nvidia is less efficient? (1)

jeffs72 (711141) | more than 7 years ago | (#16431749)

The article links to the site to download it, and one of their FAQs asks about supporting Nvidia; they said something to the effect that Nvidia doesn't have as many pipelines or something. My Nvidia card (7950, really two 7900s put together) is a pretty nice card. I'm wondering if it really isn't worth it to optimize for Nvidia, or if there is some bias from the guys who wrote this.

Anyone care to explain?

Re:Nvidia is less efficient? (0)

Anonymous Coward | more than 7 years ago | (#16432501)

nVidia cards don't have as many pipelines (on a single card at any rate; SLI is a different issue), and furthermore I don't believe nVidia supports 64-bit HDR rendering yet, which may be important for this (i.e., 32-bit output might not give sufficient accuracy).

Not really ASICs (1)

shd666 (451529) | more than 7 years ago | (#16431763)

Are ASICs really that much better than general-purpose circuits?

These days high-end graphics cards are multiprocessor DSP systems. That they're also ASICs is too general to be informative here. Those DSPs are programmable like general-purpose processors, but they wouldn't be as efficient on normal programs. However, on certain types of programs they're very fast due to their simplified memory architecture, pipelining, etc. I think it would be more accurate to ask:

"Are multiprocessor DSP systems really that much better than general-purpose multiprocessor systems?"

Usually the speed comes at the cost of programmability. Programs for those DSPs have to be designed with message passing, tight threading and memory efficiency in mind, so it won't be easy to take advantage of the potential. It's interesting to see how far this will go.

Not a mystery (2, Insightful)

DaveJay (133437) | more than 7 years ago | (#16431767)

Take one hundred people with computers, and who have an interest in Folding@Home. Offer them a CPU-driven version of the app, and 100 computers will be running the CPU-driven app, regardless of the age/performance of the machine.

Now, offer them a GPU-driven alternative. For the most part, the only people that will install and run it are those with a fancy-schmancy video card capable of running it, and for the most part, the only people that have a fancy-schmancy video card capable of running it have high-performance computers as well (or at least more recent computers that came with compatible cards.)

So let's say that's ten out of the hundred, and those ten are statistically likely to have had the highest-performing CPUs as well; so you've pulled the top ten performers out of the CPU-client pool and thrown them into the GPU-client pool. Even if you didn't switch those ten people over to the GPU, you could probably isolate those computers' CPU-client performance numbers from the other 90 and find that they're disproportionately faster than the larger number of slower computers.

There's still more to the story, of course, but you really are taking the highest-performing computers out of the CPU pool and into the GPU pool. The exception would be high-performance servers with lousy/no graphic cards, but those are likely working so hard to perform their normal business that Folding@Home isn't a priority.
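A toy model of that selection effect in Python, with made-up speeds just to show the direction of the bias (not real Folding statistics):

import random

# The fastest 10% of machines (the ones with high-end GPUs) leave the CPU
# pool for the GPU client; the remaining pool's average drops.
random.seed(0)
cpu_speeds = sorted(random.uniform(0.2, 3.0) for _ in range(100))  # GFLOPS per machine

stayers, leavers = cpu_speeds[:90], cpu_speeds[90:]
print(sum(cpu_speeds) / 100)   # average before anyone leaves
print(sum(stayers) / 90)       # average of the remaining CPU pool -- lower
print(sum(leavers) / 10)       # average of the machines that switched -- much higher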

Move the vector processor on-board? (4, Interesting)

Zygfryd (856098) | more than 7 years ago | (#16431861)

So when are we going to see (x86/64) motherboards with a socket for a standard processor and a socket for a vector processor?
Couldn't we finally have graphics cards that only handle output to the screen, and separate vector processors with a standardized interface / instruction set?

Re:Move the vector processor on-board? (1)

Gothmolly (148874) | more than 7 years ago | (#16432389)

Vector processors with a standardized instruction set = any modern GPU + OpenGL.

Re:Move the vector processor on-board? (0)

Anonymous Coward | more than 7 years ago | (#16432419)

Torrenza

Argg... AS/400 mumble... (1)

Mike Blakemore (999177) | more than 7 years ago | (#16431935)

I seriously don't see IBM's AS/400 as the wave of the future. For those of you who get to support these green-screen monsters - keep up on your PTFs. I had the unpleasant task of migrating from an AS/400 to an i5 shortly after I started working for a university. We hadn't applied any updates in a few years and it was a nightmare trying to get our different software packages to run on the new system. If it wasn't a licensing issue then it was a hardware compatibility problem. Turns out the new i5s ship with gigabit Ethernet controllers built into the motherboards that don't support older protocols. Fun for all - especially since most OS/400 applications are horribly old. It's solid as a rock overall - as long as you don't hit refresh too many times and lock up 99% of the CPU.

Re:Argg... AS/400 mumble... (1)

Usquebaugh (230216) | more than 7 years ago | (#16432225)

So they hadn't kept up to date with patches. How exactly is that an IBM/iSeries fault?

I've worked with the S36/S38/AS400/iSeries/i5 for decades. The thing has always been rock solid. Not quite in the mainframe realm of uptime, but in 20+ years I have only seen the machine down twice unplanned, both cases due to hardware borking - fscking disks.

You can refresh all you like, no problem.

The downside is the lack of a decent GUI; screen scrapers just don't cut it. IBM should make the X protocol or VNC protocol available as part of a fully integrated toolkit. Make it as easy to code a GUI as a 5250 display and most businesses would stay on it for another 20 years.

The cool technical thing about the beast is not the ASICs or the chips but rather the idea of single-level storage; Frank Soltis, the designer, is one of the many unsung pioneers of the computing industry.

Re:Argg... AS/400 mumble... (1)

Shadyman (939863) | more than 7 years ago | (#16432243)

But the big question is...

Does it run Linux?

Eons ago, in a far, far away galaxy... (0)

Anonymous Coward | more than 7 years ago | (#16431953)

... I was a recent user of that new thing, Linux, and someone made a patch to use the FPU to accelerate memory content transfers, which that someone claimed was an often-used operation. (The original patch page seems to be missing...)

FFWD to now: does anyone know whether we could make use of this GPU... e.g., an Nvidia MX 440 ;-) ... to speed up things like OOo?

If so, how?

Thx.

wow (1)

hurfy (735314) | more than 7 years ago | (#16431979)

That does sound impressive.

Even if, as I imagine, many of the Linux clients aren't exactly top-end CPUs. Usually the top-end GPU is about as complex as the top-end CPU of its time; I know the transistor counts were close when I built my last complete setup. Surely my 1.6 GHz P4 (soon-to-be Linux box) gets trashed for complexity/throughput by a new video card by now :(

That, and like others said, a targeted client and screaming memory for the GPU are gonna rock. It would be closer if the CPU client was aimed at a particular CPU, but no one is gonna make a dozen general clients to cover each generation and brand of CPU to best use each little feature.

but yowsers, that's a lot of umph :)

Remember: 1 GPU has more than one processor. (4, Informative)

nick_davison (217681) | more than 7 years ago | (#16431981)

X1900 - 48 pixel shader processors plus 8 vertex shaders. Assuming you manage to run them all equally in parallel: 56 processors.

Standard CPU - 1 core (assuming dual cores get read as 2 CPUs).

448 GPUs x 56 = 25,088 effective processors all with on card memory.

25,050 CPUs x 1 core = 25,050 effective processors all dealing with system busses etc.

In short, if you're performing one simple task trillions of times, many very simple, highly optimized processors with dedicated memory do the job better than even a similar number of much more capable processors that have to play nice across a whole system.

And this ignores the number of old couple-of-hundred-megahertz systems that people don't use anymore and so hand over to the task, vs. X1900s being the very high end of ATI's most recent line.

For massively parallel tasks like rendering pixels, folding proteins, compressing frames of a movie, etc., I'd absolutely love large quantities of a simple processor. For most other tasks, given present technology, I'd still side with fewer, more able processors. Either way, comparing 448 of something with 56 processors within it to 25,000 single processors and saying, "But 448 is SO much less than 25,000!" is an unfair comparison.
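The comment's arithmetic, spelled out in Python (the counts are the comment's own assumptions):

# Count shader units, not cards.
gpus, shaders_per_gpu = 448, 48 + 8    # 48 pixel + 8 vertex shaders per X1900
cpu_clients           = 25_050

effective_gpu_processors = gpus * shaders_per_gpu
print(effective_gpu_processors)        # 25,088 -- almost exactly the number
print(cpu_clients)                     # of CPU clients the GPUs "beat"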

Opportunity to improve many programs... (1)

gjuk (940514) | more than 7 years ago | (#16432349)

Whenever you write a significant bit of code, ask yourself if you can represent the functionality graphically. If so, recode the function as a graphical problem and use the GPU. If these figures are right, the conversion can carry a 99% 'inefficiency overhead' and still run faster on the GPU than the original code could on the CPU...

A GPU is not an ASIC (0)

Anonymous Coward | more than 7 years ago | (#16432553)

QED

Golly Beav.... (2, Funny)

certain death (947081) | more than 7 years ago | (#16432559)

Imagine if they had developed this application for Nvidia video cards - probably 2x the speed!!1! Go ahead, mod me a troll... I will apologize tomorrow :o)