Folding@Home Releases GPU Client

kdawson posted more than 7 years ago | from the call-or-fold dept.

SB_SamuraiSam writes, "Today the Folding@Home Group at Stanford University released a client (download here) that allows participants to fold on their ATI 19xx series R580-core graphics cards. AnandTech reports, 'With help from ATI, the Folding@Home team has created a version of their client that can utilize ATI's X19xx GPUs with very impressive results. While we do not have the client in our hands quite yet, as it will not be released until Monday, the Folding@Home team is saying that the GPU-accelerated client is 20 to 40 times faster than their clients just using the CPU.'"

177 comments

Power usage? (4, Interesting)

Anonymous Coward | more than 7 years ago | (#16284611)

Anybody got an idea of how much power constant full-speed GPU calculations are likely to burn?

Re:Power usage? (4, Informative)

NerveGas (168686) | more than 7 years ago | (#16284757)

I don't have specifics for that chip, but I would guess 100-150 watts. In both performance-per-cycle and performance-per-watt, it far outstrips using a general-purpose CPU.

20x-40x the performance at 1x-3x the power usage is pretty good.

steve

Re:Power usage? (2, Insightful)

jeffs72 (711141) | more than 7 years ago | (#16284801)

Or heat, for that matter. My GeForce 7900 raises my box temp by 4 degrees C just doing 2D Windows XP desktop work. I can't imagine running a GPU at 100% and a CPU at 100% for hours on end. Better have good cooling. (Granted, mine does suck, but still.)

Re:Power usage? (0)

Anonymous Coward | more than 7 years ago | (#16284883)

Err... thermodynamics say that power usage == heat production, which is why you never see a heat production rate value in reviews.

Not really. (4, Funny)

megaditto (982598) | more than 7 years ago | (#16285099)

Not all the power gets dissipated as heat. Some gets sent down the Internet tubes.

Re:Power usage? (1)

jrobinson5 (974354) | more than 7 years ago | (#16285559)

Um, can't the energy be released as something else besides heat? I realize the vast majority of it is, but other types of energy are released, like light energy from the LEDs, mechanical energy from the fans, and radioactive energy from the monitor. (just kidding on the last one.)

Re:Power usage? (1)

Fordiman (689627) | more than 7 years ago | (#16285855)

In a CPU, no actual physical work is done. All the power is dissipated as heat.

I think.

Hey, anybody know the math for this?

Re:Power usage? (1, Informative)

macroexp (1002893) | more than 7 years ago | (#16286645)

Well, just because you don't see the work doesn't mean it's not being done. I'm not a classically trained scientist, but it seems that producing a desired waveform (data out) from a generated square wave (clock pulse) is definitely work. To be sure, the desired waveform doesn't just happen by itself.

To try a different tack on it, consider the work to be flipping the polarity of electrical domains. It's definitely physical work, but the things moving are tiny, not composed of matter, and don't move in space. However, a spinning flywheel in a closed box doesn't appear to be doing any physical work either, but it is if you measure its rotation. So in a CPU, do you measure "electronic potential flips per second"? I don't know, but I disagree with "no actual physical work is done".

Re:Power usage? (1)

hcob$ (766699) | more than 7 years ago | (#16286657)

Um, can't the energy be released as something else besides heat? I realize the vast majority of it is, but other types of energy are released, like light energy from the LEDs, mechanical energy from the fans, and radioactive energy from the monitor. (just kidding on the last one.)
Well, since it's silicon, it all gets dissipated as heat. If it were GaAs, on the other hand, it would glow a pretty blue-white. :)

Re:Power usage? (5, Funny)

merreborn (853723) | more than 7 years ago | (#16285049)

I can't imagine running a gpu at 100% and cpu at 100% for hours on end.

Clearly, you're not one of the millions with an active WoW subscription.

Re:Power usage? (1)

Gemini_25_RB (997440) | more than 7 years ago | (#16285497)

...WoW using 100% gpu? ROTFLOL!!1!11!one!!eleventy!1 (Completely off-topic, but...) Seriously, WoW is not very intense on the graphics card.

Re:Power usage? (2, Informative)

packeteer (566398) | more than 7 years ago | (#16285701)

It still uses 100%. The more GPU you have, the more FPS you get.

Re:Power usage? (1)

Kheng (1000729) | more than 7 years ago | (#16286209)

I guess you haven't ever done the Vael encounter. That still chugged my X2-3800+, 7800GTX system back in the days when I played WoW.

Re:Power usage? (1)

Surt (22457) | more than 7 years ago | (#16286375)

... but did it chug your x2-3800+, or your 7800gtx?
I'd guess it was the 3800+ that was pegged.

Re:Power usage? (1)

wolf08 (1008623) | more than 7 years ago | (#16286671)

Hmm. Vael uses your GPU because of all of the special effects that are happening. Computation-wise, it's the same as any other 40-man raid, with 40 people + 1 boss. (There aren't even mini creatures!) The problem is that every few seconds, graphical effects are applied to every single player, and spells are happening even faster than normal (for one person, they're instant). Oh, that and the fact that at the start of the encounter, red lightning hits every player =).

Re:Power usage? (1)

SB_SamuraiSam (962776) | more than 7 years ago | (#16284943)

From Vijay Pande: Keep in mind too that, at least right now, FAH draws only 80W from each GPU, so it's a surprisingly energy efficient folding farm too.

Re:Power usage? (5, Informative)

piquadratCH (749309) | more than 7 years ago | (#16285205)

The German newsticker heise.de [heise.de] cites 80 watts for an X1900 card while folding.

drawback (2, Funny)

User 956 (568564) | more than 7 years ago | (#16284635)

the Folding@Home team is saying that the GPU-accelerated client is 20 to 40 times faster than their clients just using the CPU.

Yeah, but what kind of results do you get if you combine the GPU-accelerated client with a KillerNIC video card? It must at least triple the speed. at least.

Re:argh (1)

User 956 (568564) | more than 7 years ago | (#16284679)

* network card. my bad

good, I think... (2, Insightful)

joe 155 (937621) | more than 7 years ago | (#16284653)

I like the idea of F@H, but I do worry about 1) opening up my computer to security risks and 2) damaging my computer because the processor (or now GPU) is getting hammered by always being accessed.

Are either of my worries valid? Can it damage it (or speed up its death), and what's the probability of a security threat?

Re:good, I think... (0)

alexandreracine (859693) | more than 7 years ago | (#16284735)

And since the client is not open, it is a risk.

Re:good, I think... (0)

Schraegstrichpunkt (931443) | more than 7 years ago | (#16286369)

It's a risk anyway. From what I can tell, the client is just a glorified version of apt-get or Windows Update. It downloads programs, checks their signatures, and runs them on your computer. Presumably, the F@H people won't sign anything malicious.

Re:good, I think... (2, Interesting)

rrhal (88665) | more than 7 years ago | (#16284763)

The capacitors in the power section of your motherboard have a finite life. If you are handy with the soldering iron, you can replace these in an afternoon for about $15. I wonder how well the new (to motherboards, at least) solid-core capacitors will do.

Re:good, I think... (1)

merreborn (853723) | more than 7 years ago | (#16284835)

Usually, if your GPU runs too hot, your machine will just bluescreen, or reboot, or something along those lines.

Re:good, I think... (4, Informative)

ThePeices (635180) | more than 7 years ago | (#16284869)

You won't damage your card. The GPU's cooling system is rated to keep the GPU within its thermal design spec at full load; how long you run it doesn't matter as long as there is adequate ventilation. That applies to gaming too, so it's not a problem. As for speeding up its death, your card will become obsolete by the time that happens.

Re:good, I think... (1)

RAMMS+EIN (578166) | more than 7 years ago | (#16285181)

``your card will become obsolete by the time that happens.''

I don't like that kind of reasoning. If my computer is good enough today, it should be good enough 10 years from now. About the only thing I am willing to concede is that computers aren't always "good enough", but I do think they are now.

Re:good, I think... (2, Insightful)

Aladrin (926209) | more than 7 years ago | (#16285351)

Yes, it'll be 'good enough' 10 years from now, as long as you don't plan to do any more then than you do today. Don't buy any more hardware or software and hope to hell you have no problems.

Face it, computers are one of the fastest changing technologies. Intel plans to have some ridiculous hardware in only 5 years. 80-core CPUs? Crazy. If you think your current dual dual-core setup (I'm assuming you have the best PC possible to back up that 10-yr statement) will be able to handle what an 80-core doesn't blink at, you're crazy. It's going to have approx 20 times the power, assuming no other advances in speed.

No, that logic works great for cars and toasters, but computers just change too much.

Re:good, I think... (2, Insightful)

Prosthetic_Lips (971097) | more than 7 years ago | (#16285575)

If my computer is good enough today, it should be good enough 10 years from now.

I hope I just missed your <sarcasm> ... </sarcasm> tags.

Ever hear of Moore's Law?

wikipedia: Moore's Law [wikipedia.org]

Transistor density has been doubling every 24 months (I recall it being quoted as 18 months, but we would be arguing semantics) for as long as I can remember. In 10 years, that's 2**5, or 32 times denser than it is right now. And you think the computer you have now will run anything remotely close to what is running then? You won't even be able to load the operating system.

Think back 10 years ago (1996), what was the "hot computer" filled with? The original Pentium, probably running at a blazing 66MHz?

Next let's quote Bill Gates, "640K should be enough for anyone."

Re:good, I think... (1)

Jeff DeMaagd (2015) | more than 7 years ago | (#16285945)

Next let's quote Bill Gates, "640K should be enough for anyone."

You aren't quoting Bill Gates. You are quoting an urban legend.

Frankly, while I think expecting a computer to have a ten-year useful life is a big stretch, I don't think it is unreasonable to expect a computer to have at least five years of useful life. My dad's computer is a cast-off dual Xeon 500MHz, made in 1998. Granted, it has a 10k RPM SCSI drive (which it was designed for) and 1GB of RAM in a dual-channel setup; the system was spec'd to allow 4GB in 8x512MB chips.

As it is, I really think that the people who are demanding of their computers comprise a minority of purchasers; cast-off systems should serve well as a hand-me-down or a simple Internet machine.

Re:good, I think... (1)

Doppler00 (534739) | more than 7 years ago | (#16286675)

Hey look... it's Mr. Obvious Man!!!!

Seriously, have you seen the dust and grime a computer accumulates after 10 years??? I'd want to replace the thing just because it's really ugly and disgusting by that point.

Re:good, I think... (1)

Sark666 (756464) | more than 7 years ago | (#16285671)

And won't things like Xgl further hasten the demise of cards before they'd otherwise be deemed 'obsolete'?

Re:good, I think... (1)

Ascoo (447329) | more than 7 years ago | (#16285861)

You won't damage your card. The GPU's cooling system is rated to keep the GPU within its thermal design spec at full load; how long you run it doesn't matter as long as there is adequate ventilation.
With the exception that most cooling systems are mechanical (i.e. fans) and have a mean time before failure. So keeping your system at full spec 24/7 may not be harmful to the ICs (assuming proper cooling), but the overall lifespan of the system will be decreased. Granted, most fans have an MTBF of upwards of 2-3 years of non-stop running, but I'm sure that's only in ideal settings (i.e. dust-free). I've had plenty of cheapo case/CPU fans die within 2 years, on machines that run 24/7 (not at full load). Sure, there are companies (e.g. Panasonic's Panaflo) that make high-reliability fans with an MTBF of 70,000 hours, but what are the odds that your average OEM uses them by default? Just a thought...

Re:good, I think... (1)

Gadgetfreak (97865) | more than 7 years ago | (#16286997)

I ran SETI@Home (before switching to Folding@Home) continuously on my laptop, which was mostly a desktop replacement in college and has been a picture frame for the past 4 years. It's a Pentium 200 MHz MMX, and it's slow as mud, but it still runs just fine. It has years of processor time running at 100% capacity, and it still hasn't died. It's a Gateway 2000, pre-name change. But as long as it continues to chug along, I'm not throwing it out. Non-stop processing for years on end doesn't seem to have bothered it at all.

Computers get ruined and Pande gets all the glory (0, Troll)

Anonymous Coward | more than 7 years ago | (#16285599)

1) Pande stole the idea from SETI.
2) Why should our computers get ruined for him to get the Nobel Prize or something?
3) Let him apply for grants from NIH or NSF and use the money to build clusters for his computations like everybody else.
4) Stealing computer cycles from you and me is not nice. We paid hard cash for our computers; we bought them for ourselves, not for some other scientists.
5) Charity is for poor people, not for scientists.
6) I will NEVER EVER run Folding on my computers (maybe SETI, they came up with the idea and deserve credit and some help).

Security Risk? Nope, much safer than games (3, Insightful)

billstewart (78916) | more than 7 years ago | (#16285869)

Folding@Home and similar projects aren't a security risk, as long as they're from trustable sources. They're certainly far safer than the closed-source game software that was the reason you bought a high-end 3-d accelerated video card in the first place. I'd prefer to see projects like that being open-source (at least in the sense of "you can read the source and do anything you want with it", as opposed to the stricter "accepts changes back from the community" part of the model.)


Most of the distributed-computation projects have a very simple communication model - use HTTP to download a chunk of numbers that need crunching, crunch on them for a long time, and use HTTP (PUT or equivalent) to upload the results for that chunk, etc. Works fine through a corporate firewall, and the only significant tracking it's doing is to keep track of the chunks you've worked on for speed/reliability predictions and for the social-network team karma that helps attract participants.


Online games normally have a much more complex communications model - you've got real-time issues, they often want their own holes punched in firewalls, there's user-to-user communication, some of which may involve arbitrary file transfer, and many of the games are effectively a peer-to-peer application server as opposed to the simple client-server model that distributed-computation runs. Fortunately, gamers would never use third-party add-on software to hack their game performance, or share audited-for-malware-safety programs with their buddies, or "share" malware with their rivals, or run DOS or DDOS attacks against other gamers that pissed them off for some reason.....


As far as the effects of running a CPU or GPU at high utilization go, most big problems will show up as temperature, though there may be some subtle effects like RAM-hogging number-crunchers causing your system to page out to disk more often. Not usually a big worry if you're running a temperature monitor to make sure your machine doesn't overheat. Laptop batteries are an entirely separate problem - you really really don't want to be running this sort of application on a laptop on battery power. I used to run the Great Internet Mersenne Prime Search when I was commuting by train, and not only did it suck down battery, the extra discharge/recharge cycles really beat up a couple of rounds of NiMH battery packs. Oh - you're also contributing to Global Warming and to the Heat Death of the Universe. But finding cures for major diseases is certainly a reasonable tradeoff, and we'll do that faster if you're using your GPU as opposed to 10 people using general-purpose CPUs.
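A rough sketch of that download/crunch/upload loop, in C with libcurl. This is only an illustration of the communication model described above; the URLs, file names, and the crunch() step are placeholders, not the real Folding@Home protocol.

    #include <stdio.h>
    #include <curl/curl.h>

    /* Placeholder for the hours of number crunching between download and upload. */
    static void crunch(const char *in, const char *out) {
        FILE *f = fopen(out, "wb");
        if (f) { fputs("results would go here\n", f); fclose(f); }
        (void)in;
    }

    static size_t to_file(void *buf, size_t sz, size_t n, void *fp) {
        return fwrite(buf, sz, n, (FILE *)fp);   /* save the downloaded chunk to disk */
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        for (;;) {
            /* 1. Plain HTTP GET to fetch a work unit. */
            CURL *h = curl_easy_init();
            FILE *wu = fopen("workunit.dat", "wb");
            curl_easy_setopt(h, CURLOPT_URL, "http://work-server.example/assign");
            curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, to_file);
            curl_easy_setopt(h, CURLOPT_WRITEDATA, wu);
            curl_easy_perform(h);
            fclose(wu);
            curl_easy_cleanup(h);

            /* 2. Crunch on it for a long time. */
            crunch("workunit.dat", "result.dat");

            /* 3. HTTP PUT (or equivalent) to upload the results. */
            h = curl_easy_init();
            FILE *res = fopen("result.dat", "rb");
            curl_easy_setopt(h, CURLOPT_URL, "http://work-server.example/results");
            curl_easy_setopt(h, CURLOPT_UPLOAD, 1L);
            curl_easy_setopt(h, CURLOPT_READDATA, res);
            curl_easy_perform(h);
            fclose(res);
            curl_easy_cleanup(h);
        }
    }

Everything rides over ordinary HTTP, which is why this kind of client sails through corporate firewalls where game traffic doesn't.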

Re:getting hammered (1)

madth3 (805935) | more than 7 years ago | (#16286559)

At least your worry number 2 is somewhat valid. I have a Dell machine whose CPU fan goes faster when the CPU works harder (and therefore gets hotter). After a few weeks of running F@H the machine failed. Luckily it was still in warranty, and a new motherboard solved the problem without data loss. So I learned that my machine was not built for constant processor work, even though it had been (and still is) excellent for irregular heavy usage (Java development).

Re:getting hammered (1)

5pp000 (873881) | more than 7 years ago | (#16286959)

So, I learned that my machine was not built for constant processor work


Hogwash. That was a defective machine. I've run F@H on plenty of machines -- including Dells -- and have never seen such a failure. (And if one had failed, I would still consider it defective.)

Impressive (0, Troll)

Rupert_Giles (992036) | more than 7 years ago | (#16284655)

So now I can fold on my ATI 19xx series R580-core graphics cards, huh? The GPUs can be utilized with impressive results, you say? Mm-hmm, can't wait to start using this to... uh... well... ah never mind.

Beware! (0)

Anonymous Coward | more than 7 years ago | (#16284661)

If you are running a cracked version of folding@home you won't be able to play on ranked servers.

Charity (1)

s-twig (775100) | more than 7 years ago | (#16284667)

I wonder now if I could start a legitimate charity and start fundraising for my new graphics card, with the intention to help solve humanity's problems.

Good use of my GPU when idle... (2, Funny)

CaptCanuk (245649) | more than 7 years ago | (#16284669)

Looks like a good use of my ATI card when I'm not gaming or Google Earthing under Linux. Sweeeet!

Re:Good use of my GPU when idle... (3, Informative)

Trogre (513942) | more than 7 years ago | (#16285467)

Two problems:

1) There is no Linux GPU client (yet)
2) Many gamers who use Linux have gone nVidia due to driver support. There is no nVidia client (yet)

Wouldn't this be folding at single precision only? (1)

flowerp (512865) | more than 7 years ago | (#16284685)


I doubt the GPU can do IEEE double-precision floating point.

Is 32-bit precision enough for a scientific application like protein folding?

Is the entire algorithm of folding a big approximation anyway?

Re:Wouldn't this be folding at single precision on (1)

Grey Ninja (739021) | more than 7 years ago | (#16284825)

I highly doubt that they use floating point operations, but I could be wrong. Floating point numbers are inherently inaccurate. If I were the FAH team, I would probably be using fixed point, as it's fairly precise.

I might also think that GPUs can handle doubles as well as floats. But again, could be pure nonsense. I am not familiar enough with the low level operations of a video card.

Re:Wouldn't this be folding at single precision on (1)

qbwiz (87077) | more than 7 years ago | (#16285025)

That might be a bit challenging, considering that I don't think that GPUs work very well with fixed-point (or any non-floating point) operations.

Re:Wouldn't this be folding at single precision on (2, Informative)

jfengel (409917) | more than 7 years ago | (#16285053)

Since we're dealing with measurements (or at least simulated measurements) of the real world, the numbers are always going to be inaccurate. Even in fixed point, errors accumulate. They just accumulate in different ways.

One problem with floating point is that it risks being unrepeatable. If you don't carefully define the terms of rounding, you'll have two different machines arrive at different results on the same calculation. But as long as you pick a standard (e.g. IEEE 754), your results are repeatable. Not any more accurate, but repeatability can be important, too, when you're dealing with potentially chaotic systems.

Now, if the GPU hardware doesn't inherently support your rounding standard you'll have a hard time getting repeatable answers out of it. You can compensate but it's a pain in the nuts, and it undoes a lot of the advantage of having your math engine in hardware.

Precision is purely a matter of the number of bits you throw at the problem. Fixed point is not inherently more precise; in fact, if the numbers you're working with aren't in the middle of the range of your chosen fixed point it'll be wildly imprecise.

They may well want to use integer operations rather than floating point or fixed point. When you can redesign your operations for integer arithmetic, you get repeatable results and the operations are very, very fast. But integers can be very imprecise, for the same reason fixed-point operations are.
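A toy illustration of the repeatability point above (hypothetical numbers, nothing from the F@H code): in single precision, the order in which the same values are added changes the rounded result, so two clients that sum in different orders can legitimately disagree.

    #include <stdio.h>

    int main(void) {
        float big = 1.0e8f;    /* ulp at 1e8 is 8, so adding 1.0f rounds away */
        float small = 1.0f;
        int i;

        /* Add the small values one at a time: each add rounds away to nothing. */
        float a = big;
        for (i = 0; i < 1000; i++)
            a += small;

        /* Group the small values first, then add once. */
        float s = 0.0f;
        for (i = 0; i < 1000; i++)
            s += small;
        float b = big + s;

        printf("one at a time: %.1f\n", a);   /* 100000000.0 -- the 1000 vanished */
        printf("grouped first: %.1f\n", b);   /* 100001000.0 */
        return 0;
    }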

Re:Wouldn't this be folding at single precision on (1, Insightful)

Anonymous Coward | more than 7 years ago | (#16285409)

I would probably be using fixed point, as it's fairly precise.

"Fairly"?

Re:Wouldn't this be folding at single precision on (2, Funny)

raftpeople (844215) | more than 7 years ago | (#16286105)

Precisely.

You're mostly wrong - Numerical Math is Hard :-) (1)

billstewart (78916) | more than 7 years ago | (#16286789)

  • Numerical Analysis is a somewhat complex art, and many people aren't good at it.
  • Floating point numbers are usually much more accurate than fixed-point, depending on the problem. They're certainly much less work to use - if you're dealing with fixed-point calculations where different numbers have different precisions, then you've got to convert them all by hand, and preventing round-off accumulation when doing fixed-point conversion requires significant care.
  • On the other hand, sometimes floating-point isn't as accurate - either kind of number can give you round-off errors, and single-precision floating point only gives you 24 bits of mantissa to work with as opposed to 32. There are calculations like budgets of large companies or California real estate for which this obviously loses precision.
  • GPUs handle different kinds of calculations - the geometry calculations have different needs than the pixel shading, for instance - and there are different numbers of arithmetic units for the different functions. Most GPUs can only do 32-bit for the most parallel units; doubles are reserved for the more specialized processing, so it doesn't get you a big gain if you're using them. The first-level documentation on the X1950 says it's got different kinds of processors, up to 128 bits, but it doesn't say how many of them are which depth, though presumably the most numerous processors don't have the higher precisions.

Re:Wouldn't this be folding at single precision on (1)

Ant P. (974313) | more than 7 years ago | (#16284837)

I think newer cards with HDR and stuff like that can handle a bit more than 32-bit floats.

Single precision is fine (1)

paladinwannabe2 (889776) | more than 7 years ago | (#16284995)

You really don't need that many significant digits for most problems. With floating-point numbers, 0.00000000005 (about the radius of a hydrogen atom in meters) can be expressed as a float or a double, just like 0.5 can. Also consider that all widths and distances are approximate, since the particles are constantly moving in unpredictable ways. Using 64-bit precision would be as ridiculous as saying that the moon is 14295433070.866 inches from Earth.
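A quick sanity check of that claim (just a throwaway demo): a 32-bit float stores a number on the scale of an atomic radius without trouble; what it gives up is significant digits, roughly 7 versus 16 for a double.

    #include <stdio.h>

    int main(void) {
        float  f = 5.29177e-11f;   /* roughly the Bohr radius in metres */
        double d = 5.29177e-11;    /* the same value in double precision */

        printf("float : %.10g\n", f);   /* only ~7 significant digits survive */
        printf("double: %.17g\n", d);
        return 0;
    }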

Re:Single precision is fine (0)

Anonymous Coward | more than 7 years ago | (#16285203)

Unless spock says it is!

Spock...tell...me...how...far...is...that planet?
Jim, I would hazard a guess that said planet is 1325987012095821.213 inches from us.
Bones...any lifesigns?

Re:Single precision is fine (0)

Anonymous Coward | more than 7 years ago | (#16285321)

On average it is 1.51338976 × 10^10 inches to the moon.

Re:Single precision is fine (1)

Nixusg (1008583) | more than 7 years ago | (#16285593)

How about using metric, you insensitive clod! No wonder the Mars lander crashed ;)

Re:Single precision is fine (1)

hockpatooie (312212) | more than 7 years ago | (#16285929)

That's not quite right. Yes, one double-precision float can measure a hydrogen atom's width in meters. But funny things start happening when you start subtracting and dividing limited-precision numbers. You can get numerical instability and errors increasing exponentially. Most any numerical algorithm that uses floating-point has to be written very carefully to avoid these sorts of problems. Check out a textbook on numerical methods.
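A miniature example of those "funny things" (hypothetical values, nothing to do with the folding code): subtracting two nearly equal single-precision numbers wipes out most of the significant digits - the classic catastrophic cancellation.

    #include <stdio.h>

    int main(void) {
        float a = 1.0000001f;     /* only ~7 decimal digits are actually stored */
        float b = 1.0000000f;
        float diff = a - b;       /* true answer: 1.0e-7 */

        printf("a - b = %g\n", diff);  /* prints ~1.19209e-07, about 19% off the true 1e-7 */
        return 0;
    }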

Re:Wouldn't this be folding at single precision on (0)

Anonymous Coward | more than 7 years ago | (#16286911)

Workstation-class OpenGL accelerators offer 128-bit precision across the entire rendering pipeline. Traditionally used in CAD apps and other 3D content creation apps, and now as a highly parallelized solution at render farms. That might do the trick without approximation. But I doubt you could do that with consumer-level gaming cards without the tricks and hacks that people who program these GPUs are so accustomed to using.

This is impressive, but... (1)

NoMoreBits (920983) | more than 7 years ago | (#16284717)

Utilizing the power of modern GPUs is certainly impressive; however, there is a serious limitation at this time.

While an SSE vector unit could process 8 double-precision 64-/80-bit numbers at a time, a GPU can process vectors of hundreds of numbers, but is limited to single 32-bit precision.

Most of the established CPU-demanding scientific applications will need double precision. Only a few problems are well suited to lower precision.

Re:This is impressive, but... (0)

Anonymous Coward | more than 7 years ago | (#16285037)

Just a minor clarification.

Only x87 does 80-bit floating point (and then only for the intermediate result which means that the compiler usually moves the numbers around to make sure that you maintain as much precision as possible). SSE2 does 64-bit floating point. The rest seems about right with the following caveat - it could be possible (and very likely) that they increased the precision and gave up some speed and it still turned out 30 times faster.

Re:This is impressive, but... (1)

baadger (764884) | more than 7 years ago | (#16285039)

As an engineering student being forced as part of my degree to do a boat load of math I would hazard a guess that the crazy fucked up world of mathematics has a way to carry out double precision fp ops by transforming the problem into a vector of hundreds of numbers.

Re:This is impressive, but... (1)

jrockway (229604) | more than 7 years ago | (#16286215)

Your comment suggests that you should consider dropping out of school. Soon.

Re:This is impressive, but... (2, Informative)

dsouth (241949) | more than 7 years ago | (#16285239)

FYI --
  1. SSE vectors are 128 bits -- that's two doubles, not eight. [There may be 8 SSE registers, but that doesn't mean you can do 8 simultaneous SSE operations.]
  2. It's possible to extend precision using single-single "native pair" arithmetic. There's a paper by Dietz et al. on GPGPU.org that discusses this.
This doesn't make GPUs capable of double-precision arithmetic, and doesn't mean they will replace CPUs. But it can be used to expand the number of algorithms where the vast "arithmetic density advantage" of GPUs can be applied. Top-end CPUs can do 20-30 single-precision GFLOPS; GPUs have about 10x more GFLOPS in the fragment shader ALUs. That's a lot of power if you can figure out how to make it work for your problem.
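For the curious, the basic primitive behind that single-single "native pair" trick is Knuth's TwoSum, which recovers the rounding error of a float addition so a pair of floats can carry roughly double the precision. A minimal sketch follows - this is only the building block, not the Dietz et al. GPU code, and it assumes the compiler keeps floats in strict single precision (e.g. SSE math, no x87 extended intermediates).

    #include <stdio.h>

    /* Knuth's TwoSum: s gets the rounded float sum of a and b,
     * e gets the exact rounding error, so (s, e) together represent a + b exactly. */
    static void two_sum(float a, float b, float *s, float *e) {
        *s = a + b;
        float bb = *s - a;
        *e = (a - (*s - bb)) + (b - bb);
    }

    int main(void) {
        float s, e;
        two_sum(1.0e8f, 1.0f, &s, &e);
        /* In plain single precision the 1.0 would simply vanish into rounding;
         * here it survives in the error term. */
        printf("sum = %.1f  error = %.1f\n", s, e);   /* sum = 100000000.0  error = 1.0 */
        return 0;
    }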

Still Holding My Breath (0)

Anonymous Coward | more than 7 years ago | (#16284773)

For that BOINC port that's been in closed-beta for so very long now...

Folding@home versus Grid.org (2, Interesting)

Anonymous Coward | more than 7 years ago | (#16284795)

IMHO, the work of the Oxford University/grid.org cancer project is more important than understanding folding. It seems that Folding@home is not directly working on producing a cure; they are focusing on understanding "how" something happens.

Check out http://www.chem.ox.ac.uk/curecancer.html [ox.ac.uk] and decide for yourself. Personally, I don't see direct value/benefit to the Folding@home project. I understand that knowing about misfolding is important for certain diseases and maybe even cancers, but I see the Oxford University project as having the most immediate and long-term benefit. And it's a shame that project receives no publicity.

Since the "time to a cure" via understanding proteins is very long term, and CPUs keep getting faster, a delay would have negligible impact on the overall length of time taken. However, working directly on cures for common cancers has a more immediate benefit.

Re:Folding@home versus Grid.org (5, Funny)

P3NIS_CLEAVER (860022) | more than 7 years ago | (#16284891)

I agree. We should stop all science not having a direct impact on cancer until cancer is cured.

Huh? I didn't say that! (0)

Anonymous Coward | more than 7 years ago | (#16285243)

Argh... I didn't say that we should stop all non-direct cancer-cure research!!

I am saying there are other projects that we should know about and help out, such as the grid.org one. Why put all our eggs in one basket? Basically, if I were to react like you, I'd be saying you're the one claiming only one project is important, without even doing any real background research to see what other important projects are out there.

Re:Folding@home versus Grid.org (3, Insightful)

Matt Perry (793115) | more than 7 years ago | (#16285235)

It seems that folding@home is not directly working on producing a cure and they are focusing on understanding "how" something happens.
Understanding how something does or doesn't work is the first step to fixing things. Maybe what is learned by Folding@Home can be applied to solve problems in other areas like cancer.

Re:Folding@home versus Grid.org (2, Insightful)

Gumber (17306) | more than 7 years ago | (#16285411)

Dude, as basic research goes, gaining a better understanding of protein folding has a huge number of applications, including, I dare say, finding a cure for cancer.

Re:Folding@home versus Grid.org (1)

Toba82 (871257) | more than 7 years ago | (#16285457)

Too bad it's Windows only.

Re:Folding@home versus Grid.org (0)

Anonymous Coward | more than 7 years ago | (#16286589)

I'll install it on my PC when their client enables me to set the CPU usage at a certain percentage as I see fit.

Two words: closed architecture (4, Insightful)

J.R. Random (801334) | more than 7 years ago | (#16284907)

"With help from ATI, the Folding@Home team has created a version of their client that can utilize ATI's X19xx GPUs with very impressive results."

And therein lies the rub. While GPUs are getting more and more like general-purpose vector floating-point units, they remain closed architectures, unlike CPUs. Only those who can get help from ATI (or Nvidia) need apply to this game.

Re:Two words: closed architecture (4, Informative)

flithm (756019) | more than 7 years ago | (#16285179)

That's not necessarily true. It is a relatively new field of computer science, and thus there's not all that much info out there yet. But once you understand the basic concepts of general-purpose GPU programming, anyone can do it.

What's most likely is that the guys at Stanford started pushing the hardware to the limit, in ways the driver developers might not have anticipated. Probably what they ran up against were bugs in the driver, and the help from ATI came in the form of workarounds for those bugs. Evidence from Folding@Home's GPU FAQ backs this up:

[You must use] Catalyst driver version 6.5 or version 6.10, but not any other versions: 6.6 and 6.7 will work, but at a major performance hit; 6.8 and 6.9 will not work at all.

Your next question might be: if that's true, then why use ATI (who are known for poor driver quality)? It might simply be that that's the hardware they had to test with, so that's what they needed to use.

At any rate, it's definitely possible to get started doing GPU programming without vendor support.

There are even some APIs out there to help... The Brook C API (for doing multiprocessor programming) has a GPU version out called BrookGPU: http://graphics.stanford.edu/projects/brookgpu/index.html [stanford.edu]

There's even a fairly large community of people using Nvidia's own Cg library for doing general purpose stuff.

There's also GPUSort (source code available to look at), which is a high performance sorting example that uses the GPU to do the sorting, and it trounces the fastest CPUs: http://gamma.cs.unc.edu/GPUSORT/results.html [unc.edu]

And last but not least there's the GPGPU site, which is a great resource for all sorts of general-purpose computing on GPUs: http://www.gpgpu.org/ [gpgpu.org]

Missed the point of "Closed" (1, Interesting)

Anonymous Coward | more than 7 years ago | (#16285557)

The point is that you can get documentation to program CPUs at a really low level (instruction sets, register maps, glitches and workarounds, etc.) without much fuss and do whatever you want; just visit the Intel, AMD, etc. sites and get the PDFs. With GPUs, you only matter if you are big and willing to keep the information secret, or you go with the provided code.

Re:Two words: closed architecture (1)

RonnyJ (651856) | more than 7 years ago | (#16285645)

If GPU-assisted code ever gets turned into a 'selling point' for graphics cards, you can be sure it'll be opened up more.

People have wanted to do this for years (1, Interesting)

SnappyCrunch (583594) | more than 7 years ago | (#16284921)

People running SETI@home have asked and asked about versions with various processor optimizations, or versions that use GPUs, which are very much suited to lots of parallel operations. The SETI@home team answer is that they won't release versions that use specific optimizations for specific hardware because they're worried about the integrity of the results - They want people to be running as nearly the same client as possible. Given that it's very easy to double-check a given piece of data if there's any question about it, it always made me angry that the SETI team seemed to prefer laziness to getting far more out of their clients. I'm glad the Folding@Home team isn't making the same mistake.

Re:People have wanted to do this for years (1)

Ken_g6 (775014) | more than 7 years ago | (#16286857)

That was before BOINC [berkeley.edu] . Now the SETI client is open-sourced, and there are optimized versions. If your optimized version returns results like the standard client, you get credit; if its results are different, you don't.

The problem is, as discussed earlier, GPUs do single-precision math, and SETI requires double-precision.

Driver Versions... (0)

Anonymous Coward | more than 7 years ago | (#16284947)

Note that the client is only supported for catalyst 6.5 and 6.10 (not out yet). I'm on 6.8 and getting nonstop "early unit end" errors. I'll hold off on running this until 6.10 is released.

Only X1900s? (1)

Ant P. (974313) | more than 7 years ago | (#16284953)

Damn, and I've got a 9600XT just sitting on a shelf.

Re:Only X1900s? (0)

Anonymous Coward | more than 7 years ago | (#16285209)

What a coincidence, I've got a slot just sitting on a motherboard! How about we combine forces and let me put that card to good use!

Re:Only X1900s? (1)

Ant P. (974313) | more than 7 years ago | (#16285393)

I knew someone'd say that.

It's got a broken CRTC, no red channel in the output plus the image is smeared. If it worked (without causing blinding headaches after 5 minutes) I wouldn't be using this MX400 :)

Mac Support (1)

ZachPruckowski (918562) | more than 7 years ago | (#16284975)

Sadly, Mac support is still lacking. I've got a Mac Pro with x1900xt, and I'd be happy to donate, but it runs in OS X 99% of the time, so I have to run it emulated, and I can't do the graphics card thing. Any idea when a Universal version (and/or a GPU version) for Mac will be out?

Re:Mac Support (1)

hlimethe3rd (879459) | more than 7 years ago | (#16286731)

There's a thread about it at the folding forums: http://forum.folding-community.org/viewtopic.php?t=14182&postdays=0&postorder=asc&start=90 [folding-community.org] It isn't out yet, but they're working on getting the cores native. Intel-native cores are what really matters, even if the client is running in Rosetta, because the client does very little actual work. My guess is that you'll see it in the next 4-6 months. They've probably been especially busy with the GPU stuff and the PS3 client.

GPU code samples? (1)

bigattichouse (527527) | more than 7 years ago | (#16285005)

Anyone know where I can find good starting places for GPU coding? Our Vectorspace engine would really benefit from that kind of power... I'd love to learn more about it.

Re:GPU code samples? (1)

dahl_ag (415660) | more than 7 years ago | (#16285047)

Re:GPU code samples? (0)

Anonymous Coward | more than 7 years ago | (#16286003)

For a starting tutorial that assumes you know nothing about programmable shaders, try this [lighthouse3d.com] . (Warning: Site goes down. A lot.)

Once you get past that, pick up the OpenGL "orange book," and possibly some of the ShaderX books to get a better idea of what you can do with shaders. Then (as another poster recommended) start reading GPGPU.org.

[Note: This all assumes you're interested in OpenGL and GLSL. If you're a DirectX person... well, you're on your own.]

Esoteric high-end GPUs are sexy (1)

Cid Highwind (9258) | more than 7 years ago | (#16285075)

...but apparently finishing the friggin OSX/Intel port they've been working on since January isn't.

It's ok, I didn't want to help cure cancer anyway.

Re:Esoteric high-end GPUs are sexy (1)

SB_SamuraiSam (962776) | more than 7 years ago | (#16285133)

The problem isn't the Pande group or their effort to get an OSX/Intel client; it's that the required compilers are not yet available, and may never become available. It's not as if they can throw it into Xcode and just make a Universal Binary. -Sam

Re:Esoteric high-end GPUs are sexy (1)

dtremenak (893336) | more than 7 years ago | (#16285735)

They've been working on the GPU port since June 05 or earlier. Wait your turn.

say ati? I say drivers! (1)

towsonu2003 (928663) | more than 7 years ago | (#16285191)

ok, pls someone tell ATI to _write drivers_ for their hardware before starting to write innovative software for it. I have a 2-year-old ATI card that does not get 3D support with the fglrx drivers. Funniest thing: I get 3D with the *open source* ati driver!

go figure...

Re:say ati? I say drivers! (1)

tcc3 (958644) | more than 7 years ago | (#16285365)

That would be the problem with ATI. Even when they bother to write drivers, they aren't very good. I've never had a beef with their hardware, but they could learn a lot about software support from Nvidia.

Win/Win (1)

one_red_eye (962010) | more than 7 years ago | (#16285427)

It's a win/win situation. Folding@Home crunches more data and the electric company makes more money.

Re:Win/Win (1)

Sqwubbsy (723014) | more than 7 years ago | (#16286881)

It's a win/win situation. Folding@Home crunches more data and the electric company makes more money.

Ah, but what if all that power generation/transmission is causing the illnesses Folding@Home is seeking to cure...wouldn't that be ironic.

Can I Get A Bootloader For My Voodoo5 GPU?!?! (1)

zenlessyank (748553) | more than 7 years ago | (#16285665)

I assume this is just a prelude for what is to come from AMD/ATI.. I can clearly see the line getting blurred!!!

I've been wondering... (1)

charstar (64963) | more than 7 years ago | (#16286009)

...when somebody would start doing things other than graphics with the GPU!

When I first started poking around with OpenGL, and learning what it takes to make a 3D application, I started wondering when other applications would begin taking advantage of the matrix-crunching power of a GPU.

SLI (1)

quizzicus (891184) | more than 7 years ago | (#16286155)

Anybody know if this would benefit from Crossfire?

Re:SLI (1)

Carnildo (712617) | more than 7 years ago | (#16286259)

It's in the FAQ: Crossfire produces a slight slowdown right now. In the future, it might be possible to use the GPUs independently.

20 * 0 = ? (0)

Anonymous Coward | more than 7 years ago | (#16286281)

Great, now we can run Folding @ home 20x faster. And still get nothing out of it.

Folded Down Data (1)

Doc Ruby (173196) | more than 7 years ago | (#16286659)

Has anyone harnessed these folding algorithms for de/compression? Because 20-40x more power that can be stuffed into several PCI slots for parallel de/compression so cheap is worth waiting through all these exotic @Home projects to get better Net streaming.

Am I the only idiot? (1)

kitman420 (864936) | more than 7 years ago | (#16286811)

But what exactly is folding? And yes, I know I can find out in 30 secs on Google, but 1) I'm lazy, that's why I read Slashdot, and 2) the headline should've given a one-sentence description.

Obligatory (1)

Shadyman (939863) | more than 7 years ago | (#16286837)

My video card, for one, welcomes our new Folding@Home overlords

charitable donations (3, Funny)

naoursla (99850) | more than 7 years ago | (#16286981)

Is there any way I can use this to make my next graphics card purchase tax deductible?