
Harvesting & Reusing Idle Computer Cycles

Hemos posted more than 9 years ago | from the making-good-use-of-time-and-resources dept.


Hustler writes "More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. This article examines the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work."



electricity (5, Informative)

TedCheshireAcad (311748) | more than 9 years ago | (#12980079)

Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

"wasted compute cycles" aren't free. I would assert they're not even "wasted".

Re:electricity (3, Informative)

Anonymous Coward | more than 9 years ago | (#12980098)

The point is that they're not being used, and that they can be used for research. From the point of view of the researchers, who need these cycles, they are wasted.

Re:electricity (4, Insightful)

TERdON (862570) | more than 9 years ago | (#12980110)

Yeah, but it still draws a lot less power letting some existing computers burn spare cycles than it would if you built a shiny new cluster. And you don't have to pay for the hardware either, because you already have it...

Re:electricity (5, Interesting)

ergo98 (9391) | more than 9 years ago | (#12980123)

"wasted compute cycles" aren't free. I would assert they're not even "wasted".

No doubt in the era of idle loops and HLT instructions, unused processor capacity does yield benefits. However, from the perspective of a large organization (such as a large corporation or a large university), it is waste if they have thousands of very powerful CPUs distributed throughout the organization yet have to spend millions on mainframes to perform computational work.

Electricity vs cost of more machines and labor (5, Insightful)

G4from128k (686170) | more than 9 years ago | (#12980134)

Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

This is a very insightful post, but there are two crucial counterarguments:
  1. Does anyone realize the cost of buying extra computers to handle peak computing loads?
  2. Does anyone realize the cost of idle high-tech, high-paid labor while they wait for something to run?
The proper decision would balance these three (and other factors) in defining a portfolio of computing assets that can cost-effectively handle both baseline and peak computing loads. Idle CPUs aren't free, but then neither are idle people or surplus (turned-off) machines.

Re:Electricity vs cost of more machines and labor (5, Funny)

Alwin Henseler (640539) | more than 9 years ago | (#12980507)

"The proper decision would balance these three (and other factors) in defining a portfolio of computing assets that can cost-effectively handle both baseline and peak computing loads."

You're probably right, but oh what a beautiful line of marketing-speak... If you happen to work in management or sales somewhere, write this baby down!

Re:Electricity vs cost of more machines and labor (1)

oliverthered (187439) | more than 9 years ago | (#12980801)

but idle people or surplus (turned-off) machines don't contribute to global warming.

Reused??? (2, Informative)

LemonFire (514342) | more than 9 years ago | (#12980149)


Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?
"wasted compute cycles" aren't free. I would assert they're not even "wasted".


And neither are the computer cycles "reused", as the Slashdot article would have you believe.

How can you reuse something that was never used in the first place?

Re:Reused??? (1)

chris_eineke (634570) | more than 9 years ago | (#12980268)

I have mod points, but whatever... you have to look at it on a grand scale.
Take the processor running in this box I am typing on -- an athlon64 3400+. Now say that we get about 3.4 GHz worth of cycles each second (that's what AMD tells you), running 24 hours a day, 365 days a year, for 2 years:

3.4 * 10^9 * 60^2 * 24 * 365 * 2 ≈ 2.14 * 10^17 cycles.

Your CPU will only live so long. They usually break when the warranty has gone void. ;)
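A quick sanity check of that arithmetic in Python, at both the nominal 3.4 GHz and the chip's actual 2.2 GHz clock (the point the replies below turn on); this assumes the CPU never idles:

    # Back-of-the-envelope lifetime cycle count for an always-on CPU.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def lifetime_cycles(clock_hz, years):
        # Total cycles executed if the CPU never idles.
        return clock_hz * SECONDS_PER_YEAR * years

    print(f"{lifetime_cycles(3.4e9, 2):.3g}")  # ~2.14e+17 at a literal 3.4 GHz
    print(f"{lifetime_cycles(2.2e9, 2):.3g}")  # ~1.39e+17 at the real 2.2 GHz clock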

Re:Reused??? (2, Informative)

codeguy007 (179016) | more than 9 years ago | (#12980394)

Now say that we get about 3.4 GHz worth of cycles each second (that's what AMD tells you)

You should have used your mod points and not made a fool of yourself.

An Athlon64 3400+ does not run at 3.4GHz but at 2.2GHz. Thus your whole calculation of compute cycles is wrong. 3400+ is a PR rating comparing the performance of the Athlon64 to a 3.4GHz Pentium 4.

Re:Reused??? (1)

stecoop (759508) | more than 9 years ago | (#12980419)

You should have read the last part of his sentence: "that's what AMD tells you". The 2.2GHz in an Athlon64 3400+ doesn't mean that my 2.4GHz P4 can calculate more than the Athlon64. Each clock cycle can execute a certain number of instruction "steps", so the grandparent's calculation is good enough for a rough estimate.

Re:Reused??? (1)

codeguy007 (179016) | more than 9 years ago | (#12980523)

No, that's not what AMD tells you. AMD tells you that their 2.2GHz processor performs about the same as a 3.4GHz P4. That makes the cycle calculation completely bogus, as cycles from one processor don't compare with cycles from another.

Re:Reused??? (2, Informative)

ZosX (517789) | more than 9 years ago | (#12980460)

Who's the fool now? The 3400 rating is actually based upon the performance of a 1GHz Thunderbird, IIRC. So a 2000 would be roughly the equivalent of a 2GHz Thunderbird, NOT a 2GHz Pentium 4, even though the processor is actually running at 1.6GHz.

But don't take it from me. From the horse's mouth:

Section 2 The Model number

The model number is fairly straightforward: the numeric code of the Core ID will give you the model number. In the case of the newer Athlon XPs it will be the PR rating of the CPU. For example, the AMD Barton 3200+ would have 3200 as its model number and not its operating MHz. The older CPUs such as the Thunderbird and the Duron, which do not have PR ratings, will have their operating speed in the model number section. A Thunderbird 1.4GHz will have a model number of 1400.

Re:Reused??? (1)

codeguy007 (179016) | more than 9 years ago | (#12980579)

That does not say that they base performance on comparison with a 1GHz Thunderbird anywhere. Thus you have provided nothing.

And I can guarantee you there's a correlation between the Performance Rating of 3400 and how it performs compared to a 3.4GHz P4. Whether AMD admits it or not, there's a reason why an Athlon64 3400+ performs about the same or better than a P4 3.4GHz. AMD wants you to know that an Athlon64 3400+ runs about the same speed as a P4 3.4GHz so that the end user can more easily compare these AMD apples to Intel oranges.

Re:Reused??? (1)

ZosX (517789) | more than 9 years ago | (#12980647)

A Thunderbird 1.4Ghz will have a model number of 1400. See above.

From the goddamned wikipedia:

With the demise of the Cyrix MII (a renamed 6x86MX) from the market in 1999, the PR rating appeared to be dead, but AMD revived it in 2001 with the introduction of its Athlon XP line of processors. The use of the convention with these processors (which are rated against AMD's earlier Athlon Thunderbird cpu core) is less criticized, as the Athlon XP is a capable performer in both integer and FPU operations, and manages to out-perform an Intel Pentium 4 at a PR rating equalling the P4's mhz. The Athlon XP (as well as the Athlon 64) PR rating scheme is not intended to be anything more than a comparison to the same family of processors, and not a direct comparison to Intel or any other company's processor speeds (in raw MHz) which most skeptics say isn't true.

Don't believe me now?

You sir are a troll in the worst way. I've already explained how their PR system works, but I think at this point you are just looking to create an argument. You are totally goddamned right there is a correlation. In an awful lot of cases, the AMD chip outperforms the P4 with a similar PR number. These two numbers have little to do with each other in reality though other than for strict marketing purposes.

Clock for clock, AMD has the P4 beaten hands down. Also, clock for clock, the Pentium III/M will beat the pants off of anything out there.

Re:Reused??? (2, Insightful)

codeguy007 (179016) | more than 9 years ago | (#12980746)

So what makes Wikipedia the be-all and end-all of information? A collection of user-contributed information. I would hardly use it as proof in an argument.

Suffice to say, however AMD calculates its PR rating really doesn't change the fact that it's there to provide a comparison between Athlons and P4s. I can guarantee you that if Intel released a P4 processor that changed that correlation, AMD would change the PR rating on new processors to match it. Of course, now that Intel itself is going to a PR rating of sorts, that all changes.

Re:electricity (5, Insightful)

hotdiggitydawg (881316) | more than 9 years ago | (#12980158)

That's a very valid point, we should not assume that this usage comes at no cost to the environment. However, the cost of building and running a separate CPU dedicated to the same purpose is even higher - twice the hardware infrastructure (motherboards, cases, power supplies, what else? monitors, gfx cards, etc.), twice the number of cycles wasted loading software infrastructure (OS, drivers, frameworks eg. Java/Mono). Add to that the fact that hardware is not easily recycled and the "green" part of me suggests that cycle-sharing is a better idea than separate boxes.

The next question is: who pays for the electricity then? University departments are notorious for squabbling over who picks up the tab for a shared resource, and that's not even considering the wider inclusion of home users...

Re:electricity (4, Insightful)

antispam_ben (591349) | more than 9 years ago | (#12980170)

Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%?

Yes, I do, the same for RAM being accessed and for a hard disk drive when it's seeking. But this is insignificant compared to the overhead of the power supply, fans, hard disk drive spindle motors, other circuitry that runs continuously, and dare I mention all those fancy-dancy computer case lights that are popular now.

The incremental cost of these otherwise-unused cycles is so low that they can be considered free.

So someone prove me wrong, what's the electricity cost of running a CPU at full cycles for a year vs. running at typical load? What's the cost of the lowered processor life due to running at a higher temperature? Chip makers will tell you this is a real cost, but practically, the machine is likely to be replaced with the next generation before the processor has a heat-related problem.

Regardless, the cost is MUCH lower, in both electricity and capital, than buying other machines specifically to do the work assigned to these 'free cycles'.

Wrong (4, Insightful)

imsabbel (611519) | more than 9 years ago | (#12980189)

What you are saying was perfectly correct even 3 years or so ago.

But case in point: my Athlon64 computer doubles its wall-plug power draw (including everything: PSU, mainboard, HD, etc.) at 100% load compared to an idle desktop (OK, Cool'n'Quiet helps push idle power down).

The CPU IS the biggest chunk besides some high-end GPUs (and even those need MUCH less power when idle), and modern CPUs need 3-4 times as much power under full load compared to idle.

P4 also doubles usage under load. (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12980418)

My P4 consumes about 200 watts at the plug while under load, less than 100 while idle. All at a crappy power factor of 0.6.

Re:Wrong (4, Funny)

big tex (15917) | more than 9 years ago | (#12980529)

Using the Lap-Burn-O-Meter (TM) as a gauge of overall power consumption with my Powerbook G4, I can definitely say that higher cpu cycle activities (encoding 1hr AAC files, for instance) increase power usage.

I could probably do something fancier by monitoring power draw with it unplugged, but my balls would be fried before I could tabulate the accurate data.

laptop cores are much better (4, Interesting)

steve_l (109732) | more than 9 years ago | (#12980530)

I saw some posters from the Fraunhofer Institute in Germany on the subject of power, with a graph of SPECint/watt.

0. All modern cores switch off idle units (like the FPU) and have done so for some time.

1. Those Opteron cores have best-in-class performance.

2. Intel Centrino cores, like the i740, have about double the SPECint/watt figure. That means they do their computation twice as efficiently.

In a datacentre, power and air conditioning costs are major operational expenses. If we can move to lower-power cores there, and have adaptive aircon that cranks back the cooling when the system is idle, the power savings would be significant. Of course, putting the datacentre somewhere cooler with cheap non-fossil-fueled electricity (like British Columbia) is also a good choice.

Re:Wrong (2, Interesting)

kesuki (321456) | more than 9 years ago | (#12980705)

What you are saying was perfectly correct even 3 years or so ago.
Hrm no.
no need to repeat myself [slashdot.org]

Running CPUs at full load has made a huge difference in the cost of operation since the early Pentium days. His point is that the cost of the electricity is less than the cost of buying and powering new hardware specifically designed to do the work. Remember, the electrical cost of the systems that are idle doesn't go away; those systems are on anyway. Computer lab access is generally 24 hours a day, so the systems always need to be on, thus they always need to use power.

You are right that running under load can double or even triple electricity consumption (the CPU isn't the only piece of electronics in a desktop that has a 'power saving mode': the motherboard shuts down whatever it can, the PSU lowers fan speeds to reduce power, the PSU itself wastes less power on conversion, etc.), but all that was just as true 5 years ago.

The fact of the matter is your main savings is on the hardware cost. Even if you consider that a true cluster is going to be more efficient than a distributed cluster, the fact that you're increasing electrical draw by buying said cluster without being able to reduce the number of idle systems is enough to offset the slightly greater electrical draw/mips ratio of distributed computing.

A big cluster has way more fans and CPUs, and many, many high-power server-class PSUs, unless you're running it directly from a DC power generating station.

CPU power consumption (5, Informative)

ergo98 (9391) | more than 9 years ago | (#12980222)

http://www.tomshardware.com/cpu/20050509/cual_core_athlon-19.html [tomshardware.com]

60-100W difference between idle and full power consumption. That is not an insignificant amount of power.

Re:CPU power consumption (2, Interesting)

Anonymous Coward | more than 9 years ago | (#12980370)

Great link!

FTA: there is something that we can't really tolerate: the Pentium D system manages to burn over 200 watts as soon as it's turned on, even when it isn't doing anything. It even exceeds 310 W when working and 350+ W with the graphics card employed! AMD proves that this is not necessary at all: a range of 125 to 190 Watts is much more acceptable (235 counting the graphics card). And that is without Cool & Quiet even enabled.

end quote.

Bottom line, if you care about energy conservation at all, buy an AMD and don't sweat letting it run full-bore.

And the Pentium M?? (2, Informative)

Grendel Drago (41496) | more than 9 years ago | (#12980590)

Err, not precisely. Intel's Pentium M [tomshardware.com] can create a system that draws 132 watts [tomshardware.com] at maximum CPU load, and runs nearly as fast.

I've been buying AMD for about five years, but I think my next system will be a Pentium M. Just as soon as they're a bit cheaper...

--grendel drago

Re:And the Pentium M?? (0)

Anonymous Coward | more than 9 years ago | (#12980636)

Err, not precisely. Intel's Pentium M can create a system that draws 132 watts at maximum CPU load, and runs nearly as fast.

Incorrect. It does decently in gaming benchmarks, but for everything else it falls behind. I believe Anandtech had an informative article on this some time ago.

Pentium M Benchmarks. (1)

Grendel Drago (41496) | more than 9 years ago | (#12980723)

Really [tomshardware.com] ? Looked like it did fine in media encoding as well; it's just the synthetic benchmarks (PCMark04) that it falls behind in.

--grendel drago

Re:electricity (3, Interesting)

Profane MuthaFucka (574406) | more than 9 years ago | (#12980236)

First, figure (watts fully loaded) - (watts at idle) and call it something like margin watts. Then figure out how much a kilowatt-hour of electricity costs in your area. Say 7 cents.

Since a watt is a watt, and for rough purposes you can either ignore power supply inefficiency or treat it as a constant, you can get an idea of what it costs.

Chip: 2.2GHz Athlon 64
Idle: 117 watts
Max: 143 watts
Difference: 26 watts
Kilowatt-hour / 26 watts ≈ 38 hours.

It takes about 38 hours for a loaded chip to use a kilowatt-hour more electricity than an idle chip. Over a year, this will cost you about $15.94 in electricity. Since your power supply isn't 100 percent efficient, it'll be more. Say 20 bucks a year.
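The same estimate as a few lines of Python, using the figures above (the 7 cents/kWh is the post's own assumption):

    # Marginal electricity cost of running a CPU flat out vs. idle.
    idle_w, max_w = 117.0, 143.0   # whole-system draw from the post above
    price_per_kwh = 0.07           # assumed 7 cents per kilowatt-hour

    margin_kw = (max_w - idle_w) / 1000.0   # 0.026 kW of "margin watts"
    hours_per_kwh = 1.0 / margin_kw         # ~38 h to burn one extra kWh
    yearly_cost = margin_kw * 24 * 365 * price_per_kwh

    print(f"{hours_per_kwh:.0f} hours per extra kWh, ${yearly_cost:.2f}/year")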

Re:electricity (2, Informative)

Jeff DeMaagd (2015) | more than 9 years ago | (#12980314)

My questions are in relation to the public distributed computing projects.

Who pays for that extra electricity? What if the program was poorly written and destabilizes the computer?

Few if any of the distributed computing projects factor this in. It's a nice way of cost-shifting, I think.

I think it is a good way for an organization to make better use of its own computers, though. As a home user, I really don't want any part of it.

Re:electricity (1)

kesuki (321456) | more than 9 years ago | (#12980424)

So someone prove me wrong, what's the electricity cost of running a CPU at full cycles for a year vs. running at typical load?

I can't tell you a whole year, but I can tell you for a month. Alright let's go back to DivX ;-) days when it was a hacked M$ codec... I was paying $20 a month ($10 over the 'minimum') for electricity. The first month I started doing DivX ;-) encoding, from various sources... my monthly bill shot up to $45. So, $25 a month more than at idle, per computer. (this assumes you run at full load)

Keep in mind that's still pretty cheap; if you've got decent, fairly new computers you're getting a pretty good MIPS/$ ratio. If however you've got a lab full of Pentium 2s and you're running all this specialized software on it, well frankly the cost ratio slides downhill fast. Especially when you consider you could replace 'just what needs to be replaced' to get those systems up to entry-level Sempron 64s for less than a year's worth of electricity.

The cost to benefit ratio of doing stuff like this entirely depends on the class of desktop computers you're running it on. So your point is only valid if the technology powering the desktops is less than 3-4 years old, otherwise it makes more sense to either A. upgrade the desktops and continue with idle cycle usage plan or B. buy a real cluster.

Green fancy-dancy! (2, Funny)

TheStonepedo (885845) | more than 9 years ago | (#12980453)

The voltage from my idle memory cycles goes through a series of capacitors and ICs to make my fancy-dancy lights blink so I won't have to buy new computers and waste power - and all of this is within a 133 MHz underclocked pentium box with 32 MB ram running linux.

I'm saving the world!

Re:electricity (4, Insightful)

hazem (472289) | more than 9 years ago | (#12980513)

What all of you working from the electricity cost issue are missing is that at most universities, money for capital is different from money for operations. Capital money is hard to get. An increase in your operations cost just kind of gets ignored if it's not too big.

This has political ramifications.

The goal: get a great, powerful, cluster of compute power.

You can't go to the administration and say, "We need to spend $150k on a compute cluster". The answer will be "we don't have one now, and everything's just fine. No."

So, you, being resourceful, implement this campus-wide cluster system that taps spare resources. Power bills go up a bit - nobody cares.

Now, a couple years later, lots of projects are using the cluster. But the thing isn't working well because the power's not there during normal peak usage.

At this point you go to the administration: "We're losing tuition-paying students, and several grants are at risk because our compute cluster is not powerful enough. We need to spend $250k on a new compute cluster."

And THAT is how you manipulate your operations budget to augment your capital budget.

Re:electricity (2, Funny)

seanadams.com (463190) | more than 9 years ago | (#12980805)

You forgot the bit where you sell the cluster, and then lease it back from the company you sold it to - that way it comes out of the monthly current budget, and not the capital account!

Re:electricity (1)

johansalk (818687) | more than 9 years ago | (#12980672)

At 100% my fan draws 2 watts, at 100% my HD draws 12 watts, at 100% my cpu draws 89 watts.

CPU cycles are *A* if not *THE* major power burner.

Cleanup and maintenance costs? (1)

HermanAB (661181) | more than 9 years ago | (#12980213)

Before you can use the idle cycles, you first have to remove all the spambots, spybots, adware and screen savers that are already running on these machines. Also, about ten seconds after the regular user comes back from lunch, the shiny new grid computing app will be broken and all the crap apps will be back, so the maintenance cost of this system will be huge.

Re:electricity (1)

AaronGTurner (731883) | more than 9 years ago | (#12980272)

Those of us developing Campus Grids do take this into account in costing models!

Re:electricity (0)

Anonymous Coward | more than 9 years ago | (#12980308)

Does anyone realize that a CPU runs at 100% all the time? It just depends on what it does.

Re:electricity (1)

Rosco P. Coltrane (209368) | more than 9 years ago | (#12980599)

Does anyone realize that a CPU runs at 100% all the time? It just depends on what it does.

No it doesn't. When it has nothing to do, it idles. Most, if not all, modern OSes explicitly tell the CPU when nothing is being scheduled, and the CPU puts itself in low-power idle mode as a result. Look inside the Linux scheduler, in the idle thread code, if you don't believe me.

Most programs on an underused computer are waiting either for interrupts (which happen all the time, but for much less compounded time than idling) or for other programs to wake them. Therefore the idle thread is called often, and the CPU goes idle most of the time.

What you say was true in the DOS days, when the system essentially polled in a loop when nothing was happening.

Re:electricity (2, Funny)

Smiffa2001 (823436) | more than 9 years ago | (#12980338)

What would be amusing is if global warming research were being done with the 'spare' cycles:

"Sir, we've completed the study and all the results are in. It's pretty shocking..."
"Go on..."
"Well, since we started, it's gotten much worse compared to before. The rate of change increased. We think it's the increased power use..."
"D'Oh!!!"


NOTE: Scientific accuracy might be impaired during the length of this feature. Thank you for reading.

Re:electricity (1)

Doc Ruby (173196) | more than 9 years ago | (#12980535)

What about the power consumption to produce the hardware? That power is invested in a maximum count of cycles, amortized across the computer's lifetime. If the computer is at 10% CPU, the manufacturing/delivery power investment only pays off 1/10th what it would at 100%. So the question is, of course, the value of the extra 90%. Of course, if the value is greater than the costs of the electricity (a good bet), but less than the costs of manufacturing 10x more CPUs (running at 10%), then this approach is the best alternative. I'd say it is, but of course the entire value proposition depends on the value of the use of the extra, harvested cycles. Most investments require ongoing maintenance costs to return on the initial investment.
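A toy version of that amortization argument; every number here is hypothetical, the embodied-energy figure especially:

    # Manufacturing energy amortized over delivered cycles, at two utilizations.
    embodied_kwh = 1000.0              # hypothetical energy to build and ship the box
    clock_hz = 2.2e9
    lifetime_s = 3 * 365 * 24 * 3600   # assume a 3-year service life

    def kwh_per_1e18_cycles(utilization):
        delivered = clock_hz * lifetime_s * utilization
        return embodied_kwh * 1e18 / delivered

    print(f"{kwh_per_1e18_cycles(0.10):.0f} kWh")  # mostly-idle desktop
    print(f"{kwh_per_1e18_cycles(1.00):.0f} kWh")  # fully harvested: 10x cheaper per cycle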

You're Missing the Point (4, Interesting)

kf6auf (719514) | more than 9 years ago | (#12980675)

Your choices are:

  1. Use distributed computing to use all of the computer cycles that you already have.
  2. Buy new rackmount computers, which cost additional money up front for the hardware and then have their own electricity and cooling costs.
  3. Spend absolutely no money and get no more computing power.

Note that the solution in this article is obviously not free due to electricity and other support costs, but it is undoubtedly cheaper than buying your own cluster and then paying for electricity and the support costs.

Re:electricity (2, Insightful)

Jeet81 (613099) | more than 9 years ago | (#12980837)

I very much agree with you. With summer electricity bills soaring I sometimes think of shutting down my PC at night just to save a few dollars. With higher CPU usage comes more electricity and more heat.


Play fair on the resources (3, Insightful)

Mattygfunk1 (596840) | more than 9 years ago | (#12980085)

I think it's great as long as they're careful not to impede the user's work. Done badly, these applications get annoying if they are too pushy about beginning their processing before a reasonable user timeout.

Google's desktop search is one example where the timing and recovery back to the user is really done well.

Re:Play fair on the resources (1)

Mattygfunk1 (596840) | more than 9 years ago | (#12980119)

I should add that I didn't mean to imply that Google's desktop search is doing a similar style of mass-computing job as this grid will be used for, but it does do a similar thing, using processing cycles that would not otherwise be used for its local indexing.


Simple: put the user in control (1)

antispam_ben (591349) | more than 9 years ago | (#12980227)

I think it's great as long as they're careful not to impede the user's work. Done badly, these applications get annoying if they are too pushy about beginning their processing before a reasonable user timeout.

Even back in the Windows NT4 days I would put a long-running task to Idle priority and the machine would be as responsive as when the task wasn't running (though I don't recall running a disk-intensive task that way). I've noticed the badly written apps tend to be viruses and P2P software, crap you don't want to be running anyway.
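On a Unix box, the same trick is a single call before the work starts; a minimal sketch, where the summation is a stand-in for the real long-running job:

    import os

    # Drop to the lowest scheduling priority so interactive users always win.
    # (On Windows, the equivalent is the Idle/Low priority class.)
    os.nice(19)

    # Stand-in for the real long-running computation: burn cycles politely.
    total = sum(i * i for i in range(50_000_000))
    print(total)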

Re:Simple: put the user in control (0)

Anonymous Coward | more than 9 years ago | (#12980409)

In Windows 2000 and XP there is no such thing as Idle priority; it's Low.

Re:Play fair on the resources (1)

lorque (882311) | more than 9 years ago | (#12980663)

If these applications are anything like Folding@Home, IBM's World Community Grid, etc., there shouldn't be any problem. I haven't noticed any slowdowns since installing the latter, running Win XP. If the application is placed on 'low' priority, you should be able to do whatever you normally do without the application disturbing you.

GridMP is a commercial distributed computing impl. (4, Interesting)

ReformedExCon (897248) | more than 9 years ago | (#12980093)

There are several non-commercial distributed computing systems, so the GridMP system isn't anything particularly new or groundbreaking. However, in companies that run very resource intensive applications and simulations, such a distributed system that uses unused CPU cycles has some serious applications.

However, the most critical aspect of this type of system is not just that the application in question be multithreaded, but that it be multithreaded against the GridMP APIs. That would require either a significant rewrite of existing code or a rewrite from scratch. This is not a minor undertaking, by any means.

If application performance matters and every cycle counts, then that investment is definitely worth it.

Re:GridMP is a commercial distributed computing im (1)

codeguy007 (179016) | more than 9 years ago | (#12980300)

It would be interesting to compare costs between a campus-wide grid like this and a dedicated Beowulf cluster. I believe you will find that the Beowulf cluster will still be the more efficient solution. Of course each situation would be different, making it hard to get a true comparison.

Of course the grid will cost less money up front, but I think you will find the cluster's performance per watt consumed will be higher (especially if it's a water-cooled cluster). The administration costs of the grid will definitely be higher, as a campus-wide grid is a much more complex animal than a straight Beowulf cluster.

Now onto the performance issues.

With such a grid you limit yourself in several performance related ways:
  • Parallel Code. To achieve good performance on a cluster with such slow communications (network latency), you need problems and code that lend themselves to lots of parallelization. If your code has too many serial routines in it, your performance will suck (see the Amdahl's law sketch after this list).

  • Limited Network Bandwidth. Not only will a grid give you poor network latency which limits MPI performance but it also will provide very poor network bandwidth. This is a problem if you have large data sets to be processed.

  • Heterogeneous Hardware - This is a major issue. Because a campus wide grid is going to be made up of Heterogeneous Hardware, you are limited in how much you can tweak your code for a specific processor. Sure you can create multiple sets of code for different processors and architectures but that takes time and you will never be able to optimize your code for all the different options available.

    Most Beowulf clusters are built with homogeneous hardware, which allows for code optimization.

    Also, having different-speed machines means that results are going to be obtained at different intervals, so you would have to limit or remove the interdependence between the nodes.
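On the first point, the ceiling is set by Amdahl's law; a few lines make it concrete:

    # Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / n)
    def speedup(serial_fraction, nodes):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

    for s in (0.01, 0.10, 0.25):
        print(f"serial={s:.0%}: {speedup(s, 100):.1f}x on 100 nodes")
    # Even 10% serial code caps a 100-node grid at about 9.2x.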

Re:GridMP is a commercial distributed computing im (0)

Anonymous Coward | more than 9 years ago | (#12980378)

It is true that the Beowulf cluster would be faster/more efficient for dedicated tasks. However, using free cycles on idle computers is not quite the same thing.

The Beowulf cluster is designed to process the task using as many machines as necessary. The Grid computing system must be able to handle the computation being interrupted at any time and only process on machines that are available.

The two domains are related, but are completely opposite ways of approaching the problem. The Beowulf cluster is designed to tackle the problem head on by throwing sufficient power at the task. The grid system is designed to squeeze out whatever spare cycles are available from existing resources.

Heterogeneous Hardware & mathematical accuracy (3, Interesting)

mosel-saar-ruwer (732341) | more than 9 years ago | (#12980405)


Heterogeneous Hardware - This is a major issue.

The kinds of things that interest high-end computing geeks tend to be extremely sensitive to round-off error.

If you're trying to get accurate results by spreading calculations around among disparate machines that might deploy e.g. IEEE 64-bit doubles, IEEE 96-bit doubles [Intel & AMD], IEEE 128-bit doubles [Sparc], or various hardware cheats [MMX, SSE, 3dNow, Altivec], then trying to make any sense of the results will drive you absolutely bonkers.

PS: A good place to start in understanding the uselessness of e.g. 64-bit doubles is Professor Kahan's site at UC-Berkeley [berkeley.edu] ; you might want to glance at the following PDF files:
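A few lines of Python show the flavor of the problem: floating-point addition is not associative, so the order (and precision) in which partial results are combined across nodes changes the final bits of a distributed reduction:

    # Floating-point addition is not associative, so which node sums the
    # partial results (and in what order) can change a distributed total.
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                  # 0.6000000000000001
    print(a + (b + c))                  # 0.6
    print((a + b) + c == a + (b + c))   # False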

Another interesting article on rounding error. (1)

mosel-saar-ruwer (732341) | more than 9 years ago | (#12980597)


In addition to Professor Kahan's site, listed above, you might want to read this article over at Sun [which references SPARC's 128-bit IEEE double, known as the "SPARC-quad"]:
Unfortunately, I don't think it lists an elapsed time for the 128-bit calculation [only for the 64-bit calculation].

Re:GridMP is a commercial distributed computing im (2, Insightful)

Smart Teapans (897293) | more than 9 years ago | (#12980451)

Good point. Having to rewrite the application to make use of a parallel API like MPI can be a pain. Condor [wisc.edu] is a free, full-featured batch system that allows you to run apps on remote machines without having to recompile them.

Sure about that? (4, Insightful)

brwski (622056) | more than 9 years ago | (#12980097)

REusing idle cycles? Really?

Re:Sure about that? (1, Funny)

Anonymous Coward | more than 9 years ago | (#12980592)

Sure, just imagine the karmic synergy, not to mention product entropy, attainable from both using waste cycles and recycling them!

Spambots (2, Funny)

HermanAB (661181) | more than 9 years ago | (#12980099)

are harvesting spare cycles all the time. I don't think there are many cycles left over anymore!

If I could only use this to improve rendering time (1)

waif69 (322360) | more than 9 years ago | (#12980100)

in Final Cut Express or Shake or DVD Studio by using the PCs that I have on the network to do the heavy lifting required.

Re:If I could only use this to improve rendering t (2, Informative)

SlamMan (221834) | more than 9 years ago | (#12980381)

If you have extra Macs, you can with DVD studio and Shake. Look up qmaster.

So Is This The New Enron.... (1, Funny)

Anonymous Coward | more than 9 years ago | (#12980106)

storing our unused CPU cycles in a data warehouse somewhere so that, when a user pays enough, they can buy the needed cycles?

Like water, like electricity, like natural gas and shares of Enron stock?

Lousy title. (1)

Ziviyr (95582) | more than 9 years ago | (#12980113)

Okay, I'm gathering up lots of unused CPU cycles. But if I were going to reuse a cycle, I'd probably want to reuse a cycle that did something...

Who wants to pay me for the past three years of my computer idle time?

Hey, pay for my idle time too, arcades aren't getting cheaper.

"Compute" should only be used as a verb. (3, Interesting)

Anonymous Coward | more than 9 years ago | (#12980129)

"Compute" as an adjective is just weird. Keep your creepy clustering terms to yourself kthx

any similar free solutions that can be implemented? (1)

BipinG (860191) | more than 9 years ago | (#12980146)

---> Does anyone realize that running a CPU at 100% takes more electricity than running a CPU at 10%? ---
Is it? I'm WAITING FOR OTHERS TO ROAR!

btw: do we have some OSS software of a similar kind... that can be used for a similar network (of friends... & within companies & their branches)?

I haven't seen any SIMILAR free & flexible solution that can be implemented by anyone willing to...

Imagine (0, Redundant)

Progman3K (515744) | more than 9 years ago | (#12980147)

Imagine a cluster of those -
never mind...

GridEngine (3, Interesting)

Anonymous Coward | more than 9 years ago | (#12980156)

http://gridengine.sunsource.net/ [sunsource.net]

Free and opensource, runs on almost all operating systems.

sunsource.net (2, Informative)

Jose-S (890442) | more than 9 years ago | (#12980309)

This seems to be a new site, right? Found this in their FAQ:

Q: Will Sun make Java Technology Open Source? A: Sun's goal is to make Java as open as possible and available to the largest developer community possible. We continue to move in that direction through the Java Community Process (JCP). Sun has published the Java source code, and developers can examine and modify the code. For six years we have successfully been striking a balance between sharing the technology, ensuring compatibility, and considering the needs of a growing installed base of more than 2.5 million Java developers who depend on us. We are certainly evolving Java through the JCP to a model that works for all involved but that also ensures compatibility. Cross-platform compatibility has always been the key to Java's success and integrity; a notion we feel was protected by Microsoft's agreement in January 2001 to settle the lawsuit regarding Java technology.

I take it that's a 'no.'

Re:sunsource.net (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12980356)

What do you mean by "no"??

If you spend a minute or two more finding out before jumping to conclusions, you will find:

http://gridengine.sunsource.net/servlets/ProjectSource [sunsource.net]

Spyware, Adware & Malware (4, Funny)

Krankheit (830769) | more than 9 years ago | (#12980183)

I thought that was what spyware was for? When you are not using your computer, and while you are using your computer too, let your computer send out e-mail and perform security audits on other Microsoft Windows computers! In exchange, you will get free, unlimited access to special money saving offers for products from many reputable companies, such as Pfizer.

This platform (1)

Fantasy Football (886971) | more than 9 years ago | (#12980185)

The platform supports three types of user jobs:

* Batch jobs -- Users can use the mpsub command to run the batch jobs in which a single executable is forwarded by the Grid MP system to run on a single remote desktop.
* MPI jobs -- Users can submit MPI jobs using the ud_mpirun command. The system selects a set of desktop machines and coordinates the initiation of the MPI application across this set of machines. Currently, MPICH and LAM/MPI are supported.
* Data-parallel jobs -- The platform supports coarse-grained parallelism for large jobs that can be decomposed into several independent pieces. Developers can create application scripts to work in conjunction with application executables to implement a data-parallel solution. These applications can then be hosted on the Grid MP and provided as application services available to users.
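A toy illustration of that third, data-parallel pattern; here a local process pool stands in for the grid's pool of idle desktops, and all names are hypothetical rather than Grid MP's actual script-based API:

    # Decompose a job into independent chunks, farm them out, merge the results.
    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk):
        # Stand-in for the application executable's work on one piece.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with ProcessPoolExecutor() as pool:   # stand-in for remote desktops
            partials = pool.map(process_chunk, chunks)
        print(sum(partials))                  # merge step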

1st Grid Design: GNU Jet Fighter (3, Interesting)

reporter (666905) | more than 9 years ago | (#12980197)

Let's do something really interesting with this grid technology. Instead of participating in SETI, let's use this grid to design the first GNU jet fighter (GJF). Our target performance would be the Phantom F-4J, modified with a Gatling cannon. We could design and test the GJF entirely in cyberspace. The design would be freely available to any foreign country.

Could we really do this stunt? I see no reason why we could not. Dassault has done it.

Dassault, a French company, designed and tested its new Falcon 7X entirely in virtual reality [economist.com]. The company did not create a physical prototype. Rather, the first build is destined for sale to the customer.

Re:1st Grid Design: GNU Jet Fighter (1)

codeguy007 (179016) | more than 9 years ago | (#12980427)

Remind me to boycott that first customer's airplanes. I really don't want to be used to alpha-test a new aircraft.

Re:1st Grid Design: GNU Jet Fighter (3, Insightful)

DigiShaman (671371) | more than 9 years ago | (#12980469)

Ya, let's let countries such as China and N. Korea have access to free engineering. After all, we want oppressive regimes to have as much power over their own citizens as possible. I mean, when was the last time YOU could fly your own jet? Such gaps between non-democratic governments and their citizens make much-needed revolutions that much harder to achieve.

Re:1st Grid Design: GNU Jet Fighter (2, Informative)

nbritton (823086) | more than 9 years ago | (#12980541)


How about we do something that's a little more practical and useful, such as finding new drugs that will cure cancer. [grid.org]

Don't invent your own mouse trap (4, Insightful)

mi (197448) | more than 9 years ago | (#12980244)

It is almost a 'meme': when people start on projects like this, they tend to think off-the-shelf software (free and otherwise) is not for them and they need to write their own...

PVM [ornl.gov] offers both the spec and the implementation; MPI [anl.gov] offers a newer spec with several solid implementations. But no, NIH syndrome [wikipedia.org] prevails and another piece of half-baked software is born.

Where I work, the monstrosity uses Java RMI to pass the input data and computation results around -- encapsulated in XML, no less...

It is very hard to fight. I did a comparison implementing the same task in PVM and in our own software. Depending on the weight of the individual computation being distributed, PVM was from 10 to 300% faster and used 5 times less bandwidth. Upper management saw the white paper...

Guess what we continue to develop and push to our clients?

Re:Don't invent your own mouse trap (1)

dsci (658278) | more than 9 years ago | (#12980363)

It is very hard to fight

Yeah, 'grid' or 'distributed' computing has become a buzzword. Many folks that see this as a panacea seemingly fail to realize:

(1) many problems that can benefit from parallel crunching are not suitable to so-called grid computing; they fail to account for the granularity of the problem and communication latency.

(2) parallel implementation of a problem is not unique; how you implement the parallel mapping to one architecture is not necessarily the best mapping on another. In other words, good, high performance parallel implementation is no 'black box' solution.

(3) many problems have unique requirements for parallel implementation; providing libraries for basic calls to the network layer may be useful, but supposedly 'globally useful' parallel routines are probably not actually globally useful.

Just some thoughts I have every time I see an article about 'grid computing.'

Re:Don't invent your own mouse trap (2, Interesting)

Coryoth (254751) | more than 9 years ago | (#12980845)

MPI is great. I used to work at a shop that had a lot of Sun workstations. After doing some reading I managed to recode some of our more processor-intensive software to run distributed across the workstation pool (automatically reniced to lowest priority) using MPI. As long as you managed to get a large enough workstation pool (which wasn't that hard, given how many people had one sitting on their desk), the distributed version was every bit as fast as the standard version running on high-performance servers.

In effect, using MPI and a bit of recoding effort, I managed to double the number of available servers.

Jedidiah.
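For the curious, the scatter/compute/gather pattern described above is only a few lines with MPI; a minimal sketch using the mpi4py bindings, with the squaring as a stand-in for the real computation:

    # Run with: mpirun -np 4 python workpool.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        data = list(range(1000))
        chunks = [data[i::size] for i in range(size)]  # one slice per worker
    else:
        chunks = None

    chunk = comm.scatter(chunks, root=0)    # each rank receives its slice
    partial = sum(x * x for x in chunk)     # stand-in for the real crunch
    totals = comm.gather(partial, root=0)   # partial results flow back

    if rank == 0:
        print(sum(totals))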

Pre-grid harvesting (1)

davidwr (791652) | more than 9 years ago | (#12980274)

Technology like this is interesting, but there are other ways to harvest spare CPU cycles.

Mainframes have used batch jobs that ran during the night forever.

10 years ago my college let you run batch jobs on any of the hundreds of Unix workstations. The catch is they would go to sleep when someone logged into the console and would only wake up when that person walked out. In theory this improved CPU utilization a lot. In practice it didn't, since there weren't that many people running jobs that took more than a few minutes to finish, i.e. almost everyone sat down at a terminal, logged in, did their thing, and logged out.

Heck, your PC probably does "janitorial services" like antivirus scanning, indexing, disk optimization, and other tasks when the CPU is relatively idle.

Re:Pre-grid harvesting (1)

Ziviyr (95582) | more than 9 years ago | (#12980476)

Heck, your PC probably does "janitorial services" like antivirus scanning, indexing, disk optimization, and other tasks when the CPU is relatively idle.

CPU is not a big issue for drive based processes like those.

Parallels to the ethanol debate (3, Interesting)

mc6809e (214243) | more than 9 years ago | (#12980280)

How much energy does it take to harvest the energy?

How many cycles does it take to harvest the idle cycles?

Is the balance positive or negative?

Virtual Dual Processing (1)

OsirisX11 (598587) | more than 9 years ago | (#12980287)

I apologize in advance as this question will clearly illustrate my ignorance of these types of programs.

Is it possible for two different CPU-reaping programs, such as Folding@Home and Seti@Home to be doing the same math problems in some instances?

Say, although ludacris, bear with me, they both wanted to use computer to find out 5 * 7, and they both wanted to know 7 * 9. Could one computer run those sets of instructions once, and distribute the result to both programs to do in essence, double the work?

Re:Virtual Dual Processing (0, Funny)

Anonymous Coward | more than 9 years ago | (#12980444)

Rapper Ludacris [wikipedia.org] is working on virtual dual processing? Doesn't really seem like his forte. ... Oh, you meant ludicrous [reference.com] . My mistake.

Re:Virtual Dual Processing (1)

OsirisX11 (598587) | more than 9 years ago | (#12980568)

Yepp.. I'm an idiot. :)

Re:Virtual Dual Processing (0)

Anonymous Coward | more than 9 years ago | (#12980528)

No. Also... ludacris?? You are an idiot.

Re:Virtual Dual Processing (1)

OsirisX11 (598587) | more than 9 years ago | (#12980589)

It was a typo, fucker. At least the guy above you was funny about it. Yeah, I made a mistake.

Re:Virtual Dual Processing (1, Informative)

Anonymous Coward | more than 9 years ago | (#12980554)

In theory? Yes, I'm pretty sure it's possible.

However, in practice, it's almost certainly more work than it's worth. You've got to have a LOT of code tracking what program wants what results, when it wants the results, etc. etc. Although it might work if they had large amounts of overlap, in other cases, chances are you'd spend a good deal more CPU power just doing the coordination than the sharing would save.
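In miniature, the sharing asked about here is just a memo table keyed by the subproblem; the coordination overhead the reply warns about is everything that would have to live around this dictionary in a real distributed setting:

    # Shared result cache: two "projects" asking for the same subproblem
    # pay for the computation only once.
    cache = {}
    computed = 0

    def multiply(a, b):
        global computed
        if (a, b) not in cache:
            computed += 1        # real work happens only on a cache miss
            cache[(a, b)] = a * b
        return cache[(a, b)]

    print(multiply(5, 7), multiply(7, 9))   # first project asks
    print(multiply(5, 7), multiply(7, 9))   # second asks the same questions
    print(computed)                         # 2, not 4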

Distributed computing less efficient (3, Interesting)

imstanny (722685) | more than 9 years ago | (#12980292)

Everyone arguing about the cost of buying dedicated machines versus distributing the same process to other computers is overlooking a very crucial point.

Distributing computing processes to third parties is much more inefficient. The workload has to be distributed in smaller packets, it has to be confirmed and rechecked more often, and the same work has to be done multiple times, because not everyone runs a dedicated machine or always has 'spare CPU cycles.'

I would agree that distributing the work load is cheaper in the long run, especially with an increase in the number of participants, but it is not a 1-to-1 cycle comparison, and therefore it is not necessarily 'that much cheaper', 'more efficient', or 'more prudent' for a research facility to rely on others for computing cycles.

Re:Distributed computing less efficient (1)

codeguy007 (179016) | more than 9 years ago | (#12980481)

I would agree that distributing the work load is cheaper in the long run

I think you have that backwards. Grid computing is cheaper up front because you don't have the expense of buying an extremely expensive serial supercomputer or a Beowulf cluster. But it requires more administration and isn't as efficient power-wise. Thus you can end up spending more in the long run, or just get nowhere near the same performance. (Unless you aren't paying the power bill for all the nodes.)

Grid computing makes sense for things like SETI@home or DNETC, where the cycles are donated. It also makes sense if you want to grid together big clusters, because it allows you to leverage the high performance of several clusters for a combined project. However, using your secretaries' computers at night really doesn't provide a great solution, though it may look good to PHBs and shareholders.

Condor? (0)

Anonymous Coward | more than 9 years ago | (#12980331)

Why not use condor [wisc.edu] ?

It seems like this article exists merely to hype the IBM clustering solution.

wear & tear associated with running at 100% cy (2)

espek (797676) | more than 9 years ago | (#12980414)

Is there "wear and tear" associated with running a computer at 100% CPU cycles all the time via one of these distributed computing programs like Folding@Home?

Will running these programs make my computer less reliable later? Shorten its productive life (2-3 years)?

I have a dual 2.0 Mac that I leave running all the time, because it also acts as my personal web server and because it's just easier to leave the computer on (not asleep) all the time. I run Folding@home because I believe in the science and research and know that my contributions actually help good science. But the idea of wear and tear on the machine has crossed my mind, and I want to know what the negatives of doing this to the machine are (besides having to pay for the electricity).

Re:wear & tear associated with running at 100% (1)

Ziviyr (95582) | more than 9 years ago | (#12980452)

Only concern I imagine is when those apps hit the drive (wear and tear on the one important piece of non-solid-state equipment a computer has).

Or if you have an unstable cooling system that extra heat pushes over the edge. The latter would be good to know about anyways. :-)

Re:wear & tear associated with running at 100% (1)

ericdano (113424) | more than 9 years ago | (#12980470)

Um, no. I've been running Setiathome on my Dual 450Mhz Pentium III server for years. Like 6 years.

I'd never use a G5 for a webserver. What a waste! Go build a CHEAP PC and slap Unix on it, and use that. Cheap PCs are good for that.

I stopped using Setiathome a couple of weeks ago when I tried to use the latest version of FreeBSD 4.11. Boinc, the new client, seems not to run at all. Never connects to the server, nada.....:-(

Re:wear & tear associated with running at 100% (1)

linsys (793123) | more than 9 years ago | (#12980472)

The short answer is yes. The long answer is YES, chances are it will shorten the productive life of your system.

As others have stated above, running your system at 100% will increase heat, etc. Computers are like anything else: run them hard enough and you will shorten the life span.

Re:wear & tear associated with running at 100% (1)

ergo98 (9391) | more than 9 years ago | (#12980598)

As others have stated above, running your system at 100% will increase heat, etc. Computers are like anything else: run them hard enough and you will shorten the life span.

I overclocked my Celeron 300a several years back to 450Mhz, requiring me to boost the voltage/power consumption, and thus heat. The common understanding was that the increased running heat increased silicon decay, but it does so at a rate that is irrelevant - e.g. that chip has long been sitting in my garage in a "gotta get rid of this" pile, so whether it would die in 5 more years rather than 10 is somewhat irrelevant.

Re:wear & tear associated with running at 100% (0)

Anonymous Coward | more than 9 years ago | (#12980742)

Several years ago I also bought a 300a and overclocked it to 450. The sales guy told me it would die within a year (probably a scare tactic), but the chip is still running just fine!

In fact, my sister inherited the machine, so with the amount of spyware/malware I'm sure she has installed, it's probably been worked pretty hard over the last several years.

Sorry. This is hardly news (2, Interesting)

Moderation abuser (184013) | more than 9 years ago | (#12980645)

Seriously. We're talking about literally a 30 year old idea. By now it should really be built into every OS sold. The default configuration for every machine put on a network should link it into the existing network queueing system that you all have running at your sites.

Cycle-Sharing (1)

blechx (767202) | more than 9 years ago | (#12980694)

Wouldn't it be neat to build a system for sharing idle CPU time?

Just imagine the enormous number of "wasted" cycles that could be utilized. If everyone that participated in this network installed software capable of computing certain things, maybe for rendering or compiling or whatever, then when you wanted something compiled you could just send out little chunks of work, and the global net of CPUs would compute them and send you back the results.

A ratio system could even be used to prevent misuse: the more cycles you contribute, the more you can use.

Are there networks like these already?
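The ratio system suggested above is a few lines of bookkeeping; a sketch only, since a real network would also need to verify contributions against cheaters:

    # Credit-based cycle sharing: consume no more than you contribute.
    class CycleLedger:
        def __init__(self):
            self.credit = {}   # peer -> cycles contributed minus consumed

        def contribute(self, peer, cycles):
            self.credit[peer] = self.credit.get(peer, 0) + cycles

        def request(self, peer, cycles):
            if self.credit.get(peer, 0) < cycles:
                return False   # ratio too low: contribute more first
            self.credit[peer] -= cycles
            return True

    ledger = CycleLedger()
    ledger.contribute("alice", 1000)
    print(ledger.request("alice", 400))   # True
    print(ledger.request("alice", 700))   # False: only 600 credits left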

Yup. (0)

Anonymous Coward | more than 9 years ago | (#12980696)

Been done before. Nowadays, the big thing is better power management. How 'bout scoring a little more for originality here: "Harvesting and Reusing Idle Computer Cycles with Blue LEDs and Multicolored Sugar Sparkles".

Mmmmm (0)

Francis85 (875901) | more than 9 years ago | (#12980785)

Yummah... idle CPUs.. mmmmm, I'll take four please, with extra CPU HLT topping, and a bit of thermal paste in the mix (drool)