Energy Star For Servers Falls Short

timothy posted more than 5 years ago | from the is-there-no-market-for-such-ratings? dept.


tsamsoniw writes "The newly released Energy Star requirements for servers may not prove very useful for companies shopping for the most energy-efficient machines on the market, InfoWorld reports. For starters, the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving. Also, the spec doesn't care whether a server's processors have one core or multiple cores — even though multi-core servers deliver more work per watt. Though this first version of Energy Star for servers isn't entirely without merit, the EPA needs to refine the spec to make it more meaningful."


Improved Version Coming Next Year (5, Informative)

1sockchuck (826398) | more than 5 years ago | (#28050993)

All fair criticisms, but it's a first step. The EPA plans to address many of the shortcomings of the current Energy Star for Servers program in an expanded Tier 2 spec [datacenterknowledge.com] that is scheduled to arrive in the fall of 2010. The update is intended to expand the program to include blade servers and servers with more than four processors.

Re:Improved Version Coming Next Year (-1)

timmarhy (659436) | more than 5 years ago | (#28051247)

no doubt they paid consultants thousands for this crap i could come up with in a weekend.

Re:Improved Version Coming Next Year (1)

NewWorldDan (899800) | more than 5 years ago | (#28055757)

No, the big problem is that it takes something complex, like specifying server hardware, and dumbs it down to a little sticker. When I evaluate servers, power consumption is relatively low on my list, following after reliability and performance. Still, I wish Dell/HP/IBM would do a better job of showing power consumption with their server specs. Not that I'm particularly concerned about being green, but I do need to account for the loads on my cooling systems, UPS, and backup generator.

Re:Improved Version Coming Next Year (2, Interesting)

TooMuchToDo (882796) | more than 5 years ago | (#28055869)

You're not the target market. My employer purchases tens of thousands of servers a year. One of our primary considerations is power efficiency. You know, total cost of ownership and all that jazz.

Re:Improved Version Coming Next Year (1)

Z00L00K (682162) | more than 5 years ago | (#28056299)

The work pattern of computers varies wildly, whether they're servers or workstations.

But many servers have long idle periods, and some run at very low load for long stretches, so in those cases the idle consumption factor is valid.

just power usage (0)

Anonymous Coward | more than 5 years ago | (#28051033)

Power usage at idle, half load & peak for different configs would be useful from the manufacturers. We don't need to mess this up with a potential 'computing work factor'.

Atom (5, Informative)

googlesmith123 (1546733) | more than 5 years ago | (#28051039)

Intel is releasing an Atom CPU for servers. It's not very powerful, but I reckon it has the highest performance per watt of anything out there.

Re:Atom (5, Interesting)

derGoldstein (1494129) | more than 5 years ago | (#28051187)

There's also the FAWN project [technologyreview.com] (also on /. [slashdot.org] )

Cores-per-die is not a valid metric, not with emerging prototypes that could drastically change how web content is served.

Re:Atom (2, Interesting)

mangu (126918) | more than 5 years ago | (#28051561)

There's also the FAWN project

That's a very interesting link; I had never heard of that. I wonder how it compares with CUDA for parallel numerical computation? The article mentions that they are considering using this concept for scientific computation.

Re:Atom (1)

derGoldstein (1494129) | more than 5 years ago | (#28053745)

The project's site is located here [cmu.edu] . There's quite a bit of information there (check out the first PDF [cmu.edu] at the bottom of the page).

nVidia's CUDA would have a drastically different method for parallelizing, as well as a fundamentally different instruction set, which I assume is more appropriate for heavy computation. The cores are on the same die, for one thing, and I'm willing to bet it's easier to program out of the box. Of course, I'm just inferring; I've never worked with the architecture.

Re:Atom (3, Insightful)

drinkypoo (153816) | more than 5 years ago | (#28052155)

FAWN is what Google is already doing. If you tried getting even cheaper compute nodes, you'd run into price-per-port problems making it all talk. There IS a form of this that works, though. It's called blade computing, and we do it already. Using a stack of 500 MHz Geodes is NOT an effective way to get work done. Turning off idle servers IS. Server consolidation IS. Using a stack of commodity systems IS sensible, but not super-gutless ones. You need significant computing power per network port.

Re:Atom (2, Informative)

derGoldstein (1494129) | more than 5 years ago | (#28053969)

Google is leveraging economy of scale with their cargo containers [slashdot.org]. The primary benefits are modularity and off-the-shelf components/interfaces.

However, if you look at power usage and use of space (which also translates into power, because of infrastructure costs), then for "shallow web servers" parallelizing even "weaker" nodes could yield a better bottom line.

Blade computing, specifically, is extremely expensive. The reason is simply that you're buying high-end components which are intended for customers with cash reserves. What Google does is use CPUs/motherboards/RAM/PSUs that are already sold on thin margins and distributed massively to a far broader audience. They've created their own modular, near-blade-density model, which is far cheaper and more robust (even if it does take up more space).

Re:Atom (1)

lukas84 (912874) | more than 5 years ago | (#28054631)

Yeah, but it's also important to note that Google's model makes a lot of sense if IT is _the_ core of your business.

But for a company where IT is just there to provide necessary infrastructure to keep their core business running, developing your own hardware like this makes very little sense.

Re:Atom (1)

Piranhaa (672441) | more than 5 years ago | (#28053651)

Since when did Atom have the highest performance per watt? People buy them because they are CHEAP and use only 3 watts. That doesn't mean they score high on the P:W scale. The Core 2 doesn't have that much higher a TDP than the Atom, yet has the option of increasing its draw if required. While some tasks will pin the Atom at 100% CPU, the Core 2 can do them with much less CPU utilization, get the job done way faster, and return to idle.

http://www.tomshardware.com/reviews/intel-atom-efficiency,2069-12.html [tomshardware.com]
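For what it's worth, here's a back-of-the-envelope sketch of that race-to-idle argument in Python; every wattage and runtime in it is invented for illustration, not a measured Atom or Core 2 figure:

    # Back-of-the-envelope race-to-idle comparison. All wattages and runtimes
    # below are invented for illustration, not measured Atom or Core 2 figures.
    def job_energy_ws(active_watts, active_seconds, idle_watts, idle_seconds):
        # Energy in watt-seconds (joules) for an active burst followed by idle time.
        return active_watts * active_seconds + idle_watts * idle_seconds

    WINDOW = 60  # compare both chips over the same 60-second window

    # Hypothetical slow, low-power chip: pinned at 100% for the whole window.
    slow_j = job_energy_ws(4, 60, 2, 0)
    # Hypothetical fast chip: finishes the same job in 5 s, then idles.
    fast_j = job_energy_ws(12, 5, 2, 55)

    print(f"slow chip: {slow_j} J, fast chip: {fast_j} J over {WINDOW} s")
    # Which chip wins depends entirely on these numbers, which is exactly why
    # an idle-only rating misses the picture.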

Re:Atom (1)

derGoldstein (1494129) | more than 5 years ago | (#28054113)

Check out Anandtech's Bench [anandtech.com] . Specifically, the new Atom 230 and 330.
It simply depends on what your task is.

Re:Atom (0)

Anonymous Coward | more than 5 years ago | (#28053731)

Intel is releasing an Atom CPU for servers. It's not very powerful, but I reckon it has the highest performance per watt of anything out there.

Tom's Hardware tested an underclocked AMD CPU; it delivered much more performance, but consumed less energy than the Atom.

Re:Atom (1)

BikeHelmet (1437881) | more than 5 years ago | (#28054681)

Nah, these guys [sicortex.com] have the highest performance per watt (excluding initial setup cost).

But they're taking the supercomputer angle rather than the server-farm angle. Unless you cache everything in RAM, your webserver won't have enough I/O. Not feasible for a company like Google, but potentially feasible for MMOs or even sites like /. (where you have more processing than disk I/O; heck, with terabytes of memory, just cache everything in RAM until the discussion is locked).

But honestly, I wouldn't want to deal with another architecture. There's too much good free software on x86.

Re:Atom (1)

TheRaven64 (641858) | more than 5 years ago | (#28054879)

An Atom CPU is, clock for clock, more than an order of magnitude more power hungry than an ARM CPU. I'd be very surprised if it does an order of magnitude more work per clock (and, yes, the Cortex A8 and A9 do have FPUs and vector units). Something like the OMAP3 draws under 250mW for the Cortex A8 core, the DSP core, the OpenGL ES 2 GPU core, 256MB of RAM and 512MB of flash under full load. The Atom draws closer to 2W for just the CPU core. Using Intel and power efficiency in the same sentence is laughable. Even their XScale (ARM-compatible) cores had appalling power consumption; they managed to get the clock rate up, but they did around 1/3 of the work per clock of their competitors at a much higher power budget, so that wasn't much of an achievement.

No, it isn't (5, Insightful)

Idaho (12907) | more than 5 years ago | (#28051067)

That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.

No, it's not. As usual, car analogies are stupid.

Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.

Data centers do charge for (actual) power usage, so of course the actual (typically 95th percentile) usage should be taken into account, but still it's a broken analogy.
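For reference, the 95th-percentile figure mentioned above is just the sample at or below which 95% of the interval measurements fall; a minimal nearest-rank sketch in Python, with invented power samples:

    # Nearest-rank 95th percentile over hypothetical 5-minute power samples (watts):
    # sort the samples and report the value at or below which 95% of them fall.
    samples = [180, 175, 190, 420, 185, 178, 300, 182, 181, 176,
               179, 184, 410, 183, 177, 186, 188, 174, 173, 350]

    def percentile_95(values):
        ordered = sorted(values)
        cutoff = max(int(0.95 * len(ordered)) - 1, 0)  # nearest-rank index
        return ordered[cutoff]

    print(f"95th percentile: {percentile_95(samples)} W (peak: {max(samples)} W)")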

Re:No, it isn't (5, Funny)

value_added (719364) | more than 5 years ago | (#28051257)

Well, it's less broken if you consider that in major metropolitan areas, cars do spend much of their time idling at traffic lights (typically with the air conditioning running), as well as on congested city streets and freeways. Then, of course, there are the drive-thrus for those too fat to get out of their cars. ;-)

As for car analogies generally being stupid, yeah, you're right. But so are most of the alternatives. The reason why "sound bites", for example, are preferable to hour-long analyses or 5,000-word flabby blog posts isn't that people don't want a full understanding; it's just that doing so is too much work. It's like having to evaluate a car purchase based on specifications instead of ... oh, wait.

Re:No, it isn't (2, Funny)

derGoldstein (1494129) | more than 5 years ago | (#28051361)

Well, it's less broken if you consider that in major metropolitan areas

It's less broken?

Listen here: either it works, or it's broken. There's no grey area here. I'm not going to buy an analogy and have it crap out on me when conditions become a bit sketchy. Reliability is key in this business -- if an analogy has any downtime, I'm liable for it. You might as well buy a car and expect it to...

ugh...

Re:No, it isn't (0)

Anonymous Coward | more than 5 years ago | (#28053943)

I find your lack of respect for ambiguous repair conditions TOTALLY UNACCEPTABLE.

Re:No, it isn't (2, Interesting)

derGoldstein (1494129) | more than 5 years ago | (#28051275)

Regardless of the analogy (they were probably just thinking "dumb it down because we consider the people who read infoworld -- our audience -- to be idiots"), the part about the idling time usually isn't the case. Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.
There are many computing tasks that aren't too time sensitive, and research projects can have considerable leeway in terms of when the final computation is done and the numbers need to be inserted into spreadsheets or whatever.

If there's low traffic during the weekend, get the machines to crunch data for some other purpose, otherwise they're not paying for themselves.

Possibly a better analogy for this would be "machine time scheduling" in a machine shop. You don't let the $200k CNC milling machine just sit there and take up space -- it cost too much. Find *something* for it to do.

By the way, your sig should say: "Every expression is true, for *any* given value of 'true'", IMHO.

Re:No, it isn't (2, Insightful)

Zerth (26112) | more than 5 years ago | (#28052947)

Unless that CNC is the chokepoint for your shop or doesn't interact with any other resources in your shop, it should sit idle some of the time. Otherwise you are just creating excess work-in-process inventory.

Re:No, it isn't (1)

derGoldstein (1494129) | more than 5 years ago | (#28053475)

I'm not entirely sure whether you were referring to the CNC analogy or an actual machine shop, so I'll assume both:

Analogy: You're always going to have exceptions, but if you can quickly re-task servers then there's no reason for them to sit still any of the time, unless you can't manage to find clients for your resource.

Literally a machine shop: I worked on a few projects involving automation and software/hardware interfaces, most of which required on-site installation. I don't know much about actually running such a place, and/or the economics involved, but I did catch one recurring pattern: an idle machine is money down the drain. The more expensive the machine, the tighter the schedule was, and the more "alerts" went off when it was not in use. I literally heard machinists complaining "but it's just sitting there!".
I should mention that none of these were product lines, they were all general-purpose, multi-client shops that did custom orders. A mass-production line with one purpose would obviously be a different case.

Re:No, it isn't (1)

Zerth (26112) | more than 5 years ago | (#28054281)

Yah, I was taking the literal case. :) If it is a job shop where the CNC is the only step on a piece, then sure, it should be running as much as possible.

But if it is part of a line, then even if the machine is expensive, it shouldn't be running near full time unless it is the slowest machine in the line. The only machine running near full utilization should be the bottleneck; every other step should have excess/unused capacity in relation to that bottleneck.

If you fall into the trap of "the machines are expensive, we must utilize each fully", your Work In Progress inventory will lock up all your cash flow and actually cost the entire company more, even when your department looks really cost-effective on paper. Either that, or you'll get bizarre waves of material, like the traffic jams that show up near underpasses where everyone "just taps" their brakes when the sun hits them in the eyes.

Re:No, it isn't (3, Informative)

ergo98 (9391) | more than 5 years ago | (#28053835)

Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.

I think you just imagined that.

Very, very, very, very (x4) few data centers do anything of the sort. And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.

Re:No, it isn't (1)

derGoldstein (1494129) | more than 5 years ago | (#28054191)

And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.

True. This is because of the warm and fluffy economy of ~2 years ago.

I said "often", not "most". And the amount will increase if they want to keep a roof over their heads.

Re:No, it isn't (2, Insightful)

mangu (126918) | more than 5 years ago | (#28051623)

Cars do not spend the majority of their time idling at traffic lights.

I live in a place with severe traffic congestion problems, you insensitive clod!

Seriously, I think the car analogy is not so bad here. Too many people drive in the inner city using cars designed for cruising in an open freeway. Consider this: if so many cars weren't used in congested traffic, where would traffic congestion come from?

Re:No, it isn't (2, Insightful)

MobyDisk (75490) | more than 5 years ago | (#28052207)

Actually, I disagree. The analogy is very good.

Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.

Neither statement is universally true.

Taxis, for example, may spend the majority of their time idling. So do big-city rush-hour commuters. And many servers idle 90% of the time, while others idle 10% of the time.

You can't make blanket statements about cars' idle time or computers' idle time, since it probably varies 10,000:1 based on the usage.

Re:No, it isn't (2, Insightful)

iamhassi (659463) | more than 5 years ago | (#28052425)

"That's like focusing on how much gas a vehicle consumes at stop lights"

No, it's not. As usual, car analogies are stupid.


I'd have to agree, bad analogy. MPG at stoplights is 0 for all cars since you're not moving. You'd have to come up with a whole new rating scheme if you wanted to determine how much gas a vehicle consumes at stop lights, like ounces consumed per hour while idling.

I'd say a better car analogy (if you must have one) would be to focus on what a vehicle gets on the highway only... and I mean all highway: fill-up, drive 60 until empty, fill-up again, compute mileage. Sure, it's a useful number, but it probably won't help you determine daily usage.

Re:No, it isn't (0)

Anonymous Coward | more than 5 years ago | (#28053447)

I'd have to agree, bad analogy. MPG at stoplights is 0 for all cars since you're not moving.
 

They're not talking about MPG, they're talking about fuel consumption. Miles has NOTHING to do with the analogy.

Re:No, it isn't (0)

Anonymous Coward | more than 5 years ago | (#28053423)

That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.

No, it's not. As usual, car analogies are stupid.

Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.

Data centers do charge for (actual) power usage, so of course the actual (typically 95th percentile) usage should be taken into account, but still it's a broken analogy.

And in the next sentence, they complain about how different types of processors are all measured the same:
"You can't compare a Ford Transit waiting at a red light to a Mondeo because the Transit could pull so much more".

Re:No, it isn't (1)

AmiMoJo (196126) | more than 5 years ago | (#28053891)

I guess you don't live in the same city as me then - I spend a lot of time getting exactly zero miles per gallon in traffic. Then again, we do have the longest traffic light waits in the country.

Last week I put down a deposit on a Mitsubishi with "stop and go". Basically it turns the engine off when you stop and put the handbrake on. It's clever enough not to do it if you have the wheels turned (i.e. waiting to turn into a road) or if the battery is low, etc.

Apparently BMW and Mercedes have something similar. I'll let you know in a year's time how well it works, but from what I read it's more than just a gimmick.

Re:No, it isn't (1)

HTH NE1 (675604) | more than 5 years ago | (#28059737)

That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.

Cars do not spend the majority of their time idling at traffic lights.

No, they spend it idling in their garage, and then you're less concerned with fuel efficiency than you are in their rate of production of carbon monoxide vs. its rate of escape from the garage.

Wait, what were we talking about again?

Yet Another Bogus Car Analogy (5, Insightful)

Brama (80257) | more than 5 years ago | (#28051083)

Comparing a server idling to a car in front of a red light is seriously wrong. Servers in general tend to spend a _lot_ more time idling than cars spend waiting at red lights. There'll always be servers that _do_ fully utilize their resources, but most of them will idle a lot. So it makes perfect sense to take that as a generic guideline.

Re:Yet Another Bogus Car Analogy (1)

Chrisq (894406) | more than 5 years ago | (#28051121)

This is certainly true of most servers, but is it true of virtualised servers in really big data centres? I would have thought that sizing, evening out of load, etc., would mean that there would be some level of constant use.

Re:Yet Another Bogus Car Analogy (2, Informative)

amorsen (7485) | more than 5 years ago | (#28051179)

It's hard to even away the intra-day variation. I work for a phone company for corporate customers only, and basically all calls happen between 7am and 6pm. We run batch tasks at night, but they can't compare to the load that customers put on the servers during the day. The addition of cell phone calls has given our servers a bit more to play with at night, at least.

I suppose we could try to sell excess capacity at night, but I doubt we could make enough to make up for the required extra staff and hardware. Everyone else has idle servers at night too, except for other time zones, and latency generally kills any ideas to utilize servers across time zones.

Anyway, idle power is free for us (we pay for peak, whether we use it or not), so from an economic perspective there's no point in optimizing for it. Marketing ourselves as energy-conscious is worth a bit, though, so we would get Energy Star compliant servers if the extra cost is small. So far we're focusing on reducing peak consumption, and in all modesty I think we're fairly good at it. (Our new 7600 routers ruin the score a bit. They suck too much juice.)

Re:Yet Another Bogus Car Analogy (1)

morgan_greywolf (835522) | more than 5 years ago | (#28051223)

This is certainly true of most servers, but is it true of virtualised servers in really big data centres?

No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw 4 servers into one physical box, the hardware will stay utilized. This means we need fewer physical boxes, which means we need less power.
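Roughly why that pencils out; the wattages here are assumptions for the sketch, not measurements:

    # Back-of-the-envelope consolidation arithmetic with assumed wattages:
    # four mostly idle physical boxes vs. one host carrying the same four workloads.
    IDLE_W, BUSY_W = 150, 250          # assumed per-box draw at idle / under load
    HOURS_PER_YEAR = 24 * 365

    standalone_kwh = 4 * IDLE_W * HOURS_PER_YEAR / 1000   # four idle boxes
    consolidated_kwh = BUSY_W * HOURS_PER_YEAR / 1000     # one busier host

    print(f"standalone: {standalone_kwh:.0f} kWh/yr, consolidated: {consolidated_kwh:.0f} kWh/yr")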

Re:Yet Another Bogus Car Analogy (1)

rawler (1005089) | more than 5 years ago | (#28051611)

OR, skip the overhead of virtualization, and use the Operating System.

From webster:
operating system : software that controls the operation of a computer and directs the processing of programs (as by assigning storage space in memory and controlling input and output functions)

The key here is program_s_, as opposed to program. A modern server operating system is designed to do most of the things that people are now cheering for virtualisation to do. Virtualisation solutions, however, will either evolve into a new operating system, or will lack some of the comforts of an operating system, such as multi-user security and shared memory optimisations.

One of the funniest uses of virtualization I've seen was one physical system running two virtual hosts, the first running httpd+tomcat, the second running only the underlying MySQL server used by the tomcat application. What would have been a simple "yum install mysql" was now split into double the maintenance, and crippled to roughly half the performance. (We ourselves ran the exact same software in a bare-OS configuration and achieved close to double the performance on slightly weaker hardware.)

Re:Yet Another Bogus Car Analogy (1)

morgan_greywolf (835522) | more than 5 years ago | (#28051943)

For the most part, I agree with you: I have seen some very stupid implementations. But we don't live in a homogeneous world.

One of the principal advantages of virtualization, however, is that the guest operating systems need not be the same OS. For example, you could have a LAMP stack running on one VM guest and an Exchange server running on another.

Furthermore, there are specific reasons why you might want at least the appearance of separated machines for each tier of an N-tier solution. Most of these aren't really technical, but political and economic, because bean counters like to think of different functions of a system as being completely separate from each other, and they think, for some reason, that having your MySQL server on a different physical (or virtual) machine than your Apache server is going to ease administration of both. Admittedly, it does simplify a few things, but not much.

There's also the security angle, but I'm not one of those people who think that virtual machine separation is the same as physical machine separation.

One technical reason is when you want to do "cloud" computing: you can bring as many virtual machines online as you need on demand, without much regard to the underlying hardware structure.

Re:Yet Another Bogus Car Analogy (1)

rawler (1005089) | more than 5 years ago | (#28059997)

Agreed, there are a lot of valid use-cases for host-level virtualization. Another one is testing, where you're able to set up really close-to-production systems for staging tests.

For the cross-OS problem, yeah, you will have to have a bunch of hosts, either physical or virtual, where virtual may save you some problems and give you others (the famous system-clock problem in time-critical apps, for example). The important thing here, I'd say, is to still try to keep the number of OS instances down, since each OS instance by itself will introduce some maintenance, no matter the hardware it's running on (monitoring log files, proactive and retroactive, security, etc.). I often see virtualisation promoters overlooking that aspect.

As for elastic-cloud-type applications, I would really like to see OS-level virtualization and traditional high-performance distributed computing compared to host virtualization.

Especially for scaling up many applications (not on-demand hosting of many small apps, of course; there host-level virtualization really shines), I would suspect more virtual hosts will do little to nothing, since the app must be written from the start with parallelism and scaling in mind, and when that's done, the networking code is a smaller problem.

Re:Yet Another Bogus Car Analogy (1)

TheRaven64 (641858) | more than 5 years ago | (#28055019)

Two words for you: Process migration. It is not well supported by most operating systems, but it is by most hypervisors.

Imagine you have 4 VMs that are normally idle, but at any given time two of them might be fully loaded. If you had physical machines, you'd need four computers. With VMs, you have two and live-migrate the two busy ones onto the two real machines. For bonus points, you can shut down one of the real machines when all four VMs are idle.

The best thing about this kind of solution is that it scales very well. The more VMs you have, the more likely it is that your load spikes will average out, with different VMs hitting high demand at different times. With more real machines, it's easier to shuffle the VMs around so that none of them are overloaded. Both Xen and VMware can do this kind of thing automatically. In some configurations, they will shut down unused nodes and boot them up when needed. Often you just keep one more machine running than you need, so you can spill capacity to it when demand starts to spike while you wait for the next ones to boot, but the policy is up to the admin.
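A toy sketch of that pack-and-keep-a-spare idea in Python; the capacity units, loads, and one-spare rule are all made up for illustration and are not how Xen or VMware actually implement their placement logic:

    import math

    # Toy policy: pack the current VM load onto as few hosts as possible and
    # keep one spare host powered on as headroom for sudden spikes.
    def hosts_to_power(vm_loads, host_capacity=100, spare_hosts=1):
        needed = max(1, math.ceil(sum(vm_loads) / host_capacity))
        return needed + spare_hosts

    print(hosts_to_power([5, 10, 8, 12]))    # quiet: all VMs fit on 1 host, +1 spare -> 2
    print(hosts_to_power([70, 80, 60, 90]))  # busy: 3 hosts of real load, +1 spare -> 4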

Re:Yet Another Bogus Car Analogy (1)

rawler (1005089) | more than 5 years ago | (#28059891)

Agreed, there are a few things that hypervisors do better than most OSes around. I'm not arguing against the use of host-level virtualization, I'm just questioning 90% of how I see it being used in practice.

As for live migration, it's mostly a question of scaling up, which I would, from a purely theoretical standpoint, assume app-level architecture does a lot better than emulated hardware, especially if the workload is I/O-intensive. Secondly, it's not really the most common use-case I see in the world, and by itself it's far from a case for host-level virtualization.

Re:Yet Another Bogus Car Analogy (1)

fast turtle (1118037) | more than 5 years ago | (#28052767)

Server Virtualization is like Car Pooling. It takes the empty seats in your car and puts a body into them. This means you're efficiently using not only your car but the highway to get people to work.

Re:Yet Another Bogus Car Analogy (2, Insightful)

Abcd1234 (188840) | more than 5 years ago | (#28053679)

No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw a 4 servers into one physical box, the hardware will stay utilized. This means we need fewer physical boxes, which means we need less power.

Except, of course, that those servers? Yeah, they're typically busy *at the same times*, because when they're busy, they're busy because people are working.

Personally, I'm extremely skeptical of the idea that virtualization means that overall utilization of hardware settles in at a higher median. My bet is that what you really see are larger swings. ie, all the servers on the virtualization box becoming busy during the same times (ie, work hours during the week), and then going largely idle during the same times (ie, at night and on weekends).

Re:Yet Another Bogus Car Analogy (1)

ergo98 (9391) | more than 5 years ago | (#28053919)

This is certainly true of most servers, but is it true of virtualised servers in really big data centres? I would have thought that sizing, evening of load, etc. would mean that there would be some level of constant use.

Outside of low-end web hosting, virtualization is still generally in its infancy (though I expect products like vSphere 4 to change things considerably).

And even in cases where multiple servers are virtualized onto one set of hardware, the candidates for virtualization tend to be extremely low utilization, so you end up with a box using a lot of memory and storage, but still averaging out at an incredibly low CPU load.

Re:Yet Another Bogus Car Analogy (1)

smoker2 (750216) | more than 5 years ago | (#28053625)

Not only that, but a car idling at the lights is using more fuel per revolution than in its most efficient mode, whereas a server at idle is using the least energy it can. A car is most energy efficient when doing 56 mph, but a server has no comparable sweet spot at, say, 65% load. So epic car analogy fail all round.

Re:Yet Another Bogus Car Analogy (1)

Quantumstate (1295210) | more than 5 years ago | (#28055069)

A car uses less fuel per unit time sitting at a traffic light than when moving, unless there is something badly wrong with the design of your car. It naturally uses more fuel per mile, because you aren't going anywhere. I would imagine that servers are similar: when they are sitting idle, useful work compared to power is terrible because no work is being done, whereas at full load more power is used but it is useful work, like when you are using your car to get somewhere.

Of course in both situations it is best to be removing the idle time because it is pure waste with nothing useful being done.

I'm looking forward to completing your training (3, Funny)

BadAnalogyGuy (945258) | more than 5 years ago | (#28051107)

the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving

In time, you will call *me* master.

Re:I'm looking forward to completing your training (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#28051115)

Meh.

Penis bird guy was funnier.

Re:I'm looking forward to completing your training (1)

MickyTheIdiot (1032226) | more than 5 years ago | (#28051395)

My name is Torgo. I take care of the place when the master is away.

(sorry. mention "master" and you get a Manos or a Dr. Who quote every time.)

This is a great v1 (3, Insightful)

sirwired (27582) | more than 5 years ago | (#28051239)

Speccing by idle power consumption was a great idea. How exactly was the EPA supposed to grade servers based on CPU "efficiency" when each CPU differs so much? Which of the bazillion CPU benchmarks out there do you choose? This would be a short trip into an epic flame war between vendors, meaning that the spec would never get passed. "Politics is the art of the possible"

Given that most servers spend almost all their time idle anyway, this could certainly be a big money and energy saver. If you ever stroll through an actual large datacenter, you can see, via HDD lights, that most of that gear just sits there all day long, doing little actual work. Certainly there are some servers lit up constantly, and virtualization will help to clean some of the idle servers up, but many shops don't do much virtualizing yet.

SirWired

Re:This is a great v1 (1)

amiga500 (935789) | more than 5 years ago | (#28052287)

I fully agree. With other EPA ratings, they compare similar-sized appliances with each other. Your dorm fridge's rating won't be compared to a full-size fridge, which could be quite a bit more efficient. The customer has probably figured out what size server they want to buy before they look at the energy ratings. If you've decided on the specs of your server, you can look at servers from several different companies who can provide you with similar hardware. At that point, if one has a better Energy Star rating than the others, it might influence your purchasing decisions.

Re:This is a great v1 (1)

Quantumstate (1295210) | more than 5 years ago | (#28054975)

Performance data is vital for something like this. I am not sure whether it is taken into account in any way, since the specification download is broken on the Energy Star website.

Basically, this is because if you have a server that can handle twice the load, then it can use twice as much idle power and be just as efficient as two low-performance servers. So the server's performance, although very hard to measure, is needed to make the rating anything other than worthless.

Servers spend a lot of time idle (1)

sirwired (27582) | more than 5 years ago | (#28055211)

Yes, under load, a server that can handle twice the work for the same power is twice as efficient, but very few servers outside of bazillion-dollar supercomputer clusters spend all their time under full load.

Also, either a machine is Energy Star stickered or it isn't. How do you decide on a standard load? Some boxes are I/O monsters; others have crappy I/O but fast CPUs. How do you decide which workload the Energy Star cert is based on?

Yes, it would be nice if the standard could work in performance somehow, but I wouldn't call it "worthless" because it doesn't.

SirWired

Re:Servers spend a lot of time idle (1)

Quantumstate (1295210) | more than 5 years ago | (#28066025)

Whether the server is under load or not is irrelevant as I attempted to point out in my previous post.

To clarify, I am working on the assumption that if you need the performance of one good server or two worse servers, then you will buy either the one good server or the two worse servers. Thus, when the servers are idle, you will still have one good server or two worse servers. So if the idle power of one good server is lower than the idle power of two worse servers, then you will use less power with the good one.
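The same point in numbers; both idle wattages and the idle fraction are assumptions for illustration:

    # You own the hardware either way and it idles most of the time, so the
    # fleet's idle draw largely decides the power bill. All figures assumed.
    one_fast_idle_w = 120        # assumed idle draw of the single faster server
    two_slow_idle_w = 2 * 90     # assumed combined idle draw of two slower servers
    idle_hours_per_year = 0.7 * 24 * 365   # assume ~70% of the year spent idle

    for label, watts in (("one fast server", one_fast_idle_w),
                         ("two slower servers", two_slow_idle_w)):
        print(f"{label}: {watts * idle_hours_per_year / 1000:.0f} kWh/yr just idling")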

What the thing does most of the time. (1)

DarkOx (621550) | more than 5 years ago | (#28051289)

the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights

While it would be better to include other metrics in a weighted average or something along with this, it's not entirely wrong. At least in the microcomputer world, most servers operate when businesses do. They may not, in the majority of businesses, be utilized even all of that time. Virtualization is helping to reduce idle time on machines, but the way I figure it, even VM hosts are likely to be idle more than they are not. In large enterprises these figures are different, given time zones and global footprints, although if you're multinational you probably have multiple datacenters which host local services and put the numbers back in line somewhat there as well. I would wager that of the total number of microcomputer servers out there, most are owned by small to medium businesses, simply because most businesses are in the SMB class.

That means the machines run all the time but are probably idle all but eight to ten hours of the twenty-four hours in a day, and on only five of the seven days in a week. That is roughly 29% of the time in use; the rest is idle time. So efficiency at idle is going to be the driving measure.
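The arithmetic behind that figure, for anyone who wants to check it:

    # In use 8-10 hours a day, 5 of 7 days; idle the rest of the week.
    total_hours_per_week = 24 * 7
    for busy_hours_per_day in (8, 10):
        in_use = busy_hours_per_day * 5 / total_hours_per_week
        print(f"{busy_hours_per_day} h/day: {in_use:.1%} in use")
    # prints 23.8% and 29.8% -- call it roughly a quarter to a third of the week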

mod 04 (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28051569)

previously thought ques7ions, then

How to pass... (1)

Bert64 (520050) | more than 5 years ago | (#28051615)

Build a server with asymmetric processors...
Something like an Atom for idle use, and a bunch of quad cores that get activated when you actually do anything... Configure the disks to shut off when idle etc...

How to save energy (1)

benwiggy (1262536) | more than 5 years ago | (#28051633)

energy consumption at various levels of utilization.

Think of the energy saving, if you just said "use". Particularly if you utilize that word a lot.

Rather see ratings similar to EPA MPG (1)

iamhigh (1252742) | more than 5 years ago | (#28052035)

I mean, why try to make a broad, all-encompassing standard for energy efficiency and slap a sticker on the ones that "pass"? This works well for a product that is relatively simple, like a washer, dryer, or water heater, but I think a better idea would be to have Dell post a 188/304 number on each server. The low number is the power draw when idle; the high is the power draw when running some standard load-test software.

Re:Rather see ratings similar to EPA MPG (0)

Anonymous Coward | more than 5 years ago | (#28052953)

A single number won't do it. Power consumption when idle is a good start, but to accurately show the power use under load, you'd need something like the SPECpower 2008 graph, showing the energy consumed for each work unit compared to the benchmark score (i.e. the number of work units per second). Otherwise, it would be very hard to compare a 16-way VIA Nano blade center with a server with a single quad-core Xeon.
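A minimal sketch of the kind of numbers such a graph boils down to; the throughput and wattage figures are invented, and this only loosely mirrors how SPECpower reports an overall ops-per-watt ratio:

    # Throughput per watt at several load levels, plus an overall ratio.
    measurements = [
        # (target load, operations per second, average watts) -- invented figures
        (1.00, 120000, 310),
        (0.50,  60000, 220),
        (0.10,  12000, 150),
        (0.00,      0, 130),   # active idle
    ]

    for load, ops, watts in measurements:
        print(f"load {load:4.0%}: {ops / watts:7.1f} ops/s per watt")

    total_ops = sum(ops for _, ops, _ in measurements)
    total_watts = sum(watts for _, _, watts in measurements)
    print(f"overall: {total_ops / total_watts:.1f} ops/s per watt")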

Re:Rather see ratings similar to EPA MPG (1)

iamhigh (1252742) | more than 5 years ago | (#28053187)

Perhaps the results of the load-testing software could be required to be disclosed; then we would see that for x watts you got y computing power. I agree that you have to take computational power together with electricity used. All I am saying is I would rather be provided the raw numbers and allowed to make that call, rather than have some agency try to slap a sticker on it labeling it power efficient. Anyone with a geek card will be able to weigh these numbers pretty easily to get a good result (you were able to prove that by your argument, and you didn't even need actual numbers to know it), especially those that will be making purchasing decisions for a data center (I hope!).

Contradiction? (1)

pig-power (1069288) | more than 5 years ago | (#28052105)

Putting "Energy Star" and "useful" in the same sentence?

Atom Servers (1)

kenh (9056) | more than 5 years ago | (#28052785)

SGI had an Atom-based supercomputer on the drawing board: http://www.pcmag.com/article2/0,2817,2334887,00.asp [pcmag.com]

Quote:

"The key to the concept, SGI said, was its Kelvin cooling technology, which could pack 10,000 cores into a single rack. Combining the Atom processor with the Kelvin technology could generate seven times better memory performance per watt than a single-rack X86 cluster. Molecule could also process 20,000 concurrent threads, forty times more than the rack, and 15 terabytes/s of memory performance, SGI said."

Supermicro makes a nice server MB with a dual-core Atom 330 CPU:

http://www.pcmag.com/article2/0,2817,2346555,00.asp [pcmag.com]

Quote:

"The X7SLA-L platform from Supermicro is designed around the Atom 230, a single-core chip from Atom that consumes just four watts. The server itself packs four SATA ports with RAID 0, 1, 5 and 10, along with seven USB 2.0 interfaces, 2 Gbytes of DDR2 memory, Intel GMA 950 graphics and a Gigabit Ethernet port. The more robust X7SLA-H uses a dual-core Intel Atom 330 processor, and doubles the Gigagbit Ethernet ports, adding an additional USB and serial connector as well."

Mfg. website: http://supermicro.com/products/motherboard/ATOM/945/X7SLA.cfm?typ=H [supermicro.com]

Just use VMware's DPM (2, Informative)

acoustix (123925) | more than 5 years ago | (#28053189)

VMware Distributed Power Management [youtube.com]

Supposedly it will cut your server power usage by 50%.

Hey. Wait a minute with the criticism. (1)

flynth (1553275) | more than 5 years ago | (#28058705)

I used to run a little server at home. Then I got an electricity bill for £400. Now the server is off. It would be very useful to me to be able to compare servers' power usage while idling, as that is what my server was doing 90% of the time.
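For a sense of scale, a rough sketch with an assumed idle wattage and electricity tariff (not the poster's actual numbers):

    # Rough annual cost of leaving a small server on 24/7.
    idle_watts = 150             # assumed average draw
    price_per_kwh_gbp = 0.12     # assumed tariff
    annual_kwh = idle_watts * 24 * 365 / 1000
    print(f"~{annual_kwh:.0f} kWh/year, roughly £{annual_kwh * price_per_kwh_gbp:.0f}")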

another solution altogether? (1)

carrathanatos (1182509) | more than 5 years ago | (#28060795)

It seems people are already hard at work creating a better solution to this, basically allowing servers to run more efficiently than if they were in a standard rack...

http://datacenterjournal.com/index.php?option=com_content&task=view&id=2620&Itemid=43 [datacenterjournal.com]

http://www.missioncriticalmagazine.com/CDA/Articles/Products/BNP_GUID_9-5-2006_A_10000000000000564830 [missioncri...gazine.com]

http://www.youtube.com/watch?v=knTHr8BQ8rc [youtube.com]

http://www.nerdsociety.com/2008/09/24/interview-with-spear-co-founder/ [nerdsociety.com]

Maybe you'll find it as interesting as I do.