
HP Looks To Improve Power Management Coordination

Zonk posted more than 6 years ago | from the saving-a-little-bit-of-juice-all-the-time dept.


tringtring writes "Computerworld reports on an HP Labs researcher who foretells a future in which power management features will be built into the processor, memory, server, software and cooling systems. Coordination will be paramount. 'What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?' This future is the vision of Parthasarathy Ranganathan, the man behind the 'No Power Struggles' project at Hewlett-Packard. Power management systems will have to operate holistically, without one component conflicting with another, Ranganathan says. Ranganathan is just one of many researchers at the tech industry's biggest labs working on how future data centers will handle increasing demands for processing capability and energy efficiency while simplifying IT."

63 comments

Amen. (4, Insightful)

BronsCon (927697) | more than 6 years ago | (#22619292)

My 10 year old HP laptop gets 5hr 45min on a freshly charged battery. The one I'm sitting at right now barely gets 2hr. It's about time they get back to where they were.

Re:Amen. (4, Insightful)

dreamchaser (49529) | more than 6 years ago | (#22619368)

So you have 10 times the computing power (to be conservative) but only about a third of the battery life of your old unit. It's called a tradeoff. You can't compare apples to oranges.

Re:Amen. (1)

Anonymous Coward | more than 6 years ago | (#22619432)

Yes I can. Oranges are orange and they have a thick outer covering that it isn't a good idea to eat - plus they're easier to break into sections without knives, once the covering is off. Apples, on the other hand, are usually red, yellow, or green, and eating the skin is OK... not to mention that they aren't conveniently set up so that once the skin's off, you can break them up into sections without the help of a knife. Apples have a bit of a sweet flavor, but oranges are typically sweeter.

See? It wasn't that hard.

Re:Amen. (1)

BKX (5066) | more than 6 years ago | (#22626138)

Man you must have sucky apples wherever you live. Michigan Fujis are typically much, much sweeter than the usual California navel oranges we get around here. The oranges are typically a bit sour as well, while the Fujis are only slightly tart. Now Pink Lady apples, on the other hand, are very tart, and yet still sweet. Other, more lame, apples like Granny Smith and McIntosh are typically less sweet and less crunchy.

There, now I've both compared apples to oranges AND apples to apples. LOL

Re:Amen. (3, Insightful)

BronsCon (927697) | more than 6 years ago | (#22619438)

I also have a battery with twice the volume and a chemistry with 4x the energy density of the old laptop's battery. Not to mention that this battery is much newer and in better overall condition. I have 8x the battery capacity and ~3x the CPU power (1600MHz vs 475MHz). Factor in that the older laptop has an internal floppy drive as well as a DVD drive (the newer laptop lacks the floppy drive), plus a hard disk that draws 5 watts more than the one in the newer laptop. I should be seeing nearly three times the battery life by your logic.

You made the assumption that I had a 4750MHz CPU, the same peripherals, the same size battery with the same battery chemistry, and that similar peripherals use the same amount of power. You also failed to account for the power management systems present in current laptops, which did not exist 10 years ago. Yet another thing you failed to account for is the supposed increase in efficiency (and decrease in overall power consumption) claimed by PC manufacturers, especially with regard to laptops. You even forgot to account for the age of the battery: 10 years vs. a week-old warranty replacement of a less-than-nine-month-old battery.

I have a battery with 8x the capacity in a system with less hardware and a supposedly more efficient CPU which is only about 3x faster, components which claim lower power consumption, and overall better power management than my 10-year-old laptop from the same manufacturer. Why am I seeing 1/3 the battery life of the old system rather than the 3x increase logic and mathematics tell me I should be seeing?
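Written out as a quick back-of-envelope sketch (the ratios are the rough assumptions stated above, treating power draw as roughly proportional to CPU speed; purely illustrative, not measurements):

    # Back-of-envelope check of the ratios claimed above.
    old_runtime_h = 5.75        # 5 hr 45 min on the 10-year-old laptop
    capacity_ratio = 8.0        # assumption from above: ~8x the battery capacity
    power_draw_ratio = 3.0      # rough assumption: ~3x the power draw, treating
                                # draw as roughly proportional to CPU speed

    expected_new_runtime_h = old_runtime_h * capacity_ratio / power_draw_ratio
    print(f"Expected: ~{expected_new_runtime_h:.0f} h")   # ~15 h, i.e. nearly 3x the old runtime
    print("Observed: ~2 h")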

Someone, somewhere, is lying and it's not me.

Oh, and... first post! :)

Re:Amen. (0)

Anonymous Coward | more than 6 years ago | (#22619662)

Isn't it the screen that takes up a significant amount of energy in a laptop? What kind of resolution and brightness do you have on the 10 year old one compared to the current?

Re:Amen. (1)

Spatial (1235392) | more than 6 years ago | (#22620110)

The clock speed doesn't matter unless you're comparing the same architectures, which they are not. The performance differential is not what you said.

Re:Amen. (1)

rm999 (775449) | more than 6 years ago | (#22620594)

"You made the assumption that I had a 4750Mhz CPU"

You made the assumption that to be 10x faster, a CPU needs to run at 10x the clock speed. The top Intel CPU on this chart is 3200 MHz, and is 10x faster than the bottom (2800 MHz).

Remember - Moore's law is about transistor density, not transistor speed.

Re:Amen. (1)

Beliskner (566513) | more than 6 years ago | (#22622228)

Why am I seeing 1/3 the battery life of the old system rather than the 3x increase logic and mathematics tell me I should be seeing?
It's the processor; you have to power that huge pipeline. It's also the motherboard. And for a quad core, heck, that's 95W.

Re:Amen. (1)

maxume (22995) | more than 6 years ago | (#22623106)

By volume, lithium-ion batteries only have 1-2x the energy density of NiMH:

http://en.wikipedia.org/wiki/Rechargeable_battery#Battery_types [wikipedia.org]

(and closer to 3x by mass). It would be better to compare the stated capacities, rather than your assumptions.

I imagine the newer screen is also faster and brighter, both of which increase power draw (LED backlights improve brightness per watt, though, so if one of those is involved...). So you aren't lying, but you aren't being very careful.

Re:Amen. (0, Troll)

casualsax3 (875131) | more than 6 years ago | (#22619396)

Where is that, a 33MHz chip and a drive that spins at 500 RPM? All for TWICE the battery life? No thanks.

Re:Amen. (5, Insightful)

Anonymous Coward | more than 6 years ago | (#22619484)

The real question is: what the hell is software doing with all these resources? Why is it always on the shoulders of hardware to improve power specs? I have an idea: how about not requiring billions of processor cycles to support the 12 layers of indirection, redirection, abstraction, obfuscation, and 12 megs of NOPs just to change the color of an icon? It is mind-boggling to think about what a modern processor does; I suspect most of it is crud left over from poor software decisions that we must drag around for decades.


I mean a Commodore 64 running GEOS can move a mouse pointer, show icons and have graphical text editing, all on a 1 MHz 64K 8 bit machine. Extrapolating linearly, which I think I am allowed to do, a 33MHz version of a C64 should be easily able to handle higher resolutions, more colors, etc. much much more efficiently and still be competitive as far as basic tasks go. You don't need a dual core 64 bit 2GHz processor to display text or images... yet modern computers still take perceptible time to display a new window, etc... What is the CPU doing? What kind of demented software is this?

Re:Amen. (1)

DamageLabs (980310) | more than 6 years ago | (#22621398)

There is absolutely nothing that can be done about this now. Software and abstractions are a lost cause.

In the whole picture, hardware is just another layer of abstraction, built of more interacting layers. But today's hardware comes from orders of magnitude fewer suppliers than software, and it is much more tightly controlled and built to spec.

Another thing: hardware engineers are usually taught in universities. Software "engineers" are usually not.

Re:Amen. (1)

jfim (1167051) | more than 6 years ago | (#22621682)

Another thing: hardware engineers are usually taught in universities. Software "engineers" are usually not.

This depends on where you are. In Canada, the title of engineer is protected by law (see Wikipedia [wikipedia.org] or Engineers Canada on MCSEs [engineerscanada.ca]).

As for abstractions, they allow things that were simply impossible before. Abstractions allow tuning a design on criteria such as maintainability, extensibility, supportability, etc. Yes, making software more maintainable can reduce performance, but it also reduces the maintenance cost. Would you rather pay more for software that has fewer features but is faster?

Re:Amen. (1)

cgenman (325138) | more than 6 years ago | (#22624042)

Would you rather pay more for software that has fewer features but is faster?

I'd rather pay more for software that had the same number of features but fewer years of crufty hack layered upon crufty hack.

Quite simply, we're talking about Windows here (and maybe Norton). Mac OS 7 did a great job of providing both abstraction and speed in a maintainable environment on a 68030: a chip so slow that you wouldn't notice it if it were working as a co-processor on a modern machine.

Vista, on the other hand, requires a pretty beefy on-board graphics chip to do anything at all, and renders every window as a 3D object. It also pulls in tons of useless side-board crap that probably renders in its own wonky and inefficient scripting language. And, from everything I've heard out of Redmond, has grown into an unmaintainable and unsupportable mess anyway.

Right now, the biggest efficiencies to computer speed seem to lie within the realm of software. Specifically, getting rid of the years of bad decisions and bad hacks that are stealing inordinate amounts of processor time and making it impossible to deliver on promised feature upgrades. We currently have processors that are capable of crunching tens of thousands of times more numbers than could have been done in the 68030 days. Where is that power going? Throwing more hardware at the problem isn't the most efficient way of effecting a solution, especially if the problem is preventing necessary code updates.

Re:Amen. (1)

jfim (1167051) | more than 6 years ago | (#22633748)

I'd rather pay more for software that had the same number of features but fewer years of crufty hack layered upon crufty hack.

You seem to be downplaying the costs incurred when throwing away working code to write new code. Non-trivial code takes a lot of time and effort to build. For example, let's look at Mozilla [wikipedia.org]. The Wikipedia article mentions the decision to scrap the codebase somewhere in 1998. When did the 1.0 version of Mozilla come out? 2002, four years later.

It clearly is not a viable option for commercial software to release nothing for a couple of years just because "we're rewriting the code", for potential gains that may or may not exist, unless maintenance is simply too costly compared to the cost of rebuilding. However, since OSS does not need to meet any kind of revenue expectation, it can do such a thing.

Quite simply, we're talking about Windows here (and maybe Norton). Mac OS 7 did a great job of providing both abstraction and speed in a maintainable environment on a 68030: a chip so slow that you wouldn't notice it if it were working as a co-processor on a modern machine.

OS 7 does not even have preemptive multitasking, instead relying on cooperative multitasking, just like Windows 3.1 did. I'll take your Motorola 68030 and raise you a 386.

Vista, on the other hand, requires a pretty beefy on-board graphics chip to do anything at all, and renders every window as a 3D object. It also pulls in tons of useless side-board crap that probably renders in its own wonky and inefficient scripting language. And, from everything I've heard out of Redmond, has grown into an unmaintainable and unsupportable mess anyway.

Not if you disable Aero; it'll fall back to the old framebuffer approach. As for the second argument, are you talking about SideShow?

Larger software takes more effort to maintain per LOC than smaller software. Research proves this point over and over again. What would you want, an OS that can only run "Hello World"? I'm sure it will be very maintainable, but not very useful. Since we're pulling random hearsay, I heard that it's mostly a management debacle, with poor interactions between the different groups building the various parts of Windows.

Right now, the biggest efficiencies to computer speed seem to lie within the realm of software. Specifically, getting rid of the years of bad decisions and bad hacks that are stealing inordinate amounts of processor time and making it impossible to deliver on promised feature upgrades. We currently have processors that are capable of crunching tens of thousands of times more numbers than could have been done in the 68030 days. Where is that power going? Throwing more hardware at the problem isn't the most efficient way of effecting a solution, especially if the problem is preventing necessary code updates.

We also have more features, which are delivered faster than before and implemented by smaller teams (for the same feature). Compare the amount of work and the performance when implementing a dynamic website nowadays to what was available a decade ago. A decade ago, for high-performance dynamic websites, you had to code it in C/C++ using NSAPI/ISAPI. Nowadays, you just take Ruby on Rails or whatever the framework of the day is and build it faster. Sure, it won't run as fast, but it also costs way less to build and has more features to boot.

Throwing more hardware at the problem was, historically, a better choice. I don't hear about a lot of people doing assembly nowadays, though I hear a lot of people writing maintainable code in Java/.NET/Ruby/Python/whatever. Might not be as fast as raw assembly, but it sure comes out quicker. Whether this trend will remain in the face of multicore architectures is still uncertain, however.

Re:Amen. (2, Insightful)

SubComdTaco (1199449) | more than 6 years ago | (#22619666)

All they have to do is look at the work being done on the XOs by OLPC, because that is exactly what they are doing to get their extra-long battery life.

Re:Amen. (1)

Urza9814 (883915) | more than 6 years ago | (#22619760)

What kind of laptop is it? My girlfriend always used a PowerBook and got about 2 hours with it... then she had to get a Dell and was amazed to discover it got 6+ hours on a charge. To which I responded, '...yeah... your Mac didn't?'

Re:Amen. (1)

BronsCon (927697) | more than 6 years ago | (#22619808)

Both are HP. I thought I made that clear in my original post. I should also state that the CPU is less than 2x as fast as the older laptop's when running on battery (it scales back to 800MHz), and I have the backlight set to drop to 40% when running on battery. Wireless is configured to drop from 54Mbps to 24Mbps and halve its transmit power when running on battery.

The 10 year old laptop does none of this, everything runs at full power, full brightness, full speed, all the time.

Re:Amen. (2, Funny)

robogun (466062) | more than 6 years ago | (#22620092)

Wait till you plug in an HP All In One printer. You'll get 15 desktop icons and a bunch of Taskbar quick launch icons. With 30 new high priority processes using half your CPU and all your memory, your battery life will drop to minutes, assuming your machine even meets the OS requirements.

I would not recommend letting HP write power management software.

What the hell are HP selling now? (4, Funny)

rde (17364) | more than 6 years ago | (#22619296)

"What happens if you turn all these elements on at the same time?" the principal research scientist at HP Labs asks. "How do I make sure that the system doesn't explode?"

That's certainly a worry for me. The last thing I want when I turn on a "processor, memory, server, software and cooling systems" is for the system to explode. Being a dedicated slashdotter, and therefore Linux user, I have little worry that the software will cause any manner of combustion event, but I'd never really considered the dangers of using a processor and memory at the same time. I was thinking of getting more RAM, but given that I'm already running a dual-core, perhaps I should hold off on the extra gig until I hear from HP.

Re:What the hell are HP selling now? (0)

Anonymous Coward | more than 6 years ago | (#22619636)

Being a dedicated slashdotter, and therefore Linux user
Actually, I bet most slashdotters run Windows exclusively. They run Windows, while they piss and moan about everything Microsoft (or M$, in their vernacular) does, and then proclaim how much better Linux is. This is despite the fact that most have never actually used Linux, and they secretly long to ask for help getting that LiveCD to boot, but their tween pride prevents them.

Hey, I calls 'em as I sees 'em.

Re:What the hell are HP selling now? (1)

strabes (1075839) | more than 6 years ago | (#22619812)

Getting a live CD to boot is not hard... unless you're booting Feisty with an ATI card not supported by the radeon driver (all the Mobility Radeons and some others). I really don't understand how people could be too prideful to ask for help. When I started using Linux (Ubuntu) I shamelessly asked for help or searched the internets for even the smallest of problems or questions (fstab, xorg.conf, other easy stuff).

Re:What the hell are HP selling now? (0)

Anonymous Coward | more than 6 years ago | (#22619850)

Yeah, I know, live CDs aren't hard to use. I was making fun of the average slashdotter by saying they couldn't do something easy.

WOW (1)

zoomshorts (137587) | more than 6 years ago | (#22619700)

Stealing your thoughts right from mid-stream, I quote: "That's certainly a worry for me. The last thing I want when I turn on a 'processor, memory, server, software and cooling systems' is for the system to explode. Being a dedicated slashdotter, and therefore Linux user." That LINUX user crack is really uncalled for, under these circumstances :)

Can't mod me down. (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#22619376)

Why do anonymous comments start out at -1 now?

Re:Can't mod me down. (0)

Anonymous Coward | more than 6 years ago | (#22619544)

Because we AC's are cowardly assholes who do not deserve anything better? Post under a fucking real id or stop bitching about it.

Re:Can't mod me down. (0)

Anonymous Coward | more than 6 years ago | (#22619638)

Yeah, you tell em.... AC! Yeah!

Re:Can't mod me down. (0)

Anonymous Coward | more than 6 years ago | (#22620510)

Fuck yeah, buddy, I will! Rollin' hard with my AC Mancock, slappin' ya'll bitches around!

coordination... between brands? (1)

aleph42 (1082389) | more than 6 years ago | (#22619380)

I can see the arguments between brands already:
"-Your chip is sucking all the power and making mine look bad!
  -No, yours is!"

I mean, we have enough problems with benchmarking as it is; I can't see how they would make that kind of "coordination" work when not all the pieces of the computer are from the same brand. Sure, you can test which component draws the most power, but they can always say the others aren't sending enough info, etc...

Re:coordination... between brands? (1)

ILuvRamen (1026668) | more than 6 years ago | (#22619916)

I don't care about squabbling over who's better; I care about proprietary, not-easily-replaceable parts. Whatever moron at HP said "hey, let's start putting specialized hardware in our computers" should be fired. It's like Dell's crap that you can't replace with standard parts, so that they can charge 3x the real price for replacement parts bought from them directly. Guess how much a replacement motherboard was for a 6-year-old Dell Dimension? $130! (I bought one used on eBay for $50, though.)

Really more of an Ask Slashdot (1, Funny)

Anonymous Coward | more than 6 years ago | (#22619384)

'What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?'

Re:Really more of an Ask Slashdot (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#22619708)

Really more of an Ask Slashdot (Score:-1, Funny)
by Anonymous Coward on Sun Mar 02, '08 08:44 PM (#22619384)

Starting Score: -1 points
Moderation 0
    100% Funny

Extra 'Funny' Modifier 0 (Edit)
Total Score: -1
WTF is going on with the moderation system? How can there be "Score: -1, Funny" with moderation 0 at 100% Funny? And since when is the starting score -1 for an AC? (Hell, I can't even seem to post anonymously while logged in anymore. "Post anonymously" is checked as I submit this, but it's not anonymous in the preview...) Is "Funny" now some kind of comment tag?

Re:Really more of an Ask Slashdot (0)

Anonymous Coward | more than 6 years ago | (#22619796)

Would seem so, yes. Maybe his IP is flagged or something.

Hint: step 1 is user-control (2, Interesting)

Spazmania (174582) | more than 6 years ago | (#22619524)

Step 1 is user control for turning up the cooling features. If the user determines that the fans should run faster, then the fans should run faster, regardless of what the "holistic" system thinks.

Seriously, this is the single biggest problem with the current HP DL360. The fans turn down to 30% and the memory overheats. A simple BIOS option to set the minimum fan speed to 60% would solve this.
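Something along these lines is all the policy would need to be (an illustrative sketch; the thresholds are made up and this is not the DL360's actual iLO/BIOS logic):

    # Illustrative fan policy: never drop below a user-configured minimum duty
    # cycle, and ramp up only when the hottest sensor demands it.

    USER_MIN_DUTY = 60    # percent; what the admin would set in the BIOS
    TARGET_TEMP_C = 70    # start ramping above this
    MAX_TEMP_C = 85       # run flat out at or above this

    def desired_duty(temp_c: float) -> int:
        """Map the hottest sensor reading to a fan duty cycle, never below the user minimum."""
        if temp_c >= MAX_TEMP_C:
            return 100
        if temp_c <= TARGET_TEMP_C:
            return USER_MIN_DUTY
        span = (temp_c - TARGET_TEMP_C) / (MAX_TEMP_C - TARGET_TEMP_C)
        return max(USER_MIN_DUTY, round(USER_MIN_DUTY + span * (100 - USER_MIN_DUTY)))

    if __name__ == "__main__":
        for t in (45, 68, 75, 82, 90):   # sample "hottest sensor" readings
            print(f"{t} C -> fan at {desired_duty(t)}%")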

Automatic is better (3, Interesting)

EmbeddedJanitor (597831) | more than 6 years ago | (#22619626)

The crap design you mention is just that: a crap design. It is possible to make a good automatic design.

How many cars these days have manual chokes, advance/retard, mixture settings etc? None. They are all automatic. Give a user a knob and they will fiddle with it and break the system.

Re:Automatic is better (1)

IHC Navistar (967161) | more than 6 years ago | (#22621468)

The downside of not having those "manual" systems is that the user, no matter how well-versed they are, cannot adjust the system to do what they want.

Yes, your analogy is very valid for the Average Joe in terms of cars, but when a real user needs to make their car or truck do more, they have no way of doing it. If I want to give my truck more gear ratios for better mileage, I just slap on an over/underdrive. Plus, automatic isn't always better, as is the case with four-wheel drive.

The very real, AND HORRIBLE, downside of automatic systems in cars is that when one control "module" fails or goes out of range, the whole system, or part of the system, will usually shut down or become unavailable.

Prime example: Electronic steering. When the steering control module or fuse fails, there is no movement of the front wheels. Manual steering does not have this problem, and "power assisted" steering will still work, albeit a little harder, even with the engine dead. Plus, both manual and assisted steering will still work even if the system is shorted, as they still use manual input instead of electronic input. Also, control modules are usually hideously expensive and require special installation procedures. Increasingly, it is possible for a module to short-circuit, destroying not only itself but other modules nearby (a control module in my 1997 Ford Explorer shorted and destroyed itself and two other modules nearby. $600 per module!). Jump-starting a car with a dead battery can also destroy modules. Modules also will not tolerate the extremes mechanical parts can. Too much heat on an integrated ignition coil will make the coil fail in no time; an alternator will continue to run. Too much voltage through a steering module will fry it, causing loss of steering ability, instead of a blown fuse and stiffer steering.

Manually adjusted controls will give a faster response and last longer, but they require the user to pay more attention to preventive maintenance. That is something almost no car owner has the patience for. Automatic parts don't last as long and give a slower response, due to the need for the input to be processed by the ECU, but they require no preventive maintenance or care. With mechanical parts, parts can be swapped in or out as the job requires (one aftermarket part will almost always span the entire spectrum of potential applications), whereas electronic parts require reprogramming by a technician, or replacement if the part cannot handle the specifications required. A power surge in an "automatic" car will usually be a death knell for any electronics inside, whereas in a "manual" car you'll just need to replace a fuse. Plus, electronic throttles, now seemingly the standard, give POSITIVELY CRAPPY control over engine speed. Couple an electronic throttle with an automatic transmission and an experienced user will be fighting to keep the engine at one speed and the transmission from jumping into different gears. Plus, when an auto trans jumps into a higher gear, especially on an incline, engine RPMs will rocket up and possibly damage the engine. The operator then takes his foot off the throttle to reduce RPMs, making the trans shift back down a gear but losing road speed. It's a giant headache.

So, automatic isn't necessarily better. It's just easier and cheaper..... unless you know what you are doing. Then it's a pain in the ass.

Solution in search of a problem... (1)

drdanny_orig (585847) | more than 6 years ago | (#22619564)

What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?
I can't say as I've ever worried about that.

Although it seems bizarre (4, Informative)

zappepcs (820751) | more than 6 years ago | (#22619578)

There is something to this. In a data center, if you have a brownout or full power drop, the strain on power systems when restoring power is what can only be described as epic.

When you take a 1400-amp backup system and drop it up and down like a yo-yo in a lightning storm, stress tends to bring out the worst of Murphy's Law. If all the components in a data center were orchestrated, that could be mitigated. It could be mitigated into nearly 'not a worry' status.

Monitors? Low priority in most cases. Redundant supplies? In some cases, bring them up separately. Cooling fans could be delayed by some seconds depending on usage. It may seem like negligible power, but on startup each system will draw its max current, and when all of them do at the same instant, the peak draw can be overwhelming. In fact, computers themselves could bring up hardware in an orchestrated manner to reduce the startup surge.
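As a toy illustration of that kind of orchestration (the loads and current figures are made up; the point is packing startups under an inrush budget rather than flipping everything on in the same instant):

    # Greedy scheduler: pack each load into the earliest time slot whose total
    # inrush (startup) current stays under a budget. Figures are illustrative.

    def schedule_startup(loads, budget_amps, slot_seconds=2):
        """Return a dict of load name -> startup delay in seconds."""
        slots = []   # each slot is (total_amps, [names])
        plan = {}
        for name, inrush in sorted(loads.items(), key=lambda kv: -kv[1]):
            for i, (total, names) in enumerate(slots):
                if total + inrush <= budget_amps:
                    slots[i] = (total + inrush, names + [name])
                    plan[name] = i * slot_seconds
                    break
            else:
                slots.append((inrush, [name]))
                plan[name] = (len(slots) - 1) * slot_seconds
        return plan

    loads = {"disk-shelf-1": 40, "disk-shelf-2": 40, "blade-chassis": 60,
             "cooling-fans": 30, "core-switch": 15, "monitors": 5}
    print(schedule_startup(loads, budget_amps=80))
    # {'blade-chassis': 0, 'disk-shelf-1': 2, 'disk-shelf-2': 2,
    #  'cooling-fans': 4, 'core-switch': 0, 'monitors': 0}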

In addition to this, by adding power management, it's possible to reduce overall data center power use. If you monitored temps and turned off fans when not needed: less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way that, as an example, the unused ports on a quad NIC card could be powered off after configuration... NICs could be the last thing to be powered up.

This type of design is practically rocket science. If you look at systems that go into space you will see that they count every milliamp of current draw and manage it with precision. Power use is a big concern for space craft.

Re:Although it seems bizarre (1)

ClamIAm (926466) | more than 6 years ago | (#22619754)

If all the components in a data center were orchestrated

Aw, now I want power redundancy systems that play the 1812 Overture as they fight epic brownout conditions. That would be sweet. Although, it would use a bit more power...

Re:Although it seems bizarre (1)

zappepcs (820751) | more than 6 years ago | (#22619794)

Believe it or not, I like to have audio indication on many systems. I am at the point now where hearing certain sounds in conjunction with other events lets me know instinctively what is happening. I'm reasonably certain that one or two power outages would produce a symphony of things going on with your proposal, and that symphony would make it easier to determine what is happening than reading several thousand kilobytes of log files. I like the idea. Even just knowing by audible signal when something 'not normal' has happened would be hugely cool :) especially in a Googleplex-sized data center blackout.

Re:Although it seems bizarre (1)

Shadow-isoHunt (1014539) | more than 6 years ago | (#22620422)

NICs could be the last thing to be powered up.
Breaking WoL.

Re:Although it seems bizarre (1)

TheThiefMaster (992038) | more than 6 years ago | (#22621596)

What does WoL matter when the machine is already being powered up?

Re:Although it seems bizarre (0)

Anonymous Coward | more than 6 years ago | (#22624668)

Well, you can't exactly use WoL if the NIC has no power in the first place. You're saying the NIC should be killed after it receives a WoL packet, or something like that? The NIC uses next to no power at idle, man.

Re:Although it seems bizarre (1)

Beliskner (566513) | more than 6 years ago | (#22622244)

In addition to this, by adding power management, it's possible to reduce overall data center power use. If you monitored temps and turned off fans when not needed: less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way that, as an example, the unused ports on a quad NIC card could be powered off after configuration... NICs could be the last thing to be powered up.

If heat and energy usage were that much of a problem then the laws of capitalism dictate that all the World's datacentres would be located in Antarctica.

also go to a dc power bus for the data center (1, Interesting)

Joe The Dragon (967727) | more than 6 years ago | (#22619642)

Dropping the AC-to-DC PSU in each system and replacing them with DC-to-DC ones will cut heat and power use.

Re:also go to a dc power bus for the data center (1)

Detritus (11846) | more than 6 years ago | (#22622332)

How so? You've replaced the AC input to the power supply with a DC input, at a lower voltage and with increased transmission losses. Whatever the input, it gets converted into AC in the voltage regulator.
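Which way the numbers actually fall depends on the conversion stages and the distribution voltage you assume. A trivial worked example (every efficiency figure here is a hypothetical placeholder, not a measurement of any real gear):

    # Hypothetical comparison of conversion chains; all efficiencies are
    # assumptions for illustration only.

    def chain(*stage_efficiencies):
        """Overall efficiency is the product of the individual stage efficiencies."""
        total = 1.0
        for eff in stage_efficiencies:
            total *= eff
        return total

    # Conventional: facility UPS (AC->DC->AC), then an AC->DC PSU in every server.
    ac_path = chain(0.92, 0.90)
    # DC bus: one large AC->DC rectifier, then a DC->DC converter in every server.
    dc_path = chain(0.95, 0.93)

    print(f"AC distribution chain: {ac_path:.1%}")   # roughly 83%
    print(f"DC distribution chain: {dc_path:.1%}")   # roughly 88%

    # The objection above: a lower bus voltage means higher current for the same
    # power, and resistive distribution loss grows as I^2 * R, which can eat back
    # part of that gain unless the DC bus voltage is kept reasonably high.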

Duh. (1)

palegray.net (1195047) | more than 6 years ago | (#22619688)

How do I make sure that the system doesn't explode?
Don't connect the detonating charge, unless of course you just can't handle another reboot...

Re:Duh. (1)

calebt3 (1098475) | more than 6 years ago | (#22619764)

Don't connect the detonating charge...
Or in this case, don't complete the electrical circuit if you have a Sony battery.

stupid (0)

Anonymous Coward | more than 6 years ago | (#22619776)

Processors already have power management features.

But here's the problem: power is used to produce performance, especially in terms of reducing latency.

So it's easy to reduce power when you don't need performance. But the assumption is that you have the high-power device because you need the performance sometimes. So the problem becomes how to meet the performance constraints of the system while using less power. I mean, if power is the solution to the problem, then if you use less power, at some point you're not solving the problem.
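To make that concrete, here is a crude sketch in the spirit of an on-demand frequency governor: pick the lowest clock that still meets the performance constraint. The frequency table and the linear scaling model are illustrative assumptions, not any real CPU:

    # Choose the slowest frequency whose predicted utilization stays under a
    # target, assuming work scales roughly linearly with clock speed.

    FREQS_MHZ = [800, 1200, 1600, 2000]   # hypothetical P-states
    TARGET_UTIL = 0.80                    # performance constraint: keep headroom

    def pick_frequency(current_util: float, current_freq: int) -> int:
        demand_mhz = current_util * current_freq     # "work" currently being done
        for f in FREQS_MHZ:                          # lowest first = least power
            if demand_mhz / f <= TARGET_UTIL:
                return f
        return FREQS_MHZ[-1]                         # saturated: run flat out

    print(pick_frequency(0.30, 2000))   # light load -> 800
    print(pick_frequency(0.95, 1600))   # heavy load -> 2000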

The real solution will be to have software plus hardware solve the problems, using devices that draw less power.

Now, if you're just talking about average power, and you're willing to design and build your data center for peaks and let that peak delivery solve the problem when necessary, then sure, that's a solution.

If you're just talking about not producing systems that allow everything to peak (memory, I/O, CPU) simultaneously, and forcing software to be tested on such throttled subsystems, then yeah, that's already in devices.

I guess I don't get what the point is here. The future will be more of the same. We're still not as hot as the 10-foot-tall vacuum tubes that were produced to provide radar for the DEW line. Now that was a problem that was hard to solve.

Making sure everyone gets to see the latest YouTube thingee... that's easy.

Nothing to see here, move along please.

-l

Enterprise users will pay big $ for this (3, Insightful)

ejoe_mac (560743) | more than 6 years ago | (#22619800)

So when you're purchasing power from the grid and you're metered not on usage but on peak draw, this will save you a LOT of money. Coordinating the power-on of a number of systems that draw far more at startup than during normal operation (think turning on 100 laser printers all at the same time!) is exactly the point.
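A toy illustration of the peak-demand effect (the wattages are ballpark assumptions, not measurements of any particular printer):

    # 100 laser printers warming up together vs. in small staggered batches.
    PRINTERS = 100
    WARMUP_W = 900      # fuser warm-up draw per printer (assumed)
    IDLE_W = 20         # steady idle draw per printer (assumed)
    BATCH = 10          # printers switched on per batch interval when staggered

    peak_all_at_once = PRINTERS * WARMUP_W
    # When staggered, only one batch is ever warming up at a time; earlier
    # batches have already dropped to idle draw.
    peak_staggered = BATCH * WARMUP_W + (PRINTERS - BATCH) * IDLE_W

    print(f"All at once: {peak_all_at_once / 1000:.1f} kW peak")   # 90.0 kW
    print(f"Staggered:   {peak_staggered / 1000:.1f} kW peak")     # 10.8 kW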

RAID controllers have been doing this for years (1)

davidwr (791652) | more than 6 years ago | (#22619844)

You do not want to spin up a bunch of motors all at once if you can avoid it.

BIOS & the OS (1)

Midnight Thunder (17205) | more than 6 years ago | (#22624040)

Improving power management in the hardware is a good idea, but then again the problem is probably simpler. Currently, PCs use a power management protocol that doesn't seem to be easy to understand and that is, in certain cases, just badly implemented. It really gets on my nerves when I buy a new motherboard and there is no way to get the system to go to sleep. I am not sure whether to blame this on Windows, the hardware, or a bad specification.

Can anyone tell me whether EFI (replacement of BIOS), provides a better way of talking with the hardware for power management needs?

Re:BIOS & the OS (1)

Wesley Felter (138342) | more than 6 years ago | (#22626702)

Can anyone tell me whether EFI (replacement of BIOS), provides a better way of talking with the hardware for power management needs?
I think EFI still uses ACPI for power management, so it's the same old fail.