
Building the Green Data Center

CowboyNeal posted more than 6 years ago | from the bang-for-the-buck dept.

Earth 86

blackbearnh writes "O'Reilly News talked to Bill Coleman, former founder of BEA and current founder and CEO of Cassett Corporation, about the challenges involved in building more energy-efficient data centers. Coleman's company is trying to change the way resources in the data center are used, by more efficiently leveraging virtualization to utilize servers to a higher degree. In the interview, Coleman touches on this topic, but spends most of his time discussing how modern data centers grossly overcool and overdeploy hardware, leading to abysmal levels of efficiency."


First (anonymous) post! (0)

Anonymous Coward | more than 6 years ago | (#23886045)

This is why I purchase Sun servers for my organization: half the power of a Dell and equivalent performance.

Lean Code = Green Code (4, Interesting)

BigZaphod (12942) | more than 6 years ago | (#23886147)

Software has an impact, too. Messy, heavy code takes longer to run, takes more CPUs, etc. Imagine how much energy could be saved if there wasn't so much code bloat!

Re:Lean Code = Green Code (2, Insightful)

cp.tar (871488) | more than 6 years ago | (#23886213)

Software has an impact, too. Messy, heavy code takes longer to run, takes more CPUs, etc. Imagine how much energy could be saved if there wasn't so much code bloat!

So that means that servers should be built the Gentoo way, from scratch, using just the things you need, no more, no less.
How much does it cost to deploy such a server?
How much does it cost to pay someone qualified enough to do it properly?

The code bloat is paired with feature bloat. And the more features there are, the more you have to pick and choose -- or, if you cannot choose, support. Because your users will want them, more likely than not.

Now, cleaning up the world's code... sounds like great work. So great, in fact, that I doubt it will ever be undertaken, even if the whole world went open source.

Re:Lean Code = Green Code (1)

Colin Smith (2679) | more than 6 years ago | (#23886303)

So that means that servers should be built the Gentoo way, from scratch, using just the things you need, no more, no less.
How much does it cost to deploy such a server?
How much does it cost to pay someone qualified enough to do it properly?
Frankly anyone with half a brain can pretty much use mkinitrd to make such a server.

How much does it cost to hire 500 admins for thousands of machines rather than half a dozen? How much does electricity and AC cost?

Meh, no point explaining. The price of oil and the economics will do that job.

 

Re:Lean Code = Green Code (0)

Anonymous Coward | more than 6 years ago | (#23886803)

I doubt that loading libraries that you end up not using has any significant cost.

What you want to spend your time on is optimizing your application code (the one that is actually running), which is what your parent was talking about before you jumped in with "So that means...".

Re:Lean Code = Green Code (1)

gbjbaanb (229885) | more than 6 years ago | (#23887291)

Possibly not; perhaps he means that software should be built without the 'make it easy for the developer' features that modern languages contain. I mean, it's easy to write an app in a scripting language, but it will be bigger, slower, require a VM to host the script, use more memory (especially if it has a garbage collector), and so on.

There is a trend of saying that programmer productivity is everything, and if it requires faster computers with more RAM, then that's just too bad. I'm sure that one day, when we're paying massively more for electricity, a new 'killer app' will arrive that does what an existing one does but with much less resource usage. That day will be when everyone suddenly rediscovers lean programming systems written by professional programmers and built to last.

The same amount of features will be there (though, perhaps in dynamically loaded modules) but the way they're written will be much more efficient.

Re:Lean Code = Green Code (0)

Anonymous Coward | more than 6 years ago | (#23886293)

Indeed. Database optimization should be a priority in software development.

Re:Lean Code = Green Code (2, Funny)

frank_adrian314159 (469671) | more than 6 years ago | (#23888503)

Messy, heavy code takes longer to run, takes more CPUs, etc.

Do you guys have to bring Vista into every thread?

Re:Lean Code = Green Code (1)

rootooftheworld (1284968) | more than 6 years ago | (#23893387)

YES! ...Duh.

The outback (2, Interesting)

stainlesssteelpat (905359) | more than 6 years ago | (#23886177)

In all seriousness, I've often wondered why they don't just put server farms and data centres in the Australian desert -- or any desert, for that matter. Solar power galore (almost no cloud cover in central Oz), and if miners get paid big money to go live and work out there, surely IT guys would take big bucks to do it too, especially once you've cut out a lot of your overheads. Offer the work to residency applicants to cut the wage, even. Also, there are enough big mining outfits out that way that they would probably relish being able to outsource their IT needs.

Re:The outback (2, Insightful)

FooAtWFU (699187) | more than 6 years ago | (#23886225)

No one in the server farm business is going to try and break into the solar-power business. It's not their area of expertise. It's an entirely different sort of business altogether. If there were a ton of solar power stations littering the outback, or if someone enterprising were ready to put some up in the hopes of attracting power-hungry industries with cheap electricity, that'd be another thing. But I would imagine it's still a rather risky proposition, as far as things go.

Besides, the bandwidth and latency to Australia from the rest of the world... not the greatest.

Re:The outback (1)

GleeBot (1301227) | more than 6 years ago | (#23886363)

No one in the server farm business is going to try and break into the solar-power business.

Kinda like how the roof of Google's headquarters isn't covered in solar panels [google.com] ?

Maybe they don't want to get into the business of supplying solar panels, but there's plenty of interest in using renewable technology to try and lower energy costs. (Not to mention that even photovoltaic solar can help reduce cooling bills, because the insolation is being used to generate electricity instead of simply heating the interior.)

Re:The outback (1)

stainlesssteelpat (905359) | more than 6 years ago | (#23886441)

Precisely what I was getting at. Also, the latency/bandwidth thing is getting cleared up a little. There are new cables being laid that should ease the traffic somewhat, although I doubt it will have much impact on the domestic market. Getting through the 1024K/sec barrier would be nice on ADSL, though.

Re:The outback (0)

Anonymous Coward | more than 6 years ago | (#23886695)

Google is Special and has a lot to gain from publicity stunts such as these. Investors and potential employees and their hippie neighbors up here in San Francisco love them for it. They also have a few billion dollars sitting idle above and beyond the typical data center provider.

Re:The outback (1)

paulgrant (592593) | more than 6 years ago | (#23888153)

Dust is the problem. Ever try diagnosing a fault when the operating system is configured correctly, the power supply is operational, and (come to find out) a fine layer of micron-sized dust on the contacts of your graphics card has been degrading the signal just enough to crash the driver? Now multiply that by a thousand machines and you'll know why nobody is rushing out into the desert to build data centers.

Re:The outback (1)

jhw539 (982431) | more than 6 years ago | (#23889261)

While I think desert datacenters are a bad plan, dust is an easily solved issue. Filters are an incredibly mature technology, and so little outside air is brought in (basically just enough to keep the floor positively pressurized so nothing sneaks in through the cracks) that it's a non-issue. That said, the most efficient datacenter designs I see use 100% outside air (with appropriate low-face-velocity, low-fan-power filtration) much of the time to cool the space, but they're in temperate climes.

Re:The outback (1)

Bandman (86149) | more than 6 years ago | (#23886863)

I would guess a combination of difficulty and expense in

a) bandwidth
b) cooling
c) supplies
d) available utilities (i.e. running water, available healthcare)

He's missing real world experience (4, Informative)

afidel (530433) | more than 6 years ago | (#23886183)

He talks about turning off unused capacity like it's some future panacea, but HP and VMware have been doing it for a couple of years already. He also dismisses turning servers off as not being a big deal, but anyone who's run a datacenter knows that servers that have been running for years often fail when they are shut off. There are numerous physical reasons for this, from inrush current to bearing wear. A modern boot-from-SAN server is probably much less likely to fail at boot than older ones with DAS, but the chance is very much non-zero. Of course, with a good dynamic provisioning system a single host failure doesn't matter, because that new VM will just get spun up on a different host that's woken up.
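For a sense of the capacity math behind that kind of dynamic provisioning, here is a trivial sketch; the numbers and the one-spare-host policy are invented for illustration, not anything HP or VMware prescribe:

import math

def hosts_needed(vm_loads, per_host_capacity, spare_hosts=1):
    # Aggregate VM demand divided by usable capacity per host, plus fixed spares
    # so a single host failure (or a host that refuses to boot) doesn't hurt.
    return math.ceil(sum(vm_loads) / per_host_capacity) + spare_hosts

# e.g. 120 VMs averaging 0.4 cores each, 16 usable cores per host:
print(hosts_needed([0.4] * 120, per_host_capacity=16))  # 3 active hosts + 1 spare = 4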

Re:He's missing real world experience (2, Interesting)

symbolset (646467) | more than 6 years ago | (#23886423)

Of course with a good dynamic provisioning system a single host failure doesn't matter because that new VM will just get spun up on a different host that's woken up.

Bingo. A node is just a node. A decent control system will detect a node failing to come up, flag it for service and bring up another one. In some datacenters not designed for this sort of redundancy a server failure is a big deal where people have to come in on a holiday weekend. If you do it right the dead server just sits there until you get around to that rack in the regular course.

Re:He's missing real world experience (1)

Calinous (985536) | more than 6 years ago | (#23886559)

Or, as I've heard it said about Sun's "container server farm": only 90% of the servers are working at any time, each one that breaks is replaced by a previously unused spare, and when there are not enough spares left, the entire container is replaced at the customer site.

Re:He's missing real world experience (1)

rathaven (1253420) | more than 6 years ago | (#23886927)

Exactly! He's right: in the virtualisation scenario a single server is not important; your virtual servers are important. With virtualisation, if you've designed your clusters correctly, the servers that are brought back online have all their hardware checks done before your virtual servers start running on them anyway. If a dead server or a required cluster-aware service fails to start, there's no downtime; you're still running all your virtual servers on other hardware, just not as much of it, because less load has to be carried. All you lose by shutting down nodes is spare headroom capacity during the hours it is not required.

Re:He's missing real world experience (1)

Brian Gordon (987471) | more than 6 years ago | (#23886503)

...also, I'm sure server admins would be pretty wary of running their hardware near its thermal failure point instead of "grossly overcooling" it and being able to sleep at night.

Re:He's missing real world experience (2, Informative)

Calinous (985536) | more than 6 years ago | (#23886737)

This "grossly overcooling" business is done for several reasons:
      There are 18 Celsius just out from the cooling units, but there might be pockets of warmer air in the data warehouse (based on rack position and use).
      This "grossly overcooling" allows the servers to have a long duration of functionality when the air conditioning breaks.
      The PSUs are working better at lower temperatures (even if they are perfectly fine otherwise). Also, the cooling fans (plenty of them in thin servers) work easier at lower temperatures.
     

Re:He's missing real world experience (1)

GaryOlson (737642) | more than 6 years ago | (#23887471)

The article and this discussion also fail to address the financial and organizational problems inherent in purchasing and allocating systems. Virtualization and shared systems are not always politically and operationally feasible. The computing cloud is a great concept, but the implementation is complicated by people: discrete individuals with discrete goals and discrete financing methods. When cloud computing can provide simple and comprehensive chargebacks at an effectively granular level, then we can discuss removing excess capacity in physical systems.

You might as well say... (1)

symbolset (646467) | more than 6 years ago | (#23888937)

"We're a Windows shop." You only hint at your real concerns -- that license tracking and organizational inertia prevents it in your case. That's too bad for you.

The technology is obviously available and immensely powerful. Some will use it, some will shun it. In the corporate world which do you suppose is going to out-compete the other?

Re:He's missing real world experience (1)

afidel (530433) | more than 6 years ago | (#23889089)

This is an area where the Citrix acquisition of XenSource will help them, since Citrix realized the value of chargebacks way back in the MetaFrame XP days. My problem with a chargeback model is that it is hard to justify spare capacity in a chargeback environment, yet you must have it to provide effective fault tolerance. Do you overcharge a fixed amount for spare capacity, or a percentage, so that as an application's requirements grow you can keep up with the bigger spare hardware needed to accommodate it? And how do you sell it to the business if their budget is getting squeezed? (I know this applies to physical servers as well, but if you have capacity in the virtual cluster to fit their app, it's a lot harder to say no.)
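One simple way to fold the spare capacity into the chargeback, purely as an illustration (the rate, the 25% overhead and the core-hour unit are assumptions, not anything Citrix or VMware define):

def monthly_charge(core_hours, rate_per_core_hour, spare_overhead=0.25):
    # Bill measured usage plus a fixed percentage that funds the idle
    # failover headroom the cluster has to keep available.
    return core_hours * rate_per_core_hour * (1 + spare_overhead)

# 2 vCPUs for a 730-hour month at $0.05 per core-hour:
print(monthly_charge(2 * 730, 0.05))  # 91.25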

Re:He's missing real world experience (1)

symbolset (646467) | more than 6 years ago | (#23889887)

How do you sell it to the business if their budget is getting squeezed? (I know this applies to physical servers as well, but if you have capacity in the virtual cluster to fit their app, it's a lot harder to say no.)

Oh, you're looking at it from a salesman's point of view rather than a customer's. That can't be good for your customer. Since Xen is an open source project, Red Hat's new approach using KVM [cnet.com] could prove more interesting.

1 dual processor/8 core server running Oracle with in-memory cache option and support: roughly $200,000.

50 dual processor/8 core servers each running several VMs of postgresql with pgpool-II [mricon.com] and memcached [danga.com]: roughly $200,000. The freedom to PXE-boot a blank box into a replicant node faster than you can rack a box: priceless.

Depending on your customer's workload, one of these choices might be better than the other and vice versa. Now, which one are you going to recommend in every case?
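For what the memcached layer buys you in that second setup, here's a minimal read-through cache sketch (using the python-memcached and psycopg2 libraries; the table, key names and connection strings are invented for the example, and pgpool-II would simply sit behind the psycopg2 connection):

import memcache
import psycopg2

mc = memcache.Client(["127.0.0.1:11211"])
db = psycopg2.connect("dbname=app host=127.0.0.1")   # or point this at a pgpool-II endpoint

def get_user(user_id):
    key = "user:%d" % user_id
    row = mc.get(key)                      # try the cache farm first
    if row is None:
        cur = db.cursor()
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        cur.close()
        if row is not None:
            mc.set(key, row, time=300)     # cache for 5 minutes
    return row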

Hm, good summary (2, Funny)

MrMr (219533) | more than 6 years ago | (#23886187)

by more efficiently leveraging virtualization to utilize servers to a higher degree
I should have printed a fresh stack of these. [bullshitbingo.net]

Managed power distribution units (3, Interesting)

Colin Smith (2679) | more than 6 years ago | (#23886215)

Switch the machines off at the socket. You can do it using SNMP.
Monitor the average load on your machines, if too low, migrate everything off it and switch a machine off. If too high, switch one on.

Of course, it assumes you know how to create highly available, load balanced clusters. Automatic installations, network booting and all that. Not so difficult.
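To make that concrete, here is a minimal sketch of such a policy loop in Python. Everything specific here is an assumption for illustration: the PDU hostname, community string and load thresholds are invented, and the OID is the APC-style sPDUOutletCtl outlet control -- check your own PDU's MIB before trusting it. It shells out to the net-snmp snmpset tool and reads load averages over SSH:

import subprocess

PDU_HOST = "pdu1.example.com"          # hypothetical managed PDU
COMMUNITY = "private"                  # SNMP write community
OUTLET_CTL_OID = ".1.3.6.1.4.1.318.1.1.4.4.2.1.3"   # APC sPDUOutletCtl: 1 = on, 2 = off

def set_outlet(outlet, on):
    # Switch a single PDU outlet on or off via SNMP.
    value = "1" if on else "2"
    subprocess.run(["snmpset", "-v1", "-c", COMMUNITY, PDU_HOST,
                    OUTLET_CTL_OID + "." + str(outlet), "i", value], check=True)

def load_average(host):
    # One-minute load average, read over SSH (assumes key-based login).
    out = subprocess.run(["ssh", host, "cat /proc/loadavg"],
                         capture_output=True, text=True, check=True).stdout
    return float(out.split()[0])

def rebalance(hosts):
    # hosts: list of dicts like {"name": "node01", "outlet": 3, "powered": True}
    powered = [h for h in hosts if h["powered"]]
    loads = {h["name"]: load_average(h["name"]) for h in powered}
    avg = sum(loads.values()) / len(loads)
    if avg < 0.3 and len(powered) > 1:
        victim = min(powered, key=lambda h: loads[h["name"]])
        # ...migrate everything off `victim` here before cutting its power...
        set_outlet(victim["outlet"], on=False)
        victim["powered"] = False
    elif avg > 0.8:
        spare = next((h for h in hosts if not h["powered"]), None)
        if spare is not None:
            set_outlet(spare["outlet"], on=True)
            spare["powered"] = True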

 

Re:Managed power distribution units (1)

symbolset (646467) | more than 6 years ago | (#23886365)

This is actually getting remarkably easy for Linux clusters, and the help is coming from a bizarre source - LTSP.

I wrote a journal piece [slashdot.org] about it just recently. I'm setting this up for me and it's interesting.

People are doing some interesting stuff with LTSP -- call centers with IP softphones, render farms. Soon we may see entire infrastructure with redundant servers powering on to serve demand spikes and shutting off when not in use.

Diskless servers (1)

Colin Smith (2679) | more than 6 years ago | (#23886479)

We do something like this, but from scratch rather than using LTSP. It's really not difficult, just a slightly different way of looking at how an operating system and server application should work. Think botnet. It's a fundamental shift in the mathematics of computing infrastructure, from linear or worse to logarithmic.

 

Re:Diskless servers (1)

symbolset (646467) | more than 6 years ago | (#23886637)

I'm liking the LTSP model because I can do it without investing my time writing code. I intend to pull up an on demand render farm without writing a single line of code.

I am also interested in the potential of exploiting the unused resources of desktop computers to turn an entire organization into an on-demand compute cluster and/or distributed redundant storage. Joe the typist doesn't need a quad-core 4GB machine to draft a letter, but as long as he's got one we may as well do something useful under the screensaver that's playing on it 90% of the time.

But I would love to hear some details about your config. Are you PXE booting? How do you get the virtual machines to launch correctly? Is there a common package for the management piece?

Re:Diskless servers (1)

Bandman (86149) | more than 6 years ago | (#23886949)

Count me in too. I want config details as well

Re:Diskless servers (1)

orangesquid (79734) | more than 6 years ago | (#23887169)

Depending on the size of your organization, and how "corporate" the network is (are workstations in software lock-down?), you may have to spend a lot of time designing software that can ensure some level of code authenticity for anything you deploy. Otherwise, don't expect to get approval to run this on "most every" random workstation.

I had a project where machines would automatically download code and data sets to run as requested, synchronized by a central server. Three things stood in my way:

(a) limitations in the development environment made debugging difficult (MFC is such a buggy piece of crap!);

(b) the machines that originally took hours to run the simulations were eventually upgraded to bleeding-edge systems that ran them in the duration of a coffee break, so I would have had to do serious research into tight, fast synchronization and networking code to gain anything from a cluster, because the data blocks were very small and all results from one set had to be tallied before a new set could be deployed (another approach could have been to redesign the algorithm so part of the network could move on to a new data set while the tally ran, though depending on the tally some results would have to be thrown out);

(c) since the workstations were going into software lock-down during the latest refresh, a lot of additional time would have been needed to provide very strong security guarantees (my code md5summed all code modules, including the client itself, but someone could have written a hacked client that returned "the right" md5sums while running unsafe code, so a different scheme would have had to be designed).

Sorry for the run-ons, I'm a little busy/distracted recovering data on a server with failed disks :( I was *just about* to upgrade it to mirroring RAID, too (at least I did regular backups over the network, though).

Re:Diskless servers (1)

Lennie (16154) | more than 6 years ago | (#23892261)

Whenever I see things like this, I think we are moving more and more toward something like the ideas of Linux NOW [9fans.net].

Re:Diskless servers (1)

Colin Smith (2679) | more than 5 years ago | (#23901299)

Tiny base OS (Linux), booted from a PXE/TFTP server and running from ramdisk: networking, storage, SNMP, SSH, grid engine, the "botnet" client, and bugger all else. On top of that you run a VM host -- Xen, VMware, vserver or whatever fits your requirements. This is the infrastructure platform. It can be rolled out to anything that supports PXE, tens of thousands of machines if required. They can be functional literally as fast as machines can be fitted into racks.

Basically you don't touch a machine until it comes up and reports that it's ready. At that point you know you've got a host waiting to provide whichever service you're interested in. If it's an application server, you simply add it to the list and the load distribution system handles the migration of applications. When the average load is too low (below 1 per CPU, for instance), you pop a machine off the list, the controller tells the machine to get rid of its applications, and then you switch it off at the socket. It's all just scripting; there is no real coding required.

In terms of application scalability and redundancy, well, that depends on the application itself and how it's configured. DB clusters, for instance, are a bit of a pain to manage because physical distance from the other cluster machines severely reduces performance, so sometimes you have to specify which machines are able to run which applications.

It's not suitable for handling 5-minute peaks in load, since it takes between 15 and 30 minutes for a machine to configure itself, but for overnight and weekend troughs it works.
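To make the "pop a machine off the list" step concrete, here's a rough sketch of what that script could look like -- assuming Sun Grid Engine's qmod/qstat command-line tools (the "grid engine" mentioned above), a queue named all.q, and a hypothetical power_off_outlet() hook for the PDU:

import subprocess
import time

def drain_and_power_off(node, queue="all.q"):
    qinstance = queue + "@" + node
    # Disable the queue instance so Grid Engine stops scheduling new jobs onto the node.
    subprocess.run(["qmod", "-d", qinstance], check=True)
    # Poll until no running jobs remain on that queue instance.
    while True:
        out = subprocess.run(["qstat", "-s", "r", "-q", qinstance],
                             capture_output=True, text=True).stdout.strip()
        if not out:
            break
        time.sleep(300)
    # Node is idle: cut power at the PDU socket.
    power_off_outlet(node)

def power_off_outlet(node):
    # Placeholder hook: wire this to your managed PDU (e.g. via snmpset, as sketched earlier in the thread).
    raise NotImplementedError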

Re:Diskless servers (1)

symbolset (646467) | more than 5 years ago | (#23902713)

Thanks. That's the ticket. I assume if it takes 15-30 minutes to configure, you are downloading and chain booting a disk image. I suppose if I take that route I can preload the disk images on a spare server with one boot image that then puts the server back to sleep. Then when the load comes up the provisioned server can be awakened in short order.

It takes under a minute to bring up my clients because everything runs in the ramdisk so far.

I'd let the load get much lower -- maybe .5 on each cpu before I started killing off servers, but I suppose that's a good spot for a configurable parameter.

What's got me curious is how to make the management piece redundant and load balanced as well. I'll just have to work on it.

Re:Diskless servers (1)

Colin Smith (2679) | more than 5 years ago | (#23907517)

Thanks. That's the ticket. I assume if it takes 15-30 minutes to configure, you are downloading and chain booting a disk image
Kind of. The base OS boots and runs 100% from ramdisk; it takes about 15 seconds to download and maybe another 30 to boot, and it's about 100MB or so uncompressed. Doing it from scratch keeps the underlying infrastructure OS small. What takes the time is the application image. It usually has some data packaged with it, anything from a few hundred MB to gigabytes. The local storage is purely for the application packages or VM images while they're hosted on that machine.

What's got me curious is how to make the management piece redundant and load balanced as well. I'll just have to work on it.
Not sure which load balancing you're talking about. If you mean the control servers, think botnet; take a look at energymech or eggdrop. If you mean load balancing for the whole system, well, grid engine does the heavy lifting there.

Re:Managed power distribution units (1)

Calinous (985536) | more than 6 years ago | (#23886671)

What you're describing is shutting computers off and then waking them over the network. My own PSU (a 450W unit) draws more than 15W just from being connected to the mains, so I switch it off at the mains when not in use.
      Now, 15W of loss in a 500W PSU when it's off is a drop in the bucket - yet it might help a bit.
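For a rough sense of scale (assuming electricity at around $0.10/kWh, which is just a ballpark figure): 15 W of standby draw running all year is 15 W x 8760 h, or about 131 kWh, roughly $13 per year per box. Trivial for one PC, but it adds up across a rack or a whole data center.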

Re:Managed power distribution units (0)

Anonymous Coward | more than 6 years ago | (#23887801)

15 Watts is a lot for a shut-down (not sleeping) PC; are you sure there aren't other devices drawing current, and that the PSU is in good health? I've tested lots of them and never measured more than 5-6 Watts when the PC is off.

Re:Managed power distribution units (1)

Calinous (985536) | more than 6 years ago | (#23888717)

Mouse, monitor, and I think that would be all - I measured the power at the socket, not the PSU specifically.

Re:Managed power distribution units (1)

bobbozzo (622815) | more than 6 years ago | (#23889309)

The monitor is probably the culprit, not the computer.

Re:Managed power distribution units (1)

antirelic (1030688) | more than 6 years ago | (#23887679)

Am I missing something? Outside of a few very large organizations, isn't the operation of a data center separated from the equipment inside it? Don't most data centers rent out space to customers? If that's the case, then outside of increasing customers' bills for energy consumption, there is nothing a data center can do to change the way the customer does business. Not every customer is going to find it practical to have a managed virtual server environment, or be OK with allowing systems to be powered down.

The real challenge for data centers is going to be how they isolate cooling they provide to their customers. Cooling seems like a massive cost compared to all other aspects of a data center.

Re:Managed power distribution units (1)

gnuman99 (746007) | more than 6 years ago | (#23889623)

Depends on the application. Not everything fits in the LAMP, web server model.

Overcooling? (1, Interesting)

Anonymous Coward | more than 6 years ago | (#23886233)

I think this guy confuses heat and temperature. In datacenters, cooling costs are mostly proportional to the heat produced, and have little to do with the temperature you maintain in the steady state.

Re:Overcooling? (2, Informative)

jabuzz (182671) | more than 6 years ago | (#23886397)

And I think you don't understand thermodynamics either. Cooling to, say, 18 Celsius when you can happily get away with 25 Celsius will have a big impact on your cooling bill, even though you are getting rid of the same amount of heat.

Re:Overcooling? (1)

Calinous (985536) | more than 6 years ago | (#23886607)

This cooling thing is a thermal machine, and its best-case efficiency depends on both the absolute temperatures (inside and outside) and the temperature difference. That is, if it's 25 Celsius outside, you need roughly twice the power to cool to 15 Celsius as you do to cool to 20 Celsius.
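A quick ideal-case sanity check of that ratio, using the Carnot coefficient of performance (real chillers only approach this, so treat the numbers as illustrative):

def carnot_cop(t_cold_c, t_hot_c):
    # Ideal COP for a chiller rejecting heat at t_hot:
    # COP = T_cold / (T_hot - T_cold), temperatures in kelvin.
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# Rejecting heat to 25 C outdoor air:
print(carnot_cop(15, 25))  # ~28.8
print(carnot_cop(20, 25))  # ~58.6 -- roughly half the work per unit of heat removed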

Why cool systems? (2, Interesting)

rathaven (1253420) | more than 6 years ago | (#23886777)

Good point, however, since this is just thermodynamics - why do we actively cool systems? Managed properly the heat should be able to be utilised in ways far more effective than air conditioning. I think people often forget that air conditioning isn't actually a cooling solution if you take the whole picture. You are providing more energy and therefore more heat to make a small area temporarily cooler.

Re:Why cool systems? (1)

rathaven (1253420) | more than 6 years ago | (#23886789)

Forgive me - I seem to have had a case of grammar ineptitude. That should have read,"I think people often forget that air conditioning isn't actually a cooling solution - if you take the complete picture."

Re:Overcooling? (1)

oodaloop (1229816) | more than 6 years ago | (#23886469)

Exactly. Just maintain the servers at a couple of degrees above absolute zero. The heat you remove each day will be the same, and you get the added benefit of having superconductors! Just don't step in the liquified air on the server floor.

Four steps to a green data center (4, Funny)

harry666t (1062422) | more than 6 years ago | (#23886417)

1. Get a data center
2. Paint it green
3. ???
4. Cthulhu

Re:Four steps to a green data center (1)

stainlesssteelpat (905359) | more than 6 years ago | (#23886499)

Cthulhu. You've got to lay off this colour co-ordination habit [wikipedia.org]. You'll get paranoid.

how about leveraging GPUs for parallel/fft/matrix? (0)

Anonymous Coward | more than 6 years ago | (#23886517)

applications? Seems there have been several Linux clusters built up to do jobs that modern video cards could do orders of magnitude faster, like looking for intelligent signals from beyond, gravity waves, machine vision, etc. I know these are niche apps, but they're usually the type of apps clusters get applied to... no?


former founder (2, Insightful)

rpillala (583965) | more than 6 years ago | (#23886533)

How can you be a former founder of something? Someone else can't come along later and found it again can they?

Re:former founder (1)

kaizokuace (1082079) | more than 6 years ago | (#23888753)

money can buy history.

Re:former founder (1)

PheniciaBarimen (1302003) | more than 5 years ago | (#23904441)

Organizations die, and then people later realize that they weren't so bad and restart them with only a fraction of the original founders (if any).

lead worded oddly (1)

/dev/trash (182850) | more than 6 years ago | (#23886619)

He's no longer the founder of BEA? Who is then?

Xserves, xserves, and more xserves (-1, Troll)

Anonymous Coward | more than 6 years ago | (#23886725)

Because one Xserve can do the work of 3-5 Dells or HP servers, lots of SMBs are racking those up.

Plus, OS X Server is arguably 100% secure against remote attacks, so there is no worry about separating tasks onto different machines like on Linux or Windows, where you are risking it all to run your DB server on a DC.

Re:Xserves, xserves, and more xserves (1)

Bandman (86149) | more than 6 years ago | (#23886851)

Yeah, but the downside is that you've got to run OSX.

Don't get me wrong. OSX is a great operating system for a user. It's probably the best laptop OS in existence. I'm writing this comment on a Powerbook G4 right now, actually. But in the server room, OSX sucks if it has to interact with any non-OSX services.

The fact that they took things in Unix that had worked for 20 years and broke them, for no better reason than that they didn't fit their idea of how something should work, is asinine.

Granted, the recent releases have gotten better, but I was so burned by 10.2-10.3 that I literally have a pile of Mac Servers that I'm going to be selling on Ebay / Craigslist.

Re:Xserves, xserves, and more xserves (1)

symbolset (646467) | more than 6 years ago | (#23890061)

The very fact that they took things in Unix that had worked for 20 years and broke them for no good reason except they didn't fit their idea of how something should work is asinine.

Hey... It's been working for Microsoft.

Meet those who drive stuff to the ground (0)

Anonymous Coward | more than 6 years ago | (#23886787)

Overcooled? Has he calculated how much energy goes into replacing hardware? Does he know the rule of thumb that running 10 degrees Celsius cooler doubles the hardware's lifetime (in certain models)? We don't just love cool because it means you can overclock; it could very well decrease the energy use and carbon emissions of running your business.

Dell's tool (1)

Bandman (86149) | more than 6 years ago | (#23886799)

I've used Dell's Greenprint Calculator [dell.com] to determine usage in my racks pretty often.

It's got a nice interface and gives you all the energy information you need on their equipment, plus allows you to insert your own equipment's energy profile to calculate total usage.

It's very handy

Re:Dell's tool (1)

emj (15659) | more than 6 years ago | (#23997765)

Yes, this is a very good tool. I tried it when upgrading a couple of servers and was amazed at how much heat output memory modules have.

Good info sources on Green Data Centers (3, Informative)

1sockchuck (826398) | more than 6 years ago | (#23886835)

This is a huge topic, since so many different strategies are being brought to bear. For data center operators, energy efficiency is a business imperative since the power bills are soaring. Here are some sources offering ongoing reading about Green Data Centers:

The Green Data Center Blog [greenm3.com]
Data Center Knowledge [datacenterknowledge.com]
Groves Green IT [typepad.com]
The Big List of Green Technology Blogs [datacenterknowledge.com]

Northern Climates? (2, Interesting)

photon317 (208409) | more than 6 years ago | (#23887109)


What I've always wondered is why we don't build more datacenters in colder climates here in North America. Why put huge commercial datacenters in places like Dallas or San Diego (there are plenty in each) when you could place them in Canada or Alaska? In a cold enough climate, you could just about heatsink the racks to the outside ambient temperature and have little left to do for cooling. I suppose the downside is 20ms of extra latency to some places, and perhaps having to put more fiber and power infrastructure in a remote place. But surely in the long run the cooling savings would win, no?

Re:Northern Climates? (1)

Josef Meixner (1020161) | more than 6 years ago | (#23887969)

Because, I would guess, the other things get much more expensive. Few personnel would want to live in some remote Alaskan or Canadian village, so you will have to pay them more, if you can find them at all. Then you need a lot of power, and I somehow doubt that's available so easily either. Next is the problem of connectivity: a single connection is not exactly a good thing for a datacenter; you want redundancy. You also have to move the equipment, so whenever new hardware is needed you have to ship it to a remote place. And lastly, I would expect it to cost a lot to get experts out to a datacenter in the wild when something goes wrong and you need the people from the company that designed the cooling or electrical system (or any of the other systems you can't service yourself).

All together, I have doubts that putting datacenters in relatively remote areas would really help, and even in bigger cities some of these factors probably still have a noticeable effect on costs. Also, just having a heatsink won't work; you still need a way to carry the heat away from your equipment, so you don't save all of the cooling cost anyway.

Re:Northern Climates? (1)

Herger (48454) | more than 6 years ago | (#23888149)

I've wondered why they don't put datacenters in old textile industry centers like Lowell, MA and Augusta, GA. Both of these places have canals that once supplied the mills with running water that drove turbines. You could rebuild the turbines to generate electricity and draw water off the canal for cooling. Plus mill towns tend not to be too far away from fiber, if there isn't already enough capacity there.

If someone has a couple million in venture capital to spare, I would like to attempt a project like this; I used to live in Augusta, and they already successfully converted an old mill building into a self-sustaining (using water power) small business center and lofts [enterprisemill.com] . The neighboring Sibley Mill and King Mill properties are just waiting for new tenants.

Re:Northern Climates? (1)

jhw539 (982431) | more than 6 years ago | (#23889231)

Most big datacenters I've seen are sited largely based on the availability of power. Finding 20+ MW of unused capacity of adequate reliability is difficult, and it is expensive to have infrastructure at that scale built out just for your datacenter. I have heard of datacenters catching the 'green' bug when they ran out of power and were told tough luck - build your own damn plant then. The other issue is of course good feeds to the internet, although that seems to be coming up less and less as a problem.

Re:Northern Climates? (1)

gnuman99 (746007) | more than 6 years ago | (#23889731)

Yeah, like Manitoba Hydro.

http://hydro.mb.ca/ [hydro.mb.ca]

Winter gets you cold, cold, cold temperatures. Hydro power here costs you 5c/kWh, and much less for larger users. Want it even cheaper? Get up north to Thompson, closer to the source.

Yet, no large data centers here. And in a "town" of 600,000+ people.

Re:Northern Climates? (0)

Anonymous Coward | more than 6 years ago | (#23890817)

> But surely in the long run the cooling savings
> would win no?

Cheap cooling is a win.

But consider other requirements that may not be common in far northern locations:
    - access to redundant fiber paths (so that if a fiber conduit breaks in the middle of Alaska, you're not screwed for the days it takes to repair it)
    - cheap electricity (the electrical grid isn't that large/robust in Alaska vs. right next to a large hydro plant or a coal plant)
    - a pool of [not too expensive] local talent to be sysadmins to run the datacenter
    - cheap land, low property taxes, etc.

Re:Northern Climates? (1)

MrKane (804219) | more than 6 years ago | (#23892449)

You're not thinking of the planet! Imagine Google moved all their datacentres to the poles. The ice caps would be gone in a weekend!!!11!!

Re:Northern Climates? (1)

falstaff (96005) | more than 6 years ago | (#23898903)

Plus Canada has lots of green hydro electricity. And data stored in Canada is exempt from the US Patriot act.

Easy solution (1)

RockyM109 (1311881) | more than 6 years ago | (#23887697)

Some companies make it a total no-brainer to control the number of servers running -- according to the current computing demand. Alpiron makes a product that integrates seamlessly with Citrix and Terminal Server [alpiron.com] , so that the users are at no point affected. Additionally, it takes 20 minutes to set up, and you get alarms, power-saving reports etc. for free.

this is "news"? (1)

snsh (968808) | more than 6 years ago | (#23888389)

The headline on /. is "News: Building the Green Data Center". Every IT publication for the past year has put "building the green data center" on its cover. It's not news anymore!

turning servers off ? (1)

KernelMuncher (989766) | more than 6 years ago | (#23889071)

This guy is insane for wanting to turn servers off!! We never do, except to apply some patch or install some new hardware. There's no telling what will actually happen when the machine boots up again. And to do this on a regular basis? With N-tiered applications? Crazy talk.

Easiest savings come from free cooling... (1)

jhw539 (982431) | more than 6 years ago | (#23889193)

Speaking strictly to the cooling generation side of things, the biggest energy saver is implementing free cooling: bringing in outside air directly when it is cold and using it to cool the building (contamination is an easy, known problem to deal with; filtering is not hard). If you're in a dry climate, use a cooling tower to make cold water and run that through coils. Blindingly simple, but datacenters just don't do it, even though their 24/7 load, independent of outdoor air temperature, is a great match for it. Part of the reason is that it doesn't reduce peak load, and peak kW is where it's really at. Many datacenters I've seen have maxed out their utility feed, and paying for new infrastructure on the MW scale is not cheap.
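The economizer decision itself is almost trivial; something along these lines, where the setpoints are made-up placeholders rather than recommended values:

def cooling_mode(outside_temp_c, outside_dewpoint_c,
                 supply_setpoint_c=18.0, max_dewpoint_c=15.0):
    # Airside economizer: use filtered outside air whenever it is cold and dry
    # enough for the supply-air target; otherwise fall back to mechanical cooling.
    if outside_temp_c <= supply_setpoint_c and outside_dewpoint_c <= max_dewpoint_c:
        return "free cooling"    # dampers open, compressors off
    return "mechanical"

print(cooling_mode(10.0, 5.0))    # free cooling
print(cooling_mode(28.0, 20.0))   # mechanical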

Re:Easiest savings come from free cooling... (1)

bobbozzo (622815) | more than 6 years ago | (#23889359)

Outside air can be very humid at night... not sure how high humidity it's safe to run a datacenter at though.

Re:Easiest savings come from free cooling... (1)

gnuman99 (746007) | more than 6 years ago | (#23889799)

No, you're just ignorant, like the vast majority of people. It's called RELATIVE HUMIDITY, not just humidity.

100% saturated air at 0C becomes dry when heated to 20C -- about 26% RH at 20C.

http://einstein.atmos.colostate.edu/~mcnoldy/Humidity.html [colostate.edu]

100% humid air at -30C, raised to 20C, gives you about 2% RH at 20C. That is DANGEROUSLY low for a data center. You want about 30% AFAIK, otherwise you risk static charge problems, with people zapping servers and servers even zapping themselves.
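Those numbers check out against the standard Magnus approximation for saturation vapor pressure (a sketch; the constants are the commonly published ones and the indoor temperature is assumed to be 20 C):

import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus approximation for saturation vapor pressure over water, in hPa.
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def indoor_rh(outdoor_rh, t_outdoor_c, t_indoor_c=20.0):
    # Same absolute moisture content; only the saturation point moves with temperature.
    return outdoor_rh * sat_vapor_pressure_hpa(t_outdoor_c) / sat_vapor_pressure_hpa(t_indoor_c)

print(indoor_rh(100, 0))     # ~26% RH once heated to 20 C
print(indoor_rh(100, -30))   # ~2% RH -- far too dry for a data center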

Re:Easiest savings come from free cooling... (1)

bobbozzo (622815) | more than 6 years ago | (#23890415)

I know about relative humidity.

I have no idea where you're finding -30C air... maybe you should sell it.

I'm in southern California; on a cool summer's night, it might get down to the high 50's Fahrenheit with 70% or higher humidity.

60% (1)

bill_mcgonigle (4333) | more than 5 years ago | (#23910067)

not sure how high humidity it's safe to run a datacenter at though

60% seems to be the common recommendation among datacenter humidifier vendors, even those who could sell more gear by changing that number. Static sucks.

Re:60% (1)

bobbozzo (622815) | more than 6 years ago | (#23946677)

Cool. Turns out it's been over 80% humid at night here lately. (so Cal, 63F and 84% humidity at the moment.)

Re:60% (1)

bill_mcgonigle (4333) | more than 6 years ago | (#23951793)

Be careful though, you can get from nice humidity to condensing humidity in short order. Static sucks for servers, but drops of water can be worse!

Re:60% (1)

bobbozzo (622815) | more than 6 years ago | (#24012595)

I know, that's what I was trying to point out to jhw539 above; that outside air at night may be TOO humid to use for pumping into the datacenter, and I'm guessing that dehumidifying it may not be much cheaper than running the A/C instead.

And I was thanking you for the 60% ref.

Personally, I think more swimming pools should be used as liquid cooling to improve the efficiency of air conditioners... you heat the pool almost for free, and your A/C runs cooler.
I don't know how bad the corrosion would be, but some people do it.

"Easy" (1)

geekoid (135745) | more than 6 years ago | (#23890007)

Get the self-contained Toshiba 5MW reactor and build the data center around it.

For desalination plant design, see above.

Re: (1)

clint999 (1277046) | more than 6 years ago | (#23897427)

He's no longer the founder of BEA? Who is then?