
Open Compute Hardware Adapted For Colo Centers

timothy posted about 2 years ago | from the I'll-take-a-sample dept.

Data Storage

1sockchuck writes "Facebook has now adapted its Open Compute servers to work in leased data center space, a step that could make the highly efficient 'open hardware' designs accessible to a broader range of users. The Open Compute Project was launched last year to bring standards and repeatable designs to IT infrastructure, and has been gaining traction as more hardware vendors join the effort. Facebook's move to open its designs has been a welcome departure from the historic secrecy surrounding data center design and operations. But energy-saving customizations that work in Facebook's data centers present challenges in multi-tenant facilities. To make it work, Facebook hacked a rack and gave up some energy savings by using standard 208V power."


21 comments


208v? ha! (1, Interesting)

CRC'99 (96526) | about 2 years ago | (#41763951)

Ok, so they're getting in on what the rest of the world does with a single phase.

Most of the world is 240v single phase, 415v 3 phase. I don't quite understand how they give up energy savings by using a higher input voltage?

Lower voltage = more amps = more heat
Higher voltage = less amps = less heat.
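
A quick back-of-the-envelope sketch of that point, in Python (the 2500 W load and 0.05-ohm feeder resistance are illustrative assumptions, not figures from the article):

    # Resistive loss in a feeder for the same load at different supply voltages.
    # P_loss = I^2 * R and I = P_load / V, so doubling the voltage quarters the loss.
    def feeder_loss(load_w, volts, wire_ohms):
        amps = load_w / volts
        return amps ** 2 * wire_ohms

    for volts in (120, 208, 240, 277):
        print(f"{volts:3d} V -> {feeder_loss(2500, volts, 0.05):4.1f} W lost in the wiring")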

Re:208v? ha! (0)

Anonymous Coward | about 2 years ago | (#41764131)

Because the summary writer got it wrong, or the article writer if it was a quote.

Re:208v? ha! (1)

swalve (1980968) | about 2 years ago | (#41764293)

My impression was that their power supplies are rated at 190v, so giving them 208v wastes some energy. But the upside is, I guess, that the power supplies can withstand power sags better, and go into a larger variety of locations without having to upgrade the power at the locations.

Re:208v? ha! (0)

Anonymous Coward | about 2 years ago | (#41764389)

Impressions aside - the article states that the power supplies typically run at 277V, but are rated down to 190V. 208V is in the range, but is not optimal.

Re:208v? ha! (3, Informative)

Anonymous Coward | about 2 years ago | (#41765255)

The equipment was originally designed to run at 277V (1 leg of a 3-phase 480V system), but is instead running at 208V (3-phase system where each leg is 120V). So while 208V may be higher than most US equipment, it's still lower than what they typically use.

dom
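
For reference, a quick sketch of where those two voltages come from (standard three-phase arithmetic, not something from the article):

    import math

    # Phase-to-neutral voltage is the phase-to-phase voltage divided by sqrt(3).
    print(480 / math.sqrt(3))   # ~277 V: one leg of a 480 V three-phase service
    print(120 * math.sqrt(3))   # ~208 V: line-to-line on a 120 V-per-leg three-phase service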

Re:208v? ha! (1)

umrguy76 (114837) | about 2 years ago | (#41765735)

I don't quite understand how they give up energy savings by using a higher input voltage?

You lose efficiency, and thus waste energy, when you convert the 208v AC into the low DC voltages needed to run the computer. Instead of each computer having a power supply that converts from high-voltage AC to low-voltage DC, some companies are using large AC-to-DC power supplies to power whole racks of servers. These servers run on DC.
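
As a rough illustration of that conversion-stage argument (the efficiency figures and the 10 kW load below are assumed placeholders, not measured values for any particular hardware):

    # Compare per-server AC-to-DC conversion against a rack-level rectifier feeding
    # on-board DC-DC converters. All efficiencies here are assumed, illustrative values.
    per_server_psu = 0.90                    # assumed commodity PSU efficiency
    rack_rectifier, board_dcdc = 0.95, 0.97  # assumed rack-level conversion stages

    load_w = 10_000                          # hypothetical 10 kW of IT load in one rack
    print(f"per-server AC/DC : {load_w / per_server_psu - load_w:.0f} W wasted")
    print(f"rack-level chain : {load_w / (rack_rectifier * board_dcdc) - load_w:.0f} W wasted")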

Re:208v? ha! (3, Informative)

tlhIngan (30335) | about 2 years ago | (#41765923)

You lose efficiency, and thus waste energy, when you convert the 208v AC into the low DC voltages needed to run the computer. Instead of each computer having a power supply that converts from high-voltage AC to low-voltage DC, some companies are using large AC-to-DC power supplies to power whole racks of servers. These servers run on DC.

Low-voltage DC is piss-poor for distribution because power loss in wires increases with the SQUARE of the current. 120V@1A will have far lower losses than 12V@10A - 100 times lower.

The big AC-to-DC installations use high-voltage DC for that reason - lower-current cables are far easier to handle than high-current cables (the thickness of a conductor depends on the current it must carry, its ampacity). The insulation does have to get thicker for higher voltages, but it's a lot more flexible than a thick 00-gauge wire.

DC-DC converters are fairly efficient, and converting down to what you need has fewer losses than trying to shove 100A of 12VDC to a rack (assuming said rack only consumes 1200W). I think a modern rack can easily draw 3600/4800W fully loaded with servers, which would mean up to 400A at 12V to the rack - calling for seriously thick cabling.

Oh, and what happens when you have high currents flowing at low voltages? You get welding, because I²R heating is far more effective when you're passing huge currents through.
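
A minimal sketch of that square-law point, using the commenter's own 120V@1A versus 12V@10A comparison and an arbitrary 0.1-ohm cable resistance:

    # Same delivered power, two distribution voltages: loss scales with current squared.
    wire_ohms = 0.1                            # assumed cable resistance
    for volts, amps in ((120, 1), (12, 10)):   # both deliver 120 W
        print(f"{volts} V @ {amps} A -> {amps ** 2 * wire_ohms:.1f} W lost in the cable")
    # 12 V @ 10 A loses 100x more in the same cable than 120 V @ 1 A.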

Re:208v? ha! (0)

Anonymous Coward | about 2 years ago | (#41786247)

No, 1200W is typical per square foot in most high-density datacenters. If you fill 42RU with 40x 300W two-socket servers and a 100W top-of-rack switch, you're looking at 12kW. Even at 48V DC that's a honking big bus bar.
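
Spelling out that arithmetic (the 42RU fill and the 48 V figure are the commenter's; the other voltages are added for contrast):

    servers, per_server_w, switch_w = 40, 300, 100
    rack_w = servers * per_server_w + switch_w      # 12,100 W for the full rack
    for volts in (277, 208, 48, 12):
        print(f"{rack_w} W at {volts} V -> {rack_w / volts:.0f} A")
    # Roughly 252 A at 48 V DC, versus about 44 A at 277 V AC -- hence the honking big bus bar.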

Data centers look archaic to me now (1)

concealment (2447304) | about 2 years ago | (#41764127)

The modern data center is a vestige of the time when computing power was expensive.

Now, computing power is cheap and storage is cheap. The question is scaling. I think we tend to discount the role that physical hardware plays in this process when we talk about "the Cloud."

Back in the late 1990s, people were predicting that the future data center would look like something out of Star Trek: many small "cells" which stored data or executed processing tasks, linked together by a neural net-like mesh that adaptively responded to traffic.

I think of that vision any time I wander into a data center, which now looks to me like the rows of industrial machines from the 1890s. Big steaming servers, pumping out tons of heat in a roar of fans. It seems so crude and ineffective.

Perhaps in another decade we'll look back on this dinosaur iron and say things like, "LOL, the unsubtle computing of the 10s, what a ball and chain that must have been! I hear you could take most of them down with coordinated SYN attacks!"

Re:Data centers look archaic to me now (2)

VortexCortex (1117377) | about 2 years ago | (#41764381)

The World is Distributed. People are Distributed. The web is Distributed. Centralized Computing / Centralized Storage is irrelevant. Resistance is futile, you will be distributated.

Re:Data centers look archaic to me now (2)

eyegor (148503) | about 2 years ago | (#41764805)

Clouds, virtual systems, clusters, stand-alone servers all benefit from being in an environmentally friendly facility where there's lots of networking capacity and sufficient power and cooling. While home users have dedicated desktop or laptop computers, it's far more power efficient to use technologies like blade systems to package computing power. Regardless, everything's still in a data center where the equipment can be protected.

I used to work at a very large ISP where there were a half dozen data centers, each containing racks and racks of servers, storage and backup. The data centers I visit now still resemble the old ones, but they're more power efficient and the equipment has much higher densities and the networks much higher capacity.

Unless someone can make a computer or cloud that doesn't require much in the way of power, cooling, or physical security, data centers will probably continue their current trend for the foreseeable future.

Re:Data centers look archaic to me now (1)

Lennie (16154) | about 2 years ago | (#41775467)

Have a good look at what Google and Facebook are doing and how Facebook is very open about it and collaborating with others in the OpenCompute project.

Companies like HP and Dell are looking very closely at what they can use from these designs to build servers for the rest of us. I think Dell is even one of the members of the Open Compute project.

The most important "innovation", if you ask me, is to close off the hot corridor, put all the connectors at the front of the server in the cold corridor, and make it possible to do all maintenance from the front too. Then you use free cooling for the cold corridor, at higher temperatures than most people use now.

The hot corridor will get really hot, but you will never have to enter it (other than maybe to replace a ceiling fan or something like that).

In the Wired article, I think Google mentions that no one enters the hot corridor unless all the servers in the rows for that corridor are turned off. That makes it pretty clear how hot it gets, and it also shows that the hot air goes directly out of the datacenter into the open air.

Some say ARM servers are the future:

http://www.youtube.com/watch?v=njmQBqUuYqU [youtube.com]

Re:Data centers look archaic to me now (1)

bastion_xx (233612) | about 2 years ago | (#41776141)

What's remarkable is the PUE factors Google, Facebook and Apple can get in their data centers. I still think these are due to the homogeneous nature of the equipment they place there, and the fact that they don't have to worry about the multi-tenancy of commercial data centers. Middle-of-nowhere locations also make things such as venting from the hot aisle possible. In NYC, the 111 8th Avenue data centers are a good example of the constraints put on the various operators. Hopefully Google can help remediate that.

With regard to power and cooling, I still have a lot to learn about the tradeoffs between ease of use and PUE. Good thread.
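
For anyone unfamiliar with the metric, PUE is simply total facility power divided by the power that reaches the IT equipment; the kW figures below are made-up examples, not numbers for any of the operators mentioned:

    # PUE = total facility power / IT equipment power; 1.0 would be a perfect facility.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    print(pue(1500, 1200))   # 1.25: 25% overhead for cooling, power conversion, lighting
    print(pue(1320, 1200))   # 1.10: roughly the range the big operators report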

Re:Data centers look archaic to me now (1)

Lennie (16154) | about 2 years ago | (#41787611)

If HP, Dell, Supermicro and others come up with a "standard" which puts all the connectors and indicators of servers on the front, then maybe we could all benefit the same way.

Re:Data centers look archaic to me now (2)

mlts (1038732) | about 2 years ago | (#41765305)

Data centers likely won't be going anywhere anytime soon. Businesses [1] tend to like keeping their critical stuff in a secured spot.

What I see happening in a data center are a few changes:

1: Data center rack widths will increase. This allows more stuff to be packed in per rack unit.

2: There will be a standard for liquid cooling where CPUs, RAM, GPUs, and other components that normally use heat sinks will use water jackets. Instead of an HVAC system, just a chilled water supply and a heat exchanger would do the trick. Of course, the issue is someone making valves and fittings that are leak resistant, are quick connects (disconnect the hose and it shuts off the water flow), and can handle a number of connection and disconnection cycles before giving up and leaking. There would be leak sensors to automatically shut off any damaged cooling segment and the machines attached to it, similar to how a CPU shuts down if its heat sink gets bumped off.

3: A move to DC power, because it means that every rack unit just needs to step the incoming voltage up or down. No power supply needed. Of course, there are dangers with DC power (muscle lock), but telcos already use 48VDC. Switching DC power is also a PITA due to the lack of zero crossings, which means dealing with arcs and pitted contacts. However, there are always rack-level PDUs which can take the 208 VAC power and turn it into 12-48 VDC, with little voltage loss due to the relatively short distances.

4: A move to a passive backplane type of architecture. This way, specialized CPU boards can be added as needed, as well as "external RAM" [2]. It will allow the latest/greatest network and disk protocols to be changed out as need be.

5: More high-end SAN features, like real-time block-level deduplication, making it into onboard motherboard RAID chips.

6: Hypervisors built into all motherboards, so that a utility like Xen or vSphere becomes more of an admin shell.

7: More security appliances specialized for particular tasks. For example, an appliance that just stores username and password hashes, so that when a Web server authenticates a user, it uses that. Too many wrong guesses of a user's password would result in blocks/delays at the appliance level (something even a compromised Web server could not get around). This would ensure that an intruder couldn't make off with the /etc/shadow equivalent.

8: A resurgence of tape. Disk media was cheap and improved exponentially for a while. Now tape is starting to catch up, and it offers a lot more surface area, so areal densities are not as critical compared to reliability and capacity. No matter how one slices it, tape is not going anywhere soon because nothing beats it for reliability and price. D2D2T will remain the norm provided there is no new media revolution (like a new optical format).

9: More technologies for deduplication. IBM has tape deduplication as well as an appliance which sits between machines and the SAN fabric and deduplicates data on the fly.

10: A push for more technologies that can be run remotely via a Web page or an SSH connection. This makes unmanned data centers not just possible, but easy.

[1]: Those businesses that didn't trade their heroes for ghosts and move to the cloud, that is. However, cloud providers use data centers.

[2]: This may be DRAM, or it may be some other mass media technology. We have been hearing about holographic storage for decades now. It would be a tier of storage with a speed level between disk and normal RAM that would be used as swap or cache.

Re:Data centers look archaic to me now (1)

Lennie (16154) | about 2 years ago | (#41787591)

This is what I do know:
1. Well, Facebook does use servers with a height of 1.5 rack units, so they can use larger fans (which obviously can spin at lower speeds).

2. OVH which says they own and operate the largest datacenter in the world do use water cooling: https://www.youtube.com/watch?v=4e97g7_qSxA [youtube.com] http://www.ovh.co.uk/dedicated_servers/hg_2012_watercooling.xml [ovh.co.uk]

3. None of the providers that built their own custom servers chose DC. AFAIK only Facebook uses some DC: the servers have an AC and a DC input, and the DC input is connected to the UPS rack, which keeps the other two racks (in the set of three racks) powered in case the AC is out.

Re:Data centers look archaic to me now (0)

Anonymous Coward | about 2 years ago | (#41766593)

Just where do you think "the cloud" is? I work for a cloud provider and we run something that sounds a lot like what you describe. We have rows of "big steaming servers" and they are "pumping out tons of heat in a roar of fans."

The difference is that, thanks to virtualization, those servers are actually incredibly efficient. The cloud isn't literally water vapor in the sky; it's exactly the type of data center you're mocking. The difference is that in a traditional corporate data center those machines are almost entirely idle. The cloud is simply the concentration of those workloads to leverage overall efficiency and cost. There is still a significant need for data center equipment; it's just not needed at every company's site anymore. As these data-center-specific products get better, you'll see the benefits as lower costs in the cloud.

Colo Centers ? (-1)

Anonymous Coward | about 2 years ago | (#41764839)

WTF are "Colo Centers"?

Is it something to do with the colon ?
(like colostomy and colorectal cancer)

I had a colonoscopy last year - the Dr said I was OK

Re:Colo Centers ? (2)

dshk (838175) | about 2 years ago | (#41766407)

Colo means co-location, in which customers rent rack space, and they move their own hardware into the data center.

