
Google Reveals "Secret" Server Designs

CmdrTaco posted more than 5 years ago | from the so-secret-everybody-knew dept.


Hugh Pickens writes "Most companies buy servers from the likes of Dell, Hewlett-Packard, IBM or Sun Microsystems, but Google, which has hundreds of thousands of servers and considers running them part of its core expertise, designs and builds its own. For the first time, Google revealed the hardware at the core of its Internet might at a conference this week about data center efficiency. Google's big surprise: each server has its own 12-volt battery to supply power if there's a problem with the main source of electricity. 'This is much cheaper than huge centralized UPS,' says Google server designer Ben Jai. 'Therefore no wasted capacity.' Efficiency is a major financial factor. Large UPSs can reach 92 to 95 percent efficiency, meaning that a large amount of power is squandered. The server-mounted batteries do better, Jai said: 'We were able to measure our actual usage to greater than 99.9 percent efficiency.' Google has patents on the built-in battery design, 'but I think we'd be willing to license them to vendors,' says Urs Hoelzle, Google's vice president of operations. Google has an obsessive focus on energy efficiency. 'Early on, there was an emphasis on the dollar per (search) query,' says Hoelzle. 'We were forced to focus. Revenue per query is very low.'"
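
To put those percentages in perspective, here is a rough sketch (Python) of conversion losses at data-center scale. Only the 92 to 95 percent and 99.9 percent figures come from the article; the 10 MW IT load is an assumed, illustrative number.

# Hypothetical comparison of power lost to backup-power conversion.
it_load_w = 10_000_000  # assumed 10 MW of servers; not a figure from the article

def loss_watts(load_w, efficiency):
    # Power drawn from the grid minus power actually delivered to the servers.
    return load_w / efficiency - load_w

for label, eff in [("central UPS at 92%", 0.92),
                   ("central UPS at 95%", 0.95),
                   ("per-server battery at 99.9%", 0.999)]:
    print(f"{label}: ~{loss_watts(it_load_w, eff) / 1000:.0f} kW lost to conversion")

# Approximate output:
#   central UPS at 92%: ~870 kW lost to conversion
#   central UPS at 95%: ~526 kW lost to conversion
#   per-server battery at 99.9%: ~10 kW lost to conversion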


The New Mainframe (5, Insightful)

AKAImBatman (238306) | more than 5 years ago | (#27431717)

Most people buy computers one at a time, but Google thinks on a very different scale. Jimmy Clidaras revealed that the core of the company's data centers is composed of standard 1AAA shipping containers packed with 1,160 servers each, with many containers in each data center.

Mainstream servers with x86 processors were the only option, he added. "Ten years ago...it was clear the only way to make (search) work as free product was to run on relatively cheap hardware. You can't run it on a mainframe. The margins just don't work out," he said.

I think Google may be selling themselves short. Once you start building standardized data centers in shipping containers with singular hookups between the container and the outside world, you've stopped building individual rack-mounted machines. Instead, you've begun building a much larger machine with thousands of networked components. In effect, Google is building the mainframes of the 21st century. No longer are we talking about dozens of mainboards hooked up via multi-gigabit backplanes. We're talking about complete computing elements wired up via a self-contained, high speed network with a combined computing power that far exceeds anything currently identified as a mainframe.

The industry needs to stop thinking of these systems as portable data centers, and start recognizing them for what they are: Incredibly advanced machines with massive, distributed computing power. And since high-end computing has been headed toward multiprocessing for some time now, the market is ripe for these sorts of solutions. It's not a "cloud". It's the new mainframe.

Re:The New Mainframe (5, Interesting)

spiffmastercow (1001386) | more than 5 years ago | (#27431867)

But wasn't the mainframe just the old cloud? I seem to remember there was a reason we moved away from doing all the processing on the server back in the '80s. If only I could remember what it was.

Re:The New Mainframe (5, Interesting)

AKAImBatman (238306) | more than 5 years ago | (#27431987)

I don't know which 80's you lived through, but mainframe processing was alive and well in the 80's I lived through. Minicomputers were a joke back then, and were seen as mostly a way to play video games. (With a smattering of spreadsheet and word processing here and there.) In the 90's, PCs started to take hold. They took over the word processing and spreadsheet functionality of the mainframe helper systems. (Anybody here remember BTOS? No? Damn. I'm getting old.)

Note that this didn't retire the mainframe despite public impressions. It only caused a number of bridge solutions to pop up. It was the rise of the World Wide Web that led to a general shift toward PC server systems over mainframes. All we're doing now is reinventing the mainframe concept in a more modern fashion that supports multimedia and interactivity.

Welcome to Web 2.0. It's not thin-client, it's rich terminal. The mainframe is sitting in a cargo container somewhere far away and we're all communicating with it over a worldwide telecom infrastructure known as the "internet". MULTICS, eat your heart out.

Re:The New Mainframe (5, Funny)

AKAImBatman (238306) | more than 5 years ago | (#27432119)

Derr... minicomputers should say microcomputers. My old brain is failing me. Help! Help! Help! He-- wait. What was I screaming for help for again?

Re:The New Mainframe (4, Interesting)

jellomizer (103300) | more than 5 years ago | (#27432585)

Technology sways back and forth, and there is nothing wrong with that.

1980s: 2400/9600 bps serial connections displayed the data people wanted fast enough for them to get their work done, and one computer had enough processing power to handle a lot of people for such simple tasks. And computers were expensive; heck, it was a few thousand bucks just for a VT terminal.

1990s: More graphics-intensive programs came out, along with color displays, and serial didn't cut it; it was way too slow. Cheaper hardware made it possible for people to have individual computers, and networks were mostly for file sharing. So you were better off processing locally, spreading the load across everyone's own machines.

2000s: Now people have high-speed networks across wide distances, and security and stability issues make it better to have your data and a lot of the processing done in one spot. So we go back to the thin client and server, where the client still does a lot of work but the server does too, to give us the correct data.

Re:The New Mainframe (1)

crispin_bollocks (1144567) | more than 5 years ago | (#27432853)

There was a mass exodus to personal computers so we would no longer have to deal with IT or MIS or whatever the keepers of the temple were called back then.

Re:The New Mainframe (3, Informative)

DerekLyons (302214) | more than 5 years ago | (#27431897)

We're talking about complete computing elements wired up via a self-contained, high speed network with a combined computing power that far exceeds anything currently identified as a mainframe.

By some measurements they exceed the computing power of a mainframe, by others they don't.

Re:The New Mainframe (4, Insightful)

AKAImBatman (238306) | more than 5 years ago | (#27432065)

By some measurements they exceed the computing power of a mainframe, by others they don't.

A fair point. However, I should probably point out that mainframe systems are always purpose built with a specific goal in mind. No one invests in a hugely expensive machine unless they already have clear and specific intentions for its usage. When used for the purpose this machine was built for, these cargo containers outperform a traditional mainframe tasked for the same purpose.

They are computers, no more advanced than before (0, Troll)

Anonymous Coward | more than 5 years ago | (#27432097)

Google is basically re-implementing the efficiency that already exists in a laptop. In fact, laptops outperform rackmount specifications if not for the fold-open HID and effeminate socket to an external resilient power-brick. What is more advanced than Google's software on Google's hardware is a stack of laptops running DNS for a Freenet backend, stationed at the peak of a mountain like PYRAMID LAKE California with a bat of "wiffi" 802.11abgn serving minimal free internet access throttled down to 2Kbps.

You want to know what's more advanced than Google? My middle fignerrrr.

Re:They are computers, no more advanced than befor (5, Funny)

AKAImBatman (238306) | more than 5 years ago | (#27432761)

Google is basically re-implementing the efficiency that already exists in a laptop.

You have a laptop with >1000 processors, consisting of several times that many cores, with its own built-in gigabit ethernet running on built-in gigabit switches?

I'd hate to sit next to you on an airplane!

Re:They are computers, no more advanced than befor (4, Funny)

Chyeld (713439) | more than 5 years ago | (#27432879)

Google is basically re-implementing the efficiency that already exists in a laptop

...

You want to know what's more advanced than Google? My middle fignerrrr.

You have a laptop with >1000 processors, consisting of several times that many cores, with its own built-in gigabit ethernet running on built-in gigabit switches?

I'd hate to sit next to you on an airplane!

It's OK, apparently he stores it in his middle finger.

Patents & Catch-22 (5, Informative)

eldavojohn (898314) | more than 5 years ago | (#27431769)

From 2007 [slashdot.org], the modular data center patent [uspto.gov] (where the bottommost image of the article comes from). There's no lack of patents [uspto.gov] revealing piece by piece how their power management setup works.

Ah, the catch-22 of the patent: being forced to reveal your hand in order to protect it while underpaid workers at Baidu figure out how to integrate your ideas into their hardware.

Re:Patents & Catch-22 (4, Informative)

Shivetya (243324) | more than 5 years ago | (#27431903)

Considering some of the minis I worked on had similar setups, in addition to external UPSes.

Then again, we achieved all sorts of power, cooling, and reliability gains when we consolidated many "PC" style servers onto minis that do the same work. (The heat change alone was staggering.)

Re:Patents & Catch-22 (5, Informative)

dfenstrate (202098) | more than 5 years ago | (#27432273)

Ah, the catch-22 of the patent: being forced to reveal your hand in order to protect it while underpaid workers at Baidu figure out how to integrate your ideas into their hardware.

That's not a catch-22, that's the point. In exchange for everyone learning from what you've done, you get society's protection for a limited number of years.

Also, the workers at Baidu are not underpaid; if they were, they'd leave for better opportunities. The workers in question have obviously decided they're better off making this stuff; they don't need your 'superior' judgement to tell them they should go back to subsistence farming or melting hazardous materials for precious metals in their homes.

Decisions to work or not to work, and to hire or not to hire, are based on realistic alternatives, not on what some westerner sitting at a keyboard 9,000 miles away thinks is best.

Re:Patents & Catch-22 (5, Interesting)

Anonymous Coward | more than 5 years ago | (#27432961)

Wow, you missed the point. Poster is contending that the patent FAILS to protect IP, BY MAKING AVAILABLE the instructions to REPLICATE said IP.

Yeah, it may work against Yahoo!, but it doesn't save you from companies in China and India, who can undercut you on labor costs, and have a much more rapidly expanding market.

Re:Patents & Catch-22 (0)

Anonymous Coward | more than 5 years ago | (#27432589)

Ah, the catch-22 of the patent: being forced to reveal your hand in order to protect it...

And one of the best things about our "new" age of instant communications is that it becomes more difficult to hoard information. It's either use it or lose it.

Kidding Me? (5, Funny)

wtbname (926051) | more than 5 years ago | (#27431771)

"I worked 14-hour days for two and a half years," he said.

Get that man a beer.

Re:Kidding Me? (0)

Anonymous Coward | more than 5 years ago | (#27432893)

Uh, he doesn't like beer.

Re:Kidding Me? (2, Funny)

Chyeld (713439) | more than 5 years ago | (#27432943)

You parsed it wrong: he worked 14 'one hour days' over two and a half years.

On the other hand... getting away with that deserves a beer too...

Hey google, want to save some money? (0, Flamebait)

geekoid (135745) | more than 5 years ago | (#27431827)

Get some mainframes.
For crying out loud, with one mainframe you can run 30,000 or more instances of your operating environment. Possibly up to 100K.

Put 5 of these in each data center. Cheaper to power, you would only need a few people to keep them running, and they would run for 20 years.

Want to save more money? Here is another way:
Build your data center in the desert and build a 150 MW industrial solar-thermal system to power it. Sell the extra power.

Oh, and if you are not up to date on Big Iron, don't fucking reply, because you're going to look like a fool.

Re:Hey google, want to save some money? (5, Insightful)

Bill, Shooter of Bul (629286) | more than 5 years ago | (#27431947)

Google claims they did the math and found it was cheaper with commodity hardware. I advise everyone else to do the same and run the calculations for themselves to determine the optimal hardware for their particular load. Without the specifics of their situation, it's difficult to criticize in an intelligent fashion, other than to offer a generalized statement expressing surprise at their configuration.

Re:Hey google, want to save some money? (4, Insightful)

EvilMonkeySlayer (826044) | more than 5 years ago | (#27432075)

I've a few questions: if the data centre is built in the desert, don't you have a number of issues?

* Latency: if you have all your data centres located in essentially a single part of the USA (let's ignore the rest of the world for this, regardless of the fact that there are no deserts in Europe, for example), won't that increase latency quite a bit for the farther-away places that want the search results?
* Bandwidth/redundancy: if you have all your eggs in one basket, as it were, aren't you going to have to pay extra to have lots of extra fibre laid down to handle all that traffic? What about natural disasters? If you have all your data centres in a single location, then surely you run the risk of things going pear-shaped if it burns down, suffers earthquakes, aliens destroy the building, etc.
* Cooling: because it's in the desert, isn't a lot of the electricity generated going to go toward cooling, not only the building (because of the outside heat) but also the heat generated by the servers? Surely it makes more logical sense to build in a colder climate, say further north, and use hydroelectricity? (If you're talking of using exclusively non-polluting, non-radioactive natural electricity sources.)

Re:Hey google, want to save some money? (4, Informative)

SQLGuru (980662) | more than 5 years ago | (#27432517)

A desert does not describe the temperature of a region but the (lack of) rainfall/moisture.
http://desertgardens.suite101.com/article.cfm/definition_of_a_desert [suite101.com] (link found using Google).

And besides, put the containers underground and I'm pretty sure the "hot" you refer to becomes a non-issue as well.

Re:Hey google, want to save some money? (0)

Anonymous Coward | more than 5 years ago | (#27432919)

And besides, put the containers underground and I'm pretty sure the "hot" you refer to becomes a non-issue as well.

The problem then is telling the data centers from the secret WMD labs, the illegal meth labs, and the hydroponics-filled MJ farms.

Re:Hey google, want to save some money? (0)

Anonymous Coward | more than 5 years ago | (#27432777)

* Latency, if you have all your data centre's located in essentially a single part of the USA (lets ignore the rest of the world for this.. regardless that there are no deserts in Europe for example) won't that increase latency quite a bit to the more further away places that want the search results?

Technically yes, but who is to say they only have one data center for the entire continent? More likely, for a geographical region as large as North America, they will have several. This will also address the redundancy issue you mentioned.

With regards to Europe... they say the ocean is a desert with its life underground... Floating, offshore datacenters would be a great solution for Western Europe. Reuse old oil rigs, perhaps? It's really not until you get into Russia and Asia that land deserts become an option again, but even there you've got the Gobi, and Siberia, and god knows what else.

* Cooling, because it's in the desert isn't a lot of the electricity that is generated going to be cooling not only the building because of the outside heat, but also the heat generated by the servers? Surely it makes more logical sense to build in a colder climate say further north and use hydroelectricity? (if you're talking of using exclusively non active polluting (and non radioactive) natural electricity solutions)

Deserts often become bitterly cold in the evenings. Additionally, the center does not have to be located above ground; if you are out of the direct sunlight, heat will not be an issue. That said, deserts are optimal locations for solar power.

Re:Hey google, want to save some money? (1)

iminplaya (723125) | more than 5 years ago | (#27432787)

Well, cooling is "easy". Just run heat pipes deep (not that deep [allexperts.com]!) into the ground, which I believe stays at a cool 55 degrees F (about 13 C) or so.

Re:Hey google, want to save some money? (0)

Anonymous Coward | more than 5 years ago | (#27432467)

If only Google had hired you, then they might've become a successful company. Poor Google.

Re:Hey google, want to save some money? (1)

tsalmark (1265778) | more than 5 years ago | (#27432593)

Next to your trash talk, I doubt anybody will look the fool. As a business Google seems to do well enough; let's assume they did the math.

Re:Hey google, want to save some money? (4, Interesting)

TheSunborn (68004) | more than 5 years ago | (#27432785)

A google mainframe would be stupid.

If you take the price of a mainframe and compare that to what Google can get for the same money with their current solution, the current solution offers at least 10 times as much CPU performance, and much, much more aggregate I/O bandwidth (both hard disk and memory).

There are only 2 reasons to use mainframes now.

1: Development cost. Building software that can scale on commodity hardware is expensive and difficult. It requires top-notch software developers and project managers. It makes sense for Google to do it because they use so much hardware (>100,000 computers at last count).

2: Legacy support.

Pretty cool stuff (3, Interesting)

Sethus (609631) | more than 5 years ago | (#27431833)

I'm no guru of servers, but from my own limited experience installing servers at the small-to-midsized company I work at, space is always a looming issue. And shrinking the size of the UPS you need can only save money and space in the long run, which any IT manager will tell you is a huge benefit and a great selling point.

Nothing to do but wait for a finished product at this point though.

Re:Pretty cool stuff (1)

Chyeld (713439) | more than 5 years ago | (#27432987)

I think it's cool too, but I'm not so certain you are getting a space savings here. Efficiency, yes. But I can't see the total size taken up by the individual batteries (which in the pictures look like they hang off the edge of the case) being less than a large "single serve" UPS for the same number of machines.

No way (1, Redundant)

flyingfsck (986395) | more than 5 years ago | (#27431849)

Greater than 99.9% efficiency? They likely made a mistake in their measurements.

Re:No way (4, Insightful)

Anonymous Coward | more than 5 years ago | (#27432121)

Greater than 99.9% efficiency? They likely made a mistake in their measurements.

Maybe they measured 99.92% efficiency.

That is greater than 99.9% efficiency and they aren't breaking any laws of thermodynamics.

Re:No way (4, Informative)

mftb (1522365) | more than 5 years ago | (#27432789)

They'd still have something staggeringly efficient there, which is a funny thing to say about a computer, since a computer's output energy is entirely heat; information is not energy, so computers are all 0% efficient in a thermodynamic sense. Still, this isn't what they meant, and the 99.9% figure probably comes from battery in/out measurements.

Re:No way (0)

Anonymous Coward | more than 5 years ago | (#27432889)

I imagine that they mean the efficiency of power delivered from the batteries to the motherboard.
Obviously the machine as a whole isn't 99.9% efficient if it has fans or heatsinks, and you can see fans on the CPUs (duh) and heatsinks on the motherboard's power regulators just behind the processors.
12V is around optimal for point-of-load regulation of the kind motherboards do, but even that is rarely better than 90-95%.

Stop the lies (5, Funny)

Thanshin (1188877) | more than 5 years ago | (#27431855)

We all know the searches are actually being done by a large amount of people in suspended animation, being fed the corpses of the previous people.

The thing about each server having its own battery is a cruel joke.

Re:Stop the lies (1)

Samschnooks (1415697) | more than 5 years ago | (#27432083)

We all know the searches are actually being done by a large amount of people in suspended animation, being fed the corpses of the previous people.

The thing about each server having its own battery is a cruel joke.

*Wakes up. Unplugs self.* Ptewie! I thought that was cherry Jell-O that hasn't quite hardened!

Re:Stop the lies (3, Funny)

hansamurai (907719) | more than 5 years ago | (#27432195)

Don't you remember in the Matrix where Morpheus holds up the Duracell battery to describe what the people are being used for? Google just managed to actually do it.

Re:Stop the lies (0)

Anonymous Coward | more than 5 years ago | (#27432215)

Pigeons, actually [google.com]

Don't worry. (4, Funny)

neo (4625) | more than 5 years ago | (#27432253)

I'm working on a solution. If only I can contact Oracle.

Re:Don't worry. (4, Funny)

Tumbleweed (3706) | more than 5 years ago | (#27432601)

I'm working on a solution. If only I can contact Oracle.

"Thank you for calling Oracle. For English press 1, para en Español marque el numero dos.

*beep*

You have reached the Oracle Help Line. Please hold for the current Oracle. All calls are answered in the order received. There are currently [1,983,457] callers ahead of you. Estimated wait time is [5,347,987] minutes.

Have you tried knowing thyself? Try checking our website at thereisnospoon.oracle.com.

Thank you for holding."

Re:Don't worry. (0)

Anonymous Coward | more than 5 years ago | (#27432953)

by neo (4625) on Thursday April 02, @12:37PM

Whoa.

Onboard UPS not new (5, Informative)

Y2K is bogus (7647) | more than 5 years ago | (#27431863)

The in-computer onboard UPS is not a new idea. I don't see how they could have gotten any patents on it, since I used to have one of these (my dad might still). The device I saw had a gel cell mounted on a full-length 8-bit ISA card. It had +5/12V pass-through connectors for powering the drives, and it powered the computer through the main bus. There was more logic to it, as it had some monitoring capabilities too.

What's next, patenting a hard drive on a plug-in board? Been there: it was called the Hard Card, and it put a 20MB HDD in a full-length 8-bit ISA slot, a truly neat idea for upgrading old XT computers back in the day. You could make them work with AT computers too by also putting a regular disk controller, without a drive connected, on the bus; the BIOS would see the XT controller and boot from it.

Re:Onboard UPS not new (5, Funny)

ColdWetDog (752185) | more than 5 years ago | (#27431979)

The in-computer onboard UPS is not a new idea

Indeed. (Stares at laptop).

Re:Onboard UPS not new (1)

sootman (158191) | more than 5 years ago | (#27432975)

Yeah. It's only been a problem for a few decades. [asktog.com] (See bug #2.) Maybe someday, someone, somewhere will take notice and do something about it.

It's always funny when the power goes out in my building. The network gear is on UPSs and all the laptop users just keep working. The rest of us sit by the windows and take a break.

Re:Onboard UPS not new (5, Insightful)

geekoid (135745) | more than 5 years ago | (#27432045)

A patent covers an implementation of an idea.

You can have the idea of how to put a UPS in a computer one way, and I can do it another way, and both can be valid patents.

I do know this gets abused, and companies try to sue because it's their 'idea', but that's not how it works.

If you find a different way to do a hard drive plugin board, then yes you can patent it. I would advise you only do it if it's better in some way, and there is a demand.

Re:Onboard UPS not new (1)

Pentium100 (1240090) | more than 5 years ago | (#27432343)

I actually thought about this - I can get an ATX power supply which works from 12V, then add a battery and a charger (big 50Hz transformer + rectifier) that is powerful enough to run the PC and charge the battery. I have only seen 12V ATX PSUs up to about 150W though. That could still run a PC with an Atom CPU (also, mini-ITX boards are small, so I could have a lot of free space for a big battery in a standard case). I will probably do this if I ever have a faster connection (and need a faster router).
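
If anyone wants to sanity-check the runtime of a build like that, here's a minimal sketch; the battery capacity, load, converter efficiency, and usable-capacity fraction are all assumptions, not measured values.

# Rough hold-up time for a 12V battery feeding a 12V-input ATX converter.
battery_ah = 7.0         # assumed 12V 7Ah gel cell
usable_fraction = 0.5    # rule of thumb: avoid deep-discharging lead-acid
load_w = 60.0            # assumed average draw of an Atom-class box
converter_eff = 0.85     # assumed efficiency of the 12V ATX supply

usable_wh = 12.0 * battery_ah * usable_fraction
runtime_hours = usable_wh / (load_w / converter_eff)
print(f"~{runtime_hours:.1f} hours of runtime")  # ~0.6 h with these numbers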

Re:Onboard UPS not new (1)

guruevi (827432) | more than 5 years ago | (#27432577)

I used to have one of those cards as well. My battery was connected with velcro to the power supply and it basically sat in between the power supply and the rest of the computer. Nothing new.

Re:Onboard UPS not new (2, Insightful)

silentsteel (1116795) | more than 5 years ago | (#27432807)

Yes, but without looking at the specs, I would imagine that if the technology is significantly different, Google would still be eligible for a patent. Especially so if they were aware of the "prior art" and took the necessary steps not to include language that would overlap. Though IANAL, nor am I a patent expert.

Re:Onboard UPS not new (3, Informative)

BigDish (636009) | more than 5 years ago | (#27432999)

Agreed, the onboard UPS is not new. I have a ~10 year old (I believe the CPU is a K6-233) device meant as a SOHO file/print/webserver from IBM that has a built-in gel-cell battery for UPS power just like this server does. Google is 5+ years too late.

Anyone want my prior art to invalidate the patent?

No shit? (5, Funny)

LordKaT (619540) | more than 5 years ago | (#27432001)

When the weather gets warmer, Google notices that it's harder to keep servers cool.

Brilliant journalistic work there.

Re:No shit? (0)

Anonymous Coward | more than 5 years ago | (#27432535)

It's also an April 1 press release.

Be skeptical. Be very skeptical.

Re:No shit? (1)

averner (1341263) | more than 5 years ago | (#27433023)

I don't think it was April 1st anywhere in the world at the time this story was posted :P

They use DeathStars! (-1, Troll)

citking (551907) | more than 5 years ago | (#27432107)

So it appears that Google uses Hitachi/IBM DeskStars (DeathStars [wikipedia.org]) for hard drives. Now I am starting to see how they've saved costs.

I had 2 or 3 of these things and wouldn't trust them any further than I can throw them.

Re:They use DeathStars! (2, Funny)

PatrickThomson (712694) | more than 5 years ago | (#27432191)

wouldn't trust them any further than I can throw them

Given the reliability, it's likely that someone has already measured that particular parameter for you. Have you checked the data sheets?

Re:They use DeathStars! (1)

citking (551907) | more than 5 years ago | (#27432415)

wouldn't trust them any further than I can throw them

Given the reliability, it's likely that someone has already measured that particular parameter for you. Have you checked the data sheets?

No, I haven't. But I choose, in this case, to predict the future based on past events. It's not an investment so I feel safe doing so. Many computer users do the same - if you've had a bad nVidia graphics card or a bad ASUS motherboard people tend to shy away from buying those particular brands again. With storage being as cheap as it is combined with the importance of reliability I'd rather stay with a brand like Western Digital and, up until recently, Seagate than use a brand that has had performance and reliability issues in the past.

Re:They use DeathStars! (2, Funny)

AKAImBatman (238306) | more than 5 years ago | (#27432837)

I believe the joke was that the distance a DeskStar can be thrown may be published in the data sheets. Being such a common concern and all. :-)

Re:They use DeathStars! (0)

Anonymous Coward | more than 5 years ago | (#27432939)

Whoosh!

Re:They use DeathStars! (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27433027)

Clearly Google's entire business model has failed because of your insight!

Read the Google paper on hard drive failure. They may have thought about things.

OMG!!! Google patented laptop (3, Funny)

rohis (248695) | more than 5 years ago | (#27432115)

Google's secret is that all their computers have a battery.

I think it is called a laptop.

Always wondered.... (3, Insightful)

zogger (617870) | more than 5 years ago | (#27432129)

...why desktops didn't have a built-in battery that lived in an expansion bay. If you could even keep RAM alive for extended periods with the machine shut down, that would be spiffy as an option, let alone as a little general-purpose UPS.

Oh, for God's sake people... (4, Informative)

nebulus4 (799015) | more than 5 years ago | (#27432137)

look at the date the article was published.

Mod Parent up (1)

IsThisNickTaken (555227) | more than 5 years ago | (#27432603)

look at the date the article was published.

I can't believe how far I had to scroll down the comments before someone realized that.

99.9% efficiency (4, Insightful)

Anonymous Coward | more than 5 years ago | (#27432159)

This is a questionable number. The best DC-DC conversion is around 95% so they aren't including voltage conversions from the battery to what the system is actually using.

Re:99.9% efficiency (0)

Anonymous Coward | more than 5 years ago | (#27432537)

Well, they probably are including only the losses from the battery to the computer, which will approach 100% when the feed is straight from a DC power source. Compare that to a UPS that outputs AC, which each computer then converts back to DC before feeding its components.

Obviously the whole system isn't 99.9% efficient, otherwise they would hardly have to worry about cooling at all.
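
To put rough numbers on that comparison, here's a sketch of the two conversion chains; the individual stage efficiencies are assumptions for illustration, not Google's figures.

# Double-conversion UPS path: grid AC -> DC (rectifier) -> AC (inverter) -> server PSU -> DC rails
ups_chain = 0.95 * 0.95 * 0.90   # assumed rectifier, inverter, and server PSU efficiencies

# Per-server battery path while on battery: 12V cell switched straight onto the 12V rail
battery_chain = 0.999            # roughly just the loss across the switching element

print(f"UPS chain:     {ups_chain:.1%} end to end")      # ~81.2%
print(f"Battery chain: {battery_chain:.1%} end to end")  # ~99.9%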

Re:99.9% efficiency (0)

Anonymous Coward | more than 5 years ago | (#27432547)

That's exactly what I think. They are only talking about the loss in the circuitry that switches the battery's +12 volts onto the computer's power lines. Probably the loss across a power MOSFET used in an active diode-ORing circuit.

There will be losses in the multiple SMPS stages generating +5 (maybe), +3.3, and whatever lower voltages the processors use. They can optimize these designs somewhat because they understand how much current needs to be supplied, but the efficiency of those supplies will never approach 99.9%.

Re:99.9% efficiency (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27432913)

Or the in-computer batteries just aren't kicking in very often. With a UPS, you're constantly doing the conversion.

Data centre fire risk? (5, Interesting)

David Gerard (12369) | more than 5 years ago | (#27432245)

Many data centres expressly forbid UPSes or batteries bigger than a CMOS battery in installed systems - because when the fire department hits the Big Red Button, the power is meant to go OFF. IMMEDIATELY.

So while this is a nice idea, applying it outside Google may produce interesting negotiation problems ...

Re:Data centre fire risk? (3, Interesting)

rotide (1015173) | more than 5 years ago | (#27432427)

Isn't the red button for safety of the employees? As in, I'm under the floor and somehow the sheathing on a power feed to the rack next to me gets stripped? I start to light up and someone notices and hits the "candy red button" to save me?

Pretty sure if the fire department is coming in to throw water lines around, they are going to cut the power to the building and not to just the circuit on the datacenter floor.

I could be mistaken, but I don't think a 12-volt battery backup in these applications is going to pose much of a "life" risk. Obviously you don't want to put your tongue on the terminals, but I don't think they pose the same threat that the power lines under the floor do.

Re:Data centre fire risk? (1)

David Gerard (12369) | more than 5 years ago | (#27432715)

I'm speaking of the one we dealt with on this matter. Some feckwit hit the Big Red Button in the data centre by mistake, taking our servers out, sodomising our Oracle databases and giving us a few days' downtime. We considered the UPS-of-our-own approach and were told "NO" in no uncertain terms, for fire-risk reasons. We then installed a complete spare set of kit for failover in another data centre down the road. Same hosting company, unfortunately. They were idiots. I forget their name, but I'll remember it and shout "HELL NO" if I ever hear it again.

yeah, but does it run Ubuntu? (1)

neo (4625) | more than 5 years ago | (#27432315)

You're probably thinking "man, these things are just too big, no one will want one for their home" but in a few years these things will be on everyone's desktop. Sure, the first few desks will be crushed, but I'm 100% sure they will make them fit nicely into your cubicle.

From the diagram it looks like they just need to put a chair in there and you're good to go. Now to compile Counter Strike for this thing.

Re:yeah, but does it run Ubuntu? (0)

PPH (736903) | more than 5 years ago | (#27432433)

I've got one. It's called a laptop.

Outgassing hydrogen? (1)

mlwmohawk (801821) | more than 5 years ago | (#27432391)

Anyone concerned that when an SLA battery is charged, hydrogen is one of the by-products?

Re:Outgassing hydrogen? (1)

Tumbleweed (3706) | more than 5 years ago | (#27432457)

Anyone concerned that when an SLA battery is charged, hydrogen is one of the by-products?

New Google revenue stream - capture the Hydrogen and sell it! Or use it to generate more electricity, and up their total efficiency.

Re:Outgassing hydrogen? (1)

camperdave (969942) | more than 5 years ago | (#27432791)

No. A PC case is a well ventilated environment. The hydrogen won't accumulate there.

Re:Outgassing hydrogen? (1)

mlwmohawk (801821) | more than 5 years ago | (#27432927)

No. A PC case is a well ventilated environment. The hydrogen won't accumulate there.

How many of these hydrogen factories are there per cubic yard?

April fool? (0)

Anonymous Coward | more than 5 years ago | (#27432411)

April Fool?

Who swaps out all those dead batteries? (2, Interesting)

wsanders (114993) | more than 5 years ago | (#27432429)

Hundreds of thousands of servers == thousands of dead batteries each month, since those batteries don't last more than a few years.

Now I'd think their design could be gentle on the 12V batteries, since it's possible to design UPSes that don't murder batteries at the rate cheap store-bought UPSes do. But still, they must have an army of droids swapping out batteries on a continuous basis.

Or maybe they are more selective, and only swap out batteries on hosts that have suffered one or two outages. It only takes one or two instances of draining a gel cell to exhaustion before it is unusable.
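
The arithmetic behind that estimate, as a sketch; the fleet size and battery lifetime are assumptions (the summary only says "hundreds of thousands of servers").

servers = 300_000          # assumed fleet size
battery_life_years = 4.0   # assumed service life of a gel cell kept on float charge

replacements_per_month = servers / (battery_life_years * 12)
print(f"~{replacements_per_month:,.0f} battery swaps per month")  # ~6,250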

Re:Who swaps out all those dead batteries? (4, Insightful)

WPIDalamar (122110) | more than 5 years ago | (#27432553)

Or maybe they think bigger...

They're deploying containers of servers. Maybe when a container gets to a certain age or a certain failure rate, they replace/refurbish the entire container.

I doubt they care if some of their nodes go down in a power outage as long as some percentage of them stay up.

Re:Who swaps out all those dead batteries? (4, Insightful)

mlwmohawk (801821) | more than 5 years ago | (#27432635)

Hundreds of thousands of servers == thousands of dead batteries each month, since those batteries don't last more than a few years.

I would imagine that the battery replacement schedule mimics the server obsolescence perfectly.

LOL, when the battery catches fire, time to replace the server.

fascinating read. (1)

DragonTHC (208439) | more than 5 years ago | (#27432439)

I am intrigued at the idea of a battery as the power supply. This means you can use a smaller inverter with high quality components and it will produce much less heat. With the supply being a battery, I would imagine you have much less worry about ripple as well. I wonder what happens if a battery explodes?

I would guess they are measuring battery temperature as well as other temps. Batteries only explode when they get really hot. So I would imagine the machine turns off before it heats up too much.

Re:fascinating read. (1)

mlwmohawk (801821) | more than 5 years ago | (#27432655)

I am intrigued at the idea of a battery as the power supply.

Same basic idea:
http://www.linuxpcrobot.org/ [linuxpcrobot.org]

A quick peek at the pictures says a lot (3, Insightful)

Khopesh (112447) | more than 5 years ago | (#27432459)

This is composed purely of commodity parts. The power supply is the same thing you'd buy for your desktop, those are SATA disks (not SAS), and that looks like a desktop motherboard (see the profile view where all the ports on the "back" are lined up in the same manner they would need for a standard desktop enclosure).

Only the battery is custom (or even non-consumer grade), and note that since the battery feeds in after the PSU, it's supplying DC power. DC is significantly better than AC here, since otherwise the PSU has to convert AC to DC (which wastes power and generates needless heat). While you can get DC battery supplies for server-grade systems, these are not server-grade systems. Built-in DC battery backup therefore lets them keep the motherboards cheap. Very smart.

Also, if you recall from a few months ago, Google has applied pressure on its suppliers (I'm not sure why Dell comes to mind...) to develop servers that can tolerate a significantly higher operating temperature (IIRC, they wanted a 20 degree (Fahrenheit?) boost). I wouldn't be surprised if the higher temperature cuts down on operating expenses more than smarter battery placement does.

Re:A quick peek at the pictures says a lot (1)

fm6 (162816) | more than 5 years ago | (#27432981)

those are SATA disks (not SAS)

SATA and SAS both rate as "commodity" technologies. In the server world, you don't pay extra for SAS unless you really need the higher burst speed and reliability.

I work for Sun, and most of our servers take SAS or SATA. That's not hard to do, because spec-compliant SAS host bus adapters also support SATA drives. (Which should tell you about the importance of SATA in the server world.) The exceptions are our storage servers [sun.com] , which are SATA only.

Too Bad, that they do not carry it further (2, Insightful)

WindBourne (631190) | more than 5 years ago | (#27432485)

It dawned on me the other day how little innovation occurs in our industry EXCEPT by hungry companies. For example, desktops and laptops have not really changed, and both have a piss-poor design. About 4 years ago, it dawned on me that a much better way to design these is to merge them. Basically, different cases, where the laptop case has a keyboard and a monitor hookup while the desktop case omits the keyboard. The smart move is to move the battery OUT of the case and into the power supply. Right now, you do not get to buy variable amounts of battery. But a company would do well to sell an external power supply with varying storage capacities and a simple 12V line. In this fashion, people can pick the parts for a laptop the way they do for a desktop, while the desktop gets to take advantage of the drop in prices of the laptop lineage.

Re:Too Bad, that they do not carry it further (1)

Amazing Quantum Man (458715) | more than 5 years ago | (#27432711)

The smart move is to move the battery OUT of the case and into the power supply

That'll be real useful when you want to use your laptop where there's no external power.

mass servers = "21st century energy refineries"? (2, Informative)

peter303 (12292) | more than 5 years ago | (#27432487)

Peter Huber [manhattan-institute.org] in his book [manhattan-institute.org] on energy policy introduces the concepts of the "energy pyramid" and "energy refining". The thesis is that newer forms of energy technology use more technology and are subsequently more useful. The pyramid levels include wood, coal, petroleum, electricity, computing, and optical. When I read the book a few years ago I always found it curious that he included computing in the pyramid. But when I hear about the aggregate gigawatts of the hundreds of mass server farms in the world, it starts to make sense. The web has transformed human technology, and the server farms are the battery of the web. When Huber wrote the book he used the example of the automobile: it started out running mostly on petroleum energy, then acquired more electrical subsystems, and now more computing.

Re:mass servers = "21st century energy refineries" (0)

Anonymous Coward | more than 5 years ago | (#27432549)

He used a car as a metaphor to explain technology? Brilliant!

What's his /. userid?

Quote (0)

Anonymous Coward | more than 5 years ago | (#27432493)

"I worked 14-hour days for two and a half years." -Ben Jai, Google engineer

And I thought these guys were supposed to be smart?

April Fool's (0)

Anonymous Coward | more than 5 years ago | (#27432525)

This is an April fool's joke. Look at the date of the article.

Also the two hard drives are plugged into each other.

Still an interesting idea though.

Fark scooped Slashdot (-1, Troll)

Anonymous Coward | more than 5 years ago | (#27432543)

Ha ha. I already read this story through Fark this morning. All the comments on this topic have already made their course. Nothing to see here at Slashdot. Move along.

Power supply design? (2, Insightful)

derGoldstein (1494129) | more than 5 years ago | (#27432633)

"Google's designs supply only 12-volt power, with the necessary conversions taking place on the motherboard"

This seems to be a more interesting point than the battery part. 12V-only?
This means that there's some serious power conversion done on each of the motherboards, and with SMPS evolving at the rate that it is, this could be relevant to anything larger than a laptop.
How much exactly is gained by making such a big change, to the point where you'd need to redesign all of your motherboards, each time for each different chipset? (They mention they use both Intel and AMD.)

Will this particular change make it into desktops? How much *more* efficient would it make the overall system?
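
One back-of-the-envelope argument for a single 12V feed, sketched below with hypothetical numbers: for a fixed power, the resistive loss in cables and connectors scales with the square of the current, so distributing at a higher voltage wastes less between the supply and the board.

def wiring_loss_w(power_w, volts, resistance_ohm):
    # Approximate I^2 * R loss in the cabling for a given delivered power.
    current = power_w / volts
    return current ** 2 * resistance_ohm

power_w = 200.0   # assumed board power
r_path = 0.01     # assumed cable + connector resistance, ohms

for v in (12.0, 5.0, 3.3):
    print(f"{v} V feed: ~{wiring_loss_w(power_w, v, r_path):.1f} W lost in wiring")

# 12 V: ~2.8 W, 5 V: ~16.0 W, 3.3 V: ~36.7 W for the same 200 W delivered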

FCC? UL? (1)

Deton8 (522248) | more than 5 years ago | (#27432685)

Why is it that Google doesn't have to worry about FCC, CE, or UL safety and EMC regulations? And why are they using a RoHS prohibited battery which uses lead???

Re:FCC? UL? (1)

ID000001 (753578) | more than 5 years ago | (#27432871)

You don't need to meet safety regulations if you are not selling them but just using them yourself, and you've worked it out with your insurance company.

Keyboard, Mouse and two USBs? And slots? (3, Interesting)

wonkavader (605434) | more than 5 years ago | (#27432691)

I'm a little surprised by the keyboard and mouse port and the two USB ports. If it uses USB, why not just use that for the keyboard and mouse? And why the second USB port? I suspect the second port doesn't consume extra energy directly, but it causes air resistance where they'd like a clear path to drag air across the RAM and CPUs.

And why the slots which will never get used? In quantities like Google buys, you'd think those would be left off.

Maybe they don't make any demands on Gigabyte (the manufacturer) and just buy a commodity board? When they're buying this many, you'd think Gigabyte would be happy to make a simpler board for them. On a trivial search, I don't see the ga-9ivdp for sale anywhere, but maybe it's just old.

Air flow / cooling fans (2, Interesting)

ehud42 (314607) | more than 5 years ago | (#27432719)

So this sounds like one of those "so obvious, no one thought of it" questions - if Google is so concerned about precious milliwatts that it standardizes on 12V hardware to reduce the resistive losses of sending 5V & 3V power from the power supply to the board, why do the CPUs have fans???? The side view of the chassis seems to suggest that with a few minor tweaks the units could rely on passive cooling and use the data centre / container fans for airflow.

1) Move hotter components like the CPUs to the front and replace the fans with larger passive heat sinks.
2) Line up the RAM modules to ensure proper airflow to the back of the chassis, with the chipset heat sinks lined up accordingly.
3) Lay the HDs over the top of the voltage regulators, with appropriate heatsinks.
4) Power supply and battery at the rear.

Have the hot-air return ductwork arranged at the back of the rack with appropriate holes and seals so that the units make a good connection to maximize airflow.
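
For a sense of what "use the container fans for airflow" would demand, here's a rough estimate; the per-server heat load and allowable temperature rise are assumptions, and 1.08 is the standard sea-level constant for air when working in CFM and degrees Fahrenheit.

# Required airflow: CFM = (watts * 3.412 BTU/hr per W) / (1.08 * delta_T_F)
heat_load_w = 250.0   # assumed per-server dissipation
delta_t_f = 20.0      # assumed allowable air temperature rise across the chassis

cfm = heat_load_w * 3.412 / (1.08 * delta_t_f)
print(f"~{cfm:.0f} CFM per server")  # ~40 CFM with these numbers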

Am I the only one... (2, Funny)

hbr (556774) | more than 5 years ago | (#27433033)

...imagining a Beowulf cluster of these?

Aww - nevermind.
