When the Power Goes Out At Google

Soulskill posted more than 4 years ago | from the larry-and-sergey-ghost-stories dept.


1sockchuck writes "What happens when the power goes out in one of Google's mighty data centers? The company has issued an incident report on a Feb. 24 outage for Google App Engine, which went offline when an entire data center lost power. The post-mortem outlines what went wrong and why, lessons learned and steps taken, which include additional training and documentation for staff and new datastore configurations for App Engine. Google is earning strong reviews for its openness, which is being hailed as an excellent model for industry outage reports. At the other end of the spectrum is Australian host Datacom, where executives are denying that a Melbourne data center experienced water damage during weekend flooding, forcing tech media to document the outage via photos, user stories and emails from the NOC."


135 comments


Nothing to see here people. (0)

Anonymous Coward | more than 4 years ago | (#31401462)

Google was open. What exactly is the issue?

Re:Nothing to see here people. (1)

teknopurge (199509) | more than 4 years ago | (#31401666)

The fact there was one.

Really Quiet (0)

Anonymous Coward | more than 4 years ago | (#31401478)

It gets really quiet.

what about having people onsite? (2, Insightful)

alen (225700) | more than 4 years ago | (#31401480)

Aren't there any people in the data center to tell them that yes, there has been a power outage, such-and-such machines are affected, etc.? Sounds like all they have is remote monitoring, and if something happens then someone has to drive to the location to see what's wrong.

Re:what about having people onsite? (0)

Anonymous Coward | more than 4 years ago | (#31401548)

You are thinking too small-scale. Of course there are people on-site. Google has data centers all over the world -- how are they going to drive there?

Re:what about having people onsite? (1, Funny)

Anonymous Coward | more than 4 years ago | (#31401954)

You are thinking too small-scale. Of course there are people on-site. Google has data centers all over the world -- how are they going to drive there?

http://en.wikipedia.org/wiki/DUKW [wikipedia.org]

'nuff said.

Re:what about having people onsite? (0)

Anonymous Coward | more than 4 years ago | (#31402016)

No silly, those are for trans-oceanic Street View.

Re:what about having people onsite? (1, Troll)

dch24 (904899) | more than 4 years ago | (#31401856)

What I want to know is, what caused the outage?

The post on the google-appengine group details all the things they did wrong and are going to fix, after the power went out. Fine, I have to plan for outages too. But what caused the unplanned outage?

Re:what about having people onsite? (3, Insightful)

nedlohs (1335013) | more than 4 years ago | (#31402160)

Who cares?

Power failures are expected, what you can do is have plans for when they occur - batteries, generators, service migration to other sites, etc, etc. Those plans (and the execution of them) are what they had problems with.

multiple datacenters (1)

Colin Smith (2679) | more than 4 years ago | (#31402940)

Power failures are expected, what you can do is have plans for when they occur - batteries, generators, service migration to other sites, etc, etc

Too small scale, too complex, too much human intervention and too unreliable. Minimum of 2 datacenters on opposite sides of the world and you only send half the traffic to each. When the first vanishes the second picks up the traffic. The exact mechanism depends on the level of service you want to provide.
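A minimal sketch of that active-active idea in Python; the endpoints and health-check path are hypothetical, and in practice the split and failover usually happen at the DNS or load-balancer layer rather than in application code:

<ecode>
import random
import urllib.request

# Hypothetical endpoints for two datacenters on opposite sides of the world.
DATACENTERS = [
    "https://us.example.com",
    "https://ap.example.com",
]

def healthy(base_url, timeout=2):
    """Treat a datacenter as healthy if its (hypothetical) health endpoint answers 200."""
    try:
        with urllib.request.urlopen(base_url + "/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_datacenter():
    """Normally split traffic roughly 50/50; if one site vanishes, send everything to the other."""
    live = [dc for dc in DATACENTERS if healthy(dc)]
    if not live:
        raise RuntimeError("no datacenter reachable")
    return random.choice(live)

if __name__ == "__main__":
    print("routing request to", pick_datacenter())
</ecode>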

 

Re:what about having people onsite? (1)

afidel (530433) | more than 4 years ago | (#31403150)

No, the question is why did the end users *see* the power outage? I would guess Google's insistence on using cheap motherboards with local batteries and non-redundant PSUs bit them in the butt here. In a properly designed and maintained datacenter the loss of main power and a single generator won't take out a single server or piece of networking gear, but Google has gone with the RAED (redundant array of expensive datacenters) model instead of the traditional dual PSU, dual PDU, dual UPS, dual generator with redundant data paths setup typical of an HA datacenter.

Re:what about having people onsite? (2, Insightful)

hedwards (940851) | more than 4 years ago | (#31402382)

My parents once lost power for several hours because a crow got fried in one of the transformers down the street. People around here lose power from time to time when a tree falls on a line. Unplanned power outages are going to happen. Even though line reliability is probably higher now than at any time in the past, it still happens and companies like Google that rely upon it being always there should have plans.

This isn't just about keeping the people that use Google services informed, this is an admission that there's something to fix and that they're going to fix what they can. There isn't any particular reason why they need to disclose such plans beyond being a huge player and not wanting to scare away the numerous people that count on them for important work.

Re:what about having people onsite? (0, Flamebait)

dave562 (969951) | more than 4 years ago | (#31405156)

This just goes to show that Google is as "incompetent" as anyone else. There was a discussion on here the other day where a poster asked why Microsoft, with all of their resources, hasn't come up with a secure OS yet. It was suggested that the know-how to create such an OS is out there, and it would just take money and will on Microsoft's part. This seems like the Google equivalent.

Google is trying to push Apps as a replacement for Exchange and Office. They are trying to push it as a replacement for hosting in house. I steered my organization away from Apps for the time being because I wasn't impressed with their support and there are a whole slew of other people who feel like they are being jerked around by Google for what should be simple support issues. It is not reassuring that Google hasn't gotten high availability down yet for one of their flagship products. I'm glad that they are being transparent about where they screwed up, but come on now, really? They haven't figured out fail-over yet? This is Google, the multi-hundred billion dollar organization. They can't fail-over one of their core offerings?

Useless for large scale problems (5, Interesting)

mcrbids (148650) | more than 4 years ago | (#31402172)

Of COURSE there are people onsite. Most likely they have anywhere from a dozen to a hundred people onsite. But what's that going to do for you in the case of a large-scale problem?

The otherwise top rated 365 Main [365main.com] facility in San Francisco went down a few years ago. They had all the shizz, multipoint redundant power, multiple data feeds, earthquake-resistant building, the works. Yet, their equipment wasn't well equipped to handle what actually took them down - a recurring brown-out. It confused their equipment, which failed to "see" the situation as one requiring emergency power, causing the whole building to go dark.

So there you are, with perhaps 25 staff in a 4-story building with tens of thousands of servers, the power is out, nobody can figure out why, and the phone lines are so loaded they're worthless. Even when the power comes back on, it's not like you are going to get "hot hands" in anything less than a week!

Hey, even with all the best planning, disasters like this DO happen! I had to spend two nerve-wracking days driving to S.F. (several hours' drive) to witness a disaster zone. HUNDREDS of techs just like myself carefully nursing their servers back to health, running disk checks, talking in tense tones on cell phones, etc.

But what pissed me off (and why I don't host with them anymore) was the overly terse statement that was obviously carefully reviewed to make it damned hard to sue them. Was I ever going to sue them? Probably not, maybe just ask for a break on that month's hosting or something. I mean, I just want the damned stuff to work, and I appreciate that even in the best of situations, things *can* go wrong.

So now I host with Herakles data center [slashdot.org] which is just as nice as the S.F. facility, except that it's closer, and it's even noticeably cheaper. Redundant power, redundant network feeds, just like 365 Main. (Better: they had redundancy all the way into my cage; 365 Main just had redundancy to the cage's main power feed.)

And, after a year or two of hosting with Herakles, they had a "brown-out" situation, where one of their main Cisco routers went partially dark, working well enough that their redundant router didn't kick in right away, leaving some routes up and others down while they tried to figure out what was going on.

When all was said and done, they simply sent out a statement of "Here's what happened, it violates some of your TOS agreements, and here's a claim form". It was so nice, and so open, that out of sheer goodwill, I didn't bother to fill out a claim form, and can't praise them highly enough!

Re:Useless for large scale problems (4, Insightful)

Critical Facilities (850111) | more than 4 years ago | (#31403100)

The otherwise top rated 365 Main [365main.com] facility in San Francisco went down a few years ago. They had all the shizz, multipoint redundant power, multiple data feeds, earthquake-resistant building, the works. Yet, their equipment wasn't well equipped to handle what actually took them down - a recurring brown-out. It confused their equipment, which failed to "see" the situation as one requiring emergency power, causing the whole building to go dark.

I think you made the right decision in changing providers. I remember that story about the 365 outage, and while I am too lazy to look up the details again, I recall it being as you're telling it. To that end, I'd simply say that they most certainly did have the proper equipment to handle the brown-out, but obviously not the proper management. If you're having regular (if intermittent) power problems (brown-outs, phase imbalances, voltage harmonic anomalies, spikes, etc.), just roll to generator, that's what they're there for.

I'm sick of people making the assumption that the operators of the facility were just at the mercy of a power quality issue because they have redundant power feeds and automatic transfer switches. Yes, in a perfect world, all the PLCs will function as designed, and the critical load will stay online by itself. However, it takes some foresight and some common sense sometimes to make a decision to mitigate where necessary. I direct all my guys to pre-emptively transfer to our generators if there are frequent irregularities on both of our power feeds (e.g. during a violent thunderstorm, simultaneous utility problems, etc).

In other words, I'm agreeing with you that the service you received was unacceptable. Along with that (and in rebuttal to the parent post), I'm saying that it's not enough to talk about how they came back from the dead, but why they got there in the first place.
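A toy sketch of the pre-emptive transfer policy described above, in Python. The thresholds are made up, the actual transfer step is left out, and in a real facility this is an operator judgment call backed by the building management system, not a script:

<ecode>
from collections import deque
import time

WINDOW_SECONDS = 600   # look at the last 10 minutes of events (made-up threshold)
MAX_ANOMALIES = 3      # per feed, before utility power stops being trusted (made-up)

class FeedMonitor:
    """Tracks recent power-quality anomalies (sags, spikes, phase imbalance) on one feed."""
    def __init__(self):
        self.events = deque()

    def record_anomaly(self, now=None):
        self.events.append(time.time() if now is None else now)

    def recent_anomalies(self, now=None):
        now = time.time() if now is None else now
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        return len(self.events)

def should_transfer_to_generator(feed_a, feed_b, now=None):
    """Pre-emptively roll to generator only when BOTH utility feeds look unreliable."""
    return (feed_a.recent_anomalies(now) >= MAX_ANOMALIES and
            feed_b.recent_anomalies(now) >= MAX_ANOMALIES)
</ecode>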

Re:Useless for large scale problems (1)

interkin3tic (1469267) | more than 4 years ago | (#31404786)

But what pissed me off (and why I don't host with them anymore) was the overly terse statement that was obviously carefully reviewed to make it damned hard to sue them. Was I ever going to sue them? Probably not, maybe just ask for a break on that month's hosting or something.

You wouldn't, but come on, you know how we Americans are. We sue when we can't play Halo for a few days [gamespot.com].

Chances aren't bad that someone was looking for a lawsuit; heading it off at the pass had a chance to prevent some stupid lawsuits which would waste time and only benefit lawyers, possibly requiring some invasive, poorly thought-out court-ordered hindrance which would have slowed the recovery.

Re:what about having people onsite? (0)

Anonymous Coward | more than 4 years ago | (#31402262)

Google doesn't use traditional data centers. They build theirs out of modules constructed from shipping containers. cf. Google Data Center Video [blogoscoped.com], data center secrets revealed [engadget.com].

So, remote monitoring, and then someone goes to check the module the alarm came from. They may have to walk 100 meters to get to the module, though.

An "Incident"? (0, Offtopic)

No Lucifer (1620685) | more than 4 years ago | (#31401506)

Jack must've forgotten to enter the code...

Re:An "Incident"? (1)

GNUALMAFUERTE (697061) | more than 4 years ago | (#31402520)

So, rewatching season 2? The addiction is terrible. I recommend a dose of Flashforward...

Isn't this part of their SLA? (0, Troll)

HerculesMO (693085) | more than 4 years ago | (#31401512)

I thought that contracts required Google to disclose the cause and time of their downtime, and this disclosure is part of that.

Right now though, Google is making Microsoft look like they have better uptime for SaaS.

Re:Isn't this part of their SLA? (2, Insightful)

hedwards (940851) | more than 4 years ago | (#31402404)

That's the downside: anytime you acknowledge a mistake, you end up looking like you have more of them than the idiots who have hundreds of mistakes that they don't disclose until they're caught.

Re:Isn't this part of their SLA? (1)

eth1 (94901) | more than 4 years ago | (#31403310)

Depends on who's doing the shopping.

If you're looking for a serious hosting facility, then incident response should be one of the things you look at. If they haven't had an incident*, then you have no idea how they'll handle it when (not if!) one happens. They can hand you all the documentation in the world, but that can't speak to execution.

* that they've admitted to

and what about openess during the incident? (-1)

alen (225700) | more than 4 years ago | (#31401528)

If I called Google support during the incident, would I have been told the truth? Or would they have told me that everything is fine and to check my end?

Re:and what about openess during the incident? (3, Funny)

theIsovist (1348209) | more than 4 years ago | (#31401612)

Glenn Beck, is that you!?

Re:and what about openess during the incident? (1, Insightful)

dburkland (1526971) | more than 4 years ago | (#31402228)

Keith Olbermann, is that you!?

Fixed that for you

Re:and what about openess during the incident? (1)

Spazztastic (814296) | more than 4 years ago | (#31403770)

$Political_Pundit_I_Disagree_With, is that you!?

Fixed that for you

No, I think I got it right this time.

Read the comments (5, Insightful)

RaigetheFury (1000827) | more than 4 years ago | (#31401574)

I pity EvilMuppet. Guy is a tool. There are contractual agreements in place to prevent pictures, aka the "rules", but when the data center blatantly LIES, they are breaking the trust and violating the agreement. Case law exists where a contract can be broken when one party accuses the other of violating said contract.

That's what happened. The data center was lying about what happened to avoid responsibility for the equipment it was being paid to host. Pictures were taken and are being used to prove the company did violate the trust of the contract.

You can argue the semantics and legality of it but if this goes to court the pictures will be admissible and the data center will lose.

Re:Read the comments (0)

Anonymous Coward | more than 4 years ago | (#31401652)

EvilMuppet might just be the DC's SockPuppet :)

Re:Read the comments (1)

houghi (78078) | more than 4 years ago | (#31402022)

An interview with him from a previous 'non-event' : http://www.youtube.com/watch?v=WcU4t6zRAKg [youtube.com]

Re:Read the comments (1)

1_brown_mouse (160511) | more than 4 years ago | (#31403906)

I love those guys.

They did a comedy show building up to the 2000 Sydney Olympics.

http://en.wikipedia.org/wiki/The_Games_(Australian_TV_series) [wikipedia.org]

Spawned "The Office" style of pseudo-documentary. Excellent show.

Search: clarke dawe "the games" on youtube to see some clips.

Re:Read the comments (0)

sexconker (1179573) | more than 4 years ago | (#31403530)

There is no such thing as case law.
There is legal precedent, which is not law.

Judges and lawyers who follow precedent are lazy, spineless fucks.

Each case should be decided upon according to the spirit and letter of the law.

"Case law" is not law that regulates people.
"Case law" is legalese for "someone else did the hard work before us, and until someone else less lazy than me comes along and argues against it, this is what we'll do - no need to rock the boat".

It is either law or precedent.
If it is law, it must be used to judge (both letter AND spirit). It potentially can be nullified by the jury, or later changed by the legislature, but the lawyers and judge have no power over those processes beyond their individual vote.

If it is precedent, it is simply a tool to be used to speed up the legal process in similar cases. It is not a means of "stabilizing" law. It is not binding. All cases are to be judged individually. Our courts are based on the idea that judges fuck up - we have appeals. It is folly to look at precedent as anything more than a program of what arguments each side will likely make and what laws are likely in play.

Re:Read the comments (1)

DragonWriter (970822) | more than 4 years ago | (#31405536)

There is no such thing as case law.

Yes, there is.

There is legal precedent, which is not law.

Whether and to what extent legal precedent is binding and, as such, "law" depends on the legal system; there are some legal systems in which it is not, and some in which, in specific circumstances, it is. The latter is true of the UK and many former British colonies, including the USA, Canada, and Australia, among others.

Judges and lawyers who follow precedent are lazy, spineless fucks.

Judges who follow precedent that is binding on them are doing their job.

Lawyers who fail to recognize and cite applicable precedent are doing poor service to their client, and are potentially, in extreme cases, liable for malpractice.

title should read "Google App Engine NOT a Cloud" (1, Funny)

Anonymous Coward | more than 4 years ago | (#31401576)

Obviously if the power goes out, and the service goes offline, then it WASN'T a cloud. If it's a cloud, it can't go down. If it goes down, it wasn't a cloud.

What's there to get?

Re:title should read "Google App Engine NOT a Clou (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31401638)

Even a cloud isn't effective if all the nodes go down, it's not magic.

Re:title should read "Google App Engine NOT a Clou (1, Funny)

Anonymous Coward | more than 4 years ago | (#31401858)

Whoosh.

Re:title should read "Google App Engine NOT a Clou (0)

Anonymous Coward | more than 4 years ago | (#31403588)

Wow, that cloud's on the move!

Re:title should read "Google App Engine NOT a Clou (1)

Davorama (11731) | more than 4 years ago | (#31401998)

Sounds more like fog to me.

Re:title should read "Google App Engine NOT a Clou (1)

RoFLKOPTr (1294290) | more than 4 years ago | (#31403654)

Obviously if the power goes out, and the service goes offline, then it WASN'T a cloud. If it's a cloud, it can't go down. If it goes down, it wasn't a cloud.

The cloud got too big and it rained.

They had a perfect contingency plan for this case (5, Funny)

juanjux (125739) | more than 4 years ago | (#31401636)

...but it was stored on Google Docs.

Significantly higher latency? (2, Interesting)

nacturation (646836) | more than 4 years ago | (#31401700)

A new option for higher availability using synchronous replication for reads and writes, at the cost of significantly higher latency

Anyone know some numbers around what "significantly higher latency" means? The current performance [google.com] looks to be about 200ms on average. Assuming this higher availability model doesn't commit a DB transaction until it's written to two separate datacenters, is this around 300 - 400ms for each put to the datastore?

Re:Significantly higher latency? (1)

sexconker (1179573) | more than 4 years ago | (#31403630)

No, because the writes should happen in parallel.
No need to write, confirm, write, confirm, commit.

Just write both, confirm both, commit.

Then of course, you have to commit twice. And commit your commits.

And get a receipt for your husband.
And give the nice government man a receipt for the receipt you received.
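For what it's worth, a rough Python sketch of the parallel write-confirm-commit idea above; FakeReplica is a stand-in, not the App Engine datastore API, but it shows why synchronous replication to two sites should cost roughly the slower of the two writes rather than their sum:

<ecode>
from concurrent.futures import ThreadPoolExecutor

class FakeReplica:
    """Stand-in for a datastore in one datacenter (not the real App Engine API)."""
    def __init__(self, name):
        self.name = name
        self.pending, self.committed = {}, {}

    def write(self, key, value):
        self.pending[key] = value
        return True                      # acknowledge the write

    def commit(self, key):
        self.committed[key] = self.pending.pop(key)

    def rollback(self, key):
        self.pending.pop(key, None)

def replicated_put(key, value, replicas):
    """Write to all replicas in parallel; commit only if every replica acknowledged."""
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        acks = list(pool.map(lambda r: r.write(key, value), replicas))
    if all(acks):
        for r in replicas:
            r.commit(key)
        return True
    for r in replicas:
        r.rollback(key)
    return False
    # Latency is roughly max(per-replica write latency) plus commit overhead,
    # not the sum of sequential writes to each datacenter.

if __name__ == "__main__":
    dcs = [FakeReplica("us-east"), FakeReplica("eu-west")]
    print(replicated_put("user:42", {"name": "example"}, dcs))
</ecode>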

Re:Significantly higher latency? (1)

DragonWriter (970822) | more than 4 years ago | (#31405394)

Anyone know some numbers around what "significantly higher latency" means?

I suspect not, since the feature hasn't been implemented yet.

Don't they have (0)

Anonymous Coward | more than 4 years ago | (#31401712)

UPSes and backup generators? Or some other onsite emergency power (wind turbines, batteries, a bunch of illegals on treadmills, etc.)?

Re:Don't they have (5, Informative)

johnncyber (1478117) | more than 4 years ago | (#31401798)

Dude, RTFA (I know, I know, shame on me). The backup generators kicked in, but 25% of the machines in the data center did not receive power before crashing.

Re:Don't they have (1)

ElectricTurtle (1171201) | more than 4 years ago | (#31402308)

lol no UPS = fail

Re:Don't they have (1)

lucifuge31337 (529072) | more than 4 years ago | (#31402688)

lol you = don't know how datacenters work

Re:Don't they have (1)

Critical Facilities (850111) | more than 4 years ago | (#31403180)

I have to say, I think ElectricTurtle is right. If the generators came online as they're claiming, how could it be that 25% of the load dropped during transfer?? There's more to this story than is being told, and instead, they're focusing on how they came back online rather than why they went offline in the first place. I'd be willing to bet you that heads are rolling behind closed doors. If there were properly functioning UPSs in the building (either the large ones or the server-mounted batteries Google sometimes likes), then there shouldn't have been any outage on the transfer to generator.

Re:Don't they have (1)

lucifuge31337 (529072) | more than 4 years ago | (#31403320)

You are also assuming that all datacenters have and need UPSes. This is simply not the case. More and more facilities are going to flywheel generators, as maintaining batteries for the transfer time between mains and generator power is insanely expensive in floor space, labor, and replacement costs. Nothing in any of the linked content says what kind of generators they have, or anything about a UPS. Based on the simple fact that Google can afford, and makes it a priority, to hire top-notch talent and build things the right way, are you really telling me that you believe you and ElectricTurtle are smarter than the combined brainpower set loose by Google for building and maintaining this facility?

Re:Don't they have (3, Interesting)

Critical Facilities (850111) | more than 4 years ago | (#31403554)

First of all, the "flywheel generators" you're referring to are actually either standalone UPS systems or a part of a DRUPS (Diesel Rotary UPS). Here [buildingdesign.co.uk] is some information on one of the leading manufacturers of such equipment.

However, all of this is moot, since even if they had a flywheel setup as you're speculating, it still doesn't explain why 25% of the floor went down. If the equipment was installed, maintained and loaded properly, they should've been able to get to the generators with no problem.

are you really telling me that you believe you and ElectricTurtle are smarter than the combined brainpower set loose by Google for building and maintaining this facility?

No, I'm telling you that I manage a data center, and I know first hand how they work (or in this case, should work). I fail to see an adequate explanation of how this was unavoidable.

Re:Don't they have (2, Insightful)

DragonWriter (970822) | more than 4 years ago | (#31405662)

There's more to this story than is being told, and instead, they're focusing on how they came back online rather than why they went offline in the first place.

That's because they are focussing on what went wrong. Power losses, including ones that take down the whole data center, are accepted risks and part of the reason they have redundant data centers and failover procedures.

The failure wasn't that they had a partial loss at a datacenter. The failure was that the impact of that loss wasn't mitigated properly by the systems that were supposed to be in place to do that.

Re:Don't they have (0, Troll)

ElectricTurtle (1171201) | more than 4 years ago | (#31403260)

Yeah, I just imagined working in raised-floor, climate-controlled rooms. You don't know shit about me, nor could you from a four word drive-by. You just want to put people down because it does something for you. That behavior demonstrates that you are a pitiful excuse for a decent human being, congratulations! Piss off.

Re:Don't they have (1)

lucifuge31337 (529072) | more than 4 years ago | (#31403386)

That's a nice try at another troll.

You demonstrated that you don't know enough about modern data center design based on your 4 word comment. No further information was necessary.

Plenty of people who have worked in data centers wouldn't know this, so the fact that you may have worked in one is a moot point.

See the reply to the guy who also doesn't know this stuff that was trying to stick up for you. http://slashdot.org/comments.pl?sid=1575066&cid=31403320 [slashdot.org]

Re:Don't they have (1)

ElectricTurtle (1171201) | more than 4 years ago | (#31403846)

You would do better to see his reply to your reply. He's already putting you in your place so well that any similar effort by me would be redundant.

ups battery systems can fail (0)

Anonymous Coward | more than 4 years ago | (#31403852)

I have seen data centers crash. I do not think it is likely in this case, but twice I have seen issues with the UPS system take a datacenter down.

Once a battery in the UPS system blew up and sprayed acid on the wall as it crashed. Once a component inside died.

I have also seen them go down due to too much stuff plugged in, which is not really a UPS issue.

Funny thing is in the seven years I was working there, I never saw the mains drop. It could have happened for a few seconds but they came back up before I was paged.

Usually our data center was affected by HVAC issues.

Re:Don't they have (1)

Vancorps (746090) | more than 4 years ago | (#31403960)

The argument is simply that going without adequate battery power to handle transfer switching is asinine, and you seem to think that's normal data-center behavior. You would be the only one who thinks that would be proper redundancy; all the data centers I'm in have battery-backed transformers to handle the load while they switch to alternate power.

The most expensive data center I'm in even goes so far as to have an hour of battery time to handle generator failures during a power outage.

ElectricTurtle and Critical Facilities both have comments that mirror my own experience and echo every data center best practice. People without this kind of power protection are asking for problems. Google tried something against best practice, and despite us individually not having more brainpower than Google, collectively the likes of IBM, Microsoft, and every other large corporation with many large data centers have come to this conclusion. Many (and I'm looking at those lovely Texas data centers) keep trying to buck the best practice and, surprise surprise, it bites them in the ass.

That said, Google has a great track record, so I'm not going to call any of their practices into question; it sounds like the event was mishandled and that's why there was a service outage. Sometimes events are mishandled due to unforeseen circumstances, or someone didn't have their morning cup of coffee. That's why companies do post-mortems, and the fact that Google was so open about it is a good sign that the same situation won't lead to another outage, which is what matters given their stellar uptime.

Re:Don't they have (1)

Glendale2x (210533) | more than 4 years ago | (#31402728)

Actually no, Google doesn't use UPS systems if this is one of their designs that uses one small sealed lead acid battery per server.

Re:Don't they have (1)

Critical Facilities (850111) | more than 4 years ago | (#31403194)

I've heard a few rumors that they're re-thinking this strategy. I'm betting this event might keep those conversations going.

App Engine down again? (2, Insightful)

bjourne (1034822) | more than 4 years ago | (#31401768)

App Engine must be Google's absolute most poorly run project. It has been suffering from outages almost weekly (the status page [google.com] doesn't tell the whole truth, unfortunately), unexplainable performance degradations, data corruption (!!!), stale indexes and random weirdness for as long as it has been running. I am one of those who tried for a really long time to make it work, but had to give up despite it being Google and despite all the really cool technology in it. I pity the fool who pays money for that.

The engineers who work with it are really helpful and approachable, both on mailing lists and IRC, and the documentation is excellent. But it doesn't help when the infrastructure around it is so flaky.

ISO9001 (1, Insightful)

Anonymous Coward | more than 4 years ago | (#31401814)

This should be standard practice... It's like the good bits of ISO9001 with a bit more openness. When done right, ISO9001 is a good model to follow.

the worst nightmare of data center peeps (3, Interesting)

filesiteguy (695431) | more than 4 years ago | (#31401820)

I don't run a data center, but I manage systems that rely on the data center 18 hrs/day, 6 days/week. We pass upwards of $300M through my systems. I've yet to get a satisfactory answer as to exactly what would happen if - say - a water line breaks and floods all the electrical (including the dual redundant UPS systems) in the data center.

Re:the worst nightmare of data center peeps (2, Informative)

SmilingBoy (686281) | more than 4 years ago | (#31402220)

First, your servers will shut down ungracefully, and then they will be destroyed with little chance of recovery. You will then have to rebuild your systems and restore the data from the offsite backup. This will of course take time. If this is too much of a risk, you should run an alternate datacentre mirroring your primary databases that can go live within minutes.

Re:the worst nightmare of data center peeps (0)

Anonymous Coward | more than 4 years ago | (#31402344)

To summarize: you won't get much sleep for the next few weeks and your bosses will find a way to throw you under the bus for not "covering your bases".

Re:the worst nightmare of data center peeps (1)

mjwalshe (1680392) | more than 4 years ago | (#31402278)

Switch to the alternate DC - I worked for BT, and they set up an alternate DC across town for Telecom Gold just in case the Thames flooded.

Re:the worst nightmare of data center peeps (1)

afidel (530433) | more than 4 years ago | (#31403340)

*across town*!? Hmm, here in the States best practice (and legal requirements for certain industries) requires significantly more distance than that between DCs. Ours is just inside of reasonable driving range (6 hours), but is on a different power grid, gets different core services from our Tier-1 ISP, etc.

Re:the worst nightmare of data center peeps (1)

FlexAgain (26958) | more than 4 years ago | (#31404558)

Across town could be 20 miles away in London. The other side of the Thames is very likely to have its power and data coming from completely independent systems, even a different power station and a different part of the national grid.

Since BT was historically the only telecoms provider, even now they are plenty big enough to easily be in a position to have multiple independent data feeds, and if they all fail, nothing else in the capital is working anyway, so a DC's survival would be a minor issue.

A six hour drive from London going North would almost put you in Scotland, and in the other direction, you would have run out of land, and be well on your way to Paris if you crossed the Channel.

Re:the worst nightmare of data center peeps (1)

Hurricane78 (562437) | more than 4 years ago | (#31402510)

Well, I’m no expert, but it’s not very hard to make a building watertight, now is it?

Re:the worst nightmare of data center peeps (0)

Anonymous Coward | more than 4 years ago | (#31403126)

It is really hard:
1. Water can slowly eat the concrete away.
2. Water can go through microscopic openings.
Combine 1 and 2 and you get water infiltration.

floods (2, Insightful)

zogger (617870) | more than 4 years ago | (#31403272)

Did you ever actually see a big flood? Freaking awesome power, like a fleet of bulldozers. Smashes stuff, rips houses off foundations, knocks huge trees over, will tumble multiple ton boulders ahead of it, etc. Just depends on how big the flood is. We had one late last year here, six inches of rain in a couple of hours, just tore stuff up all over. The "building" that can withstand a flood of significant size exists, it is called a submarine. Most buildings of the normal kind just aren't designed to deal with anything that destructive. Some can resist minor floods, but not too many.

Re:floods (1)

Ant P. (974313) | more than 4 years ago | (#31404108)

The structure that can withstand a flood has existed for a lot longer than submersible warships - it's called a "hill". If you don't have one conveniently nearby to use you can even build an artificial one.

Re:floods (1)

zogger (617870) | more than 4 years ago | (#31405194)

A hill isn't a building. He was talking about waterproofing a building. Under normal conditions, sure, buildings are pretty good at keeping you from the weather, but in big floods most will suffer leakage or outright destruction. That's why you always see people trying to save their homes or businesses with sandbags. It just isn't that common for buildings to be built to withstand a bad flood. Some probably exist, but not too many. And yep, a good building on top of the biggest hill around would be the safest. I was just going for the cheap laugh mentioning a submarine; they are our tightest and strongest man-made structures built to keep humans away from too much water. So, if you built a building like a submarine, it might make it through a big flood.

Re:floods (2, Informative)

DragonWriter (970822) | more than 4 years ago | (#31405330)

The structure that can withstand a flood has existed for a lot longer than submersible warships - it's called a "hill". If you don't have one conveniently nearby to use you can even build an artificial one.

An "artificial hill" intended to protect an area from floods is usually called a "levee", and while certainly extremely useful for their intended purpose, they aren't exactly an ironclad guarantee. So having contingency plans for the case where they fail isn't a bad idea.

Re:the worst nightmare of data center peeps (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31402516)

I've yet to get a satisfactory answer as to exactly what would happen if - say - a water line breaks and floods all the electrical (including the dual redundant UPS systems) in the data center.

Then you are doing a rubbish job at your job! I would never hire somebody who wouldn't ask this question up front before making a hosting decision, and having this decision made before you joined is no excuse. You are another typical Slashdot failure of a sysadmin. Disaster recovery IS a sysadmin's primary job. If your network cannot handle a disaster like this, then your network is rubbish.

"no online database will replace your daily news" (1)

SlappyBastard (961143) | more than 4 years ago | (#31401862)

OMFG! There's swinging at an outside pitch and there's trying to hit one that was thrown in the fuckin' stands!!

Huh? (2, Informative)

SlappyBastard (961143) | more than 4 years ago | (#31401874)

How did I end up in this article? Ah!!!

Generators plus UPS FTMFW (2, Insightful)

Anonymous Coward | more than 4 years ago | (#31401876)

Epic fail.

Any data center worth its weight in dirt must have UPS devices sufficient to power all servers plus all network and infrastructure equipment, as well as the HVAC systems, for at least 2 full hours on batteries, in case the backup generators have difficulty getting started up and online.

Any data center without both adequate battery-UPS systems plus diesel (or natural gas or propane powered) generators is a rinky-dink, mickey-mouse amateur operation.

Re:Generators plus UPS FTMFW (1)

ElectricTurtle (1171201) | more than 4 years ago | (#31402288)

Yeah, seriously. I worked for a mid-size company that had a very modest server farm (it was a retail-related business), and even we had everything switch to diesel the instant the grid went down. Since our switches were PoE, our phones were VoIP, and our computers were laptops, it was like there was no power outage at all. We'd be on the phone with one of our stores and just say 'oh, the power went out, well, back to your issue...'

It's hard to believe that freakin' Google wouldn't be at that level...

Re:Generators plus UPS FTMFW (1)

mjwalshe (1680392) | more than 4 years ago | (#31402306)

Quite. There's a comment somewhere else about how 365 Main was highly regarded, lol - if everything isn't running off the batteries 24/7, it ain't a real datacentre.

Re:Generators plus UPS FTMFW (0)

Anonymous Coward | more than 4 years ago | (#31402350)

Two full hours requires a MASSIVE battery capacity. It's far more feasible to count on 10-15 minutes from the batteries and make sure your generators start up promptly.

Also, some of Google's datacenters (not sure if this is one of them) dispense with many centralized batteries in favor of building the battery into each server alongside the PSU. This avoids some issues with AC->DC->AC conversion, leaving them with just AC->DC at each server. I'm just speculating, but it's possible that generator startup went as planned and the 25% of servers that didn't survive the outage turned out to have too-short battery life on their local battery packs. Hard to verify battery performance without a live test...
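A back-of-the-envelope sketch of why two hours of central battery is "MASSIVE"; the 1 MW load and the lead-acid figures below are assumptions for illustration, not numbers from the article:

<ecode>
# Rough battery sizing for a hypothetical 1 MW critical load (illustrative numbers only).
LOAD_KW = 1000.0          # assumed critical load
WH_PER_KG = 35.0          # roughly typical VRLA energy density, Wh per kg
USABLE_FRACTION = 0.8     # don't discharge lead-acid all the way to zero

def battery_mass_kg(runtime_minutes):
    energy_wh = LOAD_KW * 1000 * (runtime_minutes / 60.0)
    return energy_wh / (WH_PER_KG * USABLE_FRACTION)

for minutes in (15, 120):
    print(f"{minutes:4d} min of ride-through -> ~{battery_mass_kg(minutes)/1000:.0f} tonnes of VRLA")
</ecode>

Under those assumptions, 15 minutes is on the order of 9 tonnes of batteries and 2 hours is on the order of 70 tonnes, which is why most designs only carry enough battery to cover generator start.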

Re:Generators plus UPS FTMFW (1)

Glendale2x (210533) | more than 4 years ago | (#31402930)

That's what I was thinking; the local battery design that was previously praised became the fault. A large central UPS can monitor and test its batteries more than just plugging an SLA battery into the DC side of a server power supply and patting yourself on the back for being a genius. A UPS gives more telemetry, too. How did Google monitor those individual batteries? Not all SLA batteries are perfect. Were they tested and maintained? I'm guessing "no" to both if 25% of the servers lost power before the generators started (probably 10 to 30 seconds, which isn't that much). How long was the generator start window? TFA doesn't say anything about that.

Google fails to address what caused the outage (beyond that the power went out). I've read some comments here saying that's not important, just how they handled it afterward is important. I disagree; if their no-UPS design has some fundamental flaws in it, they should admit it and address it, even if that means going back to a traditional centralized UPS.

Re:Generators plus UPS FTMFW (1)

DragonWriter (970822) | more than 4 years ago | (#31405230)

Google fails to address what caused the outage (beyond that the power went out).

This is false. Google details at some length the causes of the customer-facing outage. The power going out is an early problem, but it's not a particularly important issue because that's an accepted risk in their plans. The failure was in the fact that the procedures that are intended to prevent a power loss at a data centre from producing a customer-affecting outage had inadequate coverage of partial losses of power, and on top of that were not executed properly (in part due to inadequate training on the procedures, in part because of outdated and incomplete documentation of the procedures).

Google has redundant data centers for a reason -- so that anything that causes one to go down doesn't affect operations that rely on them. It's not a "failure" if a data center goes down due to one of the risks accepted in the design of the individual data center. It only becomes a failure if the redundancy doesn't work as intended. The reasons for that failure -- and Google's plans for dealing with them -- are addressed, at some length, in the published post-mortem.

Re:Generators plus UPS FTMFW (1)

Glendale2x (210533) | more than 4 years ago | (#31405572)

This is false. Google details at some length the causes of the customer-facing outage.

I only sort of skimmed over TFA to get the big points, but if you can point out the part where they explain why 25% of the servers lost power, I'd appreciate it.

Re:Generators plus UPS FTMFW (1)

Pentium100 (1240090) | more than 4 years ago | (#31403724)

Of course you can verify battery performance safely. My UPS has a battery test (checks if the batteries can still be used; if it fails, the batteries need replacing) and a runtime calibration (discharges the batteries to 25% and monitors how long it took; based on that, it can estimate how long it will be able to hold the load). Whatever system Google is using should be able to check the batteries while power is on, so that you don't end up with batteries that have 20% of their original capacity when the power goes down.
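The runtime-calibration arithmetic is simple enough to sketch; the numbers here are hypothetical, and the linear-discharge assumption is only an approximation for lead-acid batteries:

<ecode>
def estimated_runtime_minutes(calibration_minutes, discharged_fraction=0.75):
    """Scale a partial calibration discharge (e.g. 100% down to 25%) to a full-runtime estimate.

    Assumes discharge is roughly linear at constant load; real lead-acid behaviour
    (Peukert effect, ageing) makes this an estimate, not a guarantee.
    """
    return calibration_minutes / discharged_fraction

# Hypothetical example: discharging from 100% down to 25% of capacity took 18 minutes.
print(f"estimated full runtime: ~{estimated_runtime_minutes(18):.0f} minutes")
</ecode>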

Re:Generators plus UPS FTMFW (3, Insightful)

Tynin (634655) | more than 4 years ago | (#31402656)

You are so cute. I know very little about UPS systems, but when I was working in a datacenter that housed 5000 servers, we had a two-story room that was twice the size of most houses (~2000 sq ft) with rows and rows of batteries. I was told that in the event of a power outage, we had 22 minutes of battery power before everything went out. The idea of having enough for 2 hours would have been an interesting setup considering how monstrously large this one already was. Besides, I'm unsure why you'd ever need more than that 22min since that is plenty of time for our on site staff to gracefully power down any of our major servers if the backup generator failed to kick in.

Re:Generators plus UPS FTMFW (-1, Troll)

shish (588640) | more than 4 years ago | (#31404616)

Besides, I'm unsure why you'd ever need more than that 22min since that is plenty of time for our on site staff to gracefully power down any of our major servers if the backup generator failed to kick in.

You consider powering down major servers to be a good option? Smells like an opinion from Microsoft land (where "planned downtime" counts as "uptime", and an "uptime" of 95% is "acceptable"...)

Re:Generators plus UPS FTMFW (2, Informative)

Richard_at_work (517087) | more than 4 years ago | (#31405392)

In your rush to criticise 'Microsoft land', you must have overlooked his closing statement regarding 'if the backup generator failed to kick in'.

You cannot have uptime without power. A mains outage coupled with an unexpected generator failure *will* result in downtime - your decision now is whether you wish your servers to be gracefully shutdown, or just have the rug pulled from under them and hours or days of potential angst as a result. Which is it?

And before you suggest larger UPSes for longer protection, consider why you have both a generator and a UPS in the first place - UPSes cost a lot, they cost a lot to buy, and they cost a lot to maintain, and then they cost a lot to replace after only a few years. A generator in comparison costs a lot less all round.

Re:Generators plus UPS FTMFW (2, Funny)

Darth_brooks (180756) | more than 4 years ago | (#31402788)

Yeah, and when the guys at the Jesus Christ of Datacenters that you describe have to do something like, say, switch from generator to utility power manually, and the document that details that process is 18 months old and refers to electrical panels that don't exist anymore, you get what you had here: a failure of fail-over procedures. If the lowliest help desk / operator can't at least understand the documentation you've written, then you've failed.

The only equipment failure listed is a "power failure." Granted, that can be as simple as "car hits a telephone pole and knocks out a chunk of the grid, leaving your office in the dark", which should be an easily survivable event. But how do you handle a failure like "50kVA inline UPS shits the bed, leaving nothing but a smoking chassis that no one wants to go anywhere near?" or "HVAC unit fails on Christmas Eve when only a skeleton staff is on duty and fills the raised floor with 8 inches of water, shorting everything within an inch of its life and making it impossible to bring any hosted services back online?"

There's nothing like a little bit of "we had no idea these three or four unrelated circumstances could happen simultaneously" disaster porn to make you realize that A. Outage / DR / fail-over planning is more than just throwing money at stuff (UPS's, generators, redundant lines, etc) and B. No matter how good your plan is, it will never be 100% effective.

Re:Generators plus UPS FTMFW (1)

DragonWriter (970822) | more than 4 years ago | (#31405096)

Any data center worth its weight in dirt must have UPS devices sufficient to power all servers plus all network and infrastructure equipment, as well as the HVAC systems, for at least 2 full hours on batteries, in case the backup generators have difficulty getting started up and online.

Google's setup appears to rely on the fact that they have redundant data centers, so failover to another data center addresses this problem. The problem here, as identified in their post-mortem, is that for training and other reasons the failover wasn't handled correctly.

Since there are sources of data center failure that having UPS + Generator backup won't help with at all, for something like this redundant data centers are essential whether or not you use UPS + Generator backup. Once you have redundant data centers, this problem should be solvable with failover. So, I think Google's general approach was reasonable from the start, as are their plans (detailed in the post-mortem) to address the failure by addressing the training and other issues which prevented failover plans from being properly executed.

When the Power Goes Out At Google... (3, Informative)

binaryseraph (955557) | more than 4 years ago | (#31402066)

...a fairy dies.

Re:When the Power Goes Out At Google... (1)

Colz Grigor (126123) | more than 4 years ago | (#31402810)

...a fairy dies.

I suspect that this will result in a large overpopulation of fairies. Since Google would be to blame for this, perhaps they should begin some sort of fairy mitigation program?

Re:When the Power Goes Out At Google... (0, Flamebait)

binaryseraph (955557) | more than 4 years ago | (#31403638)

We might need to contact Enron to instigate more rolling black-outs (like they did in the late 90's). This might help keep the population under control.

Re:When the Power Goes Out At Google... (0)

Anonymous Coward | more than 4 years ago | (#31405568)

They already have an agreement with San Francisco.

try employing the right people (2, Interesting)

mjwalshe (1680392) | more than 4 years ago | (#31402204)

Try hiring some staff with telco experience instead of kids with perfect GPA scores from Stanford, and design the fraking thing better!

Has anyone from Ubisoft read this? (1)

Ben4jammin (1233084) | more than 4 years ago | (#31402210)

I think it would do them good, considering the recent downtime with Assassin's Creed 2. Has anyone seen any info on that outage?

Lucky they have multiple datacenters (0)

Anonymous Coward | more than 4 years ago | (#31402808)

Google is lucky they have a second, third, nth datacenter to fail over to in the first place. You might be surprised how many large companies still rely on truck-shipped tapes or other "cold" disaster recovery methods even for their most critical business data. If you had to restore your systems from tape, would your company still be alive by the time you came back up? Or would the negative publicity from the event lead to a slow and untimely death? Although this was an eye-opening experience for Google, it should be even more so for companies who haven't had to experience this type of event. Unfortunately, in my experience many companies (Google included) will not change disaster recovery policies (or many other IT policies, for that matter) until a significant event has occurred. The question is, should you bet your business and be reactive, or protect your business and be proactive? In Google's case, they were able to be reactive and will come out alright. Many others probably wouldn't be so lucky. All in all, I believe this will be a great learning experience for Google, and as a side effect will hopefully direct more people to look at cloud technology to protect their business from outages.

Lessons Like (2, Funny)

Greyfox (87712) | more than 4 years ago | (#31403138)

Don't have all your shit in one data center, maybe? I'd have thought that one would be pretty fundamental. Of course, knowing Google they're going to decide that what they really need is power generation right on site, then they'll just pop off and invent nuclear fusion before lunch.

Re:Lessons Like (1)

cpghost (719344) | more than 4 years ago | (#31404640)

they'll just pop off and invent nuclear fusion before lunch.

And they'll call it gFusion?

Back around 2005... (1)

kilodelta (843627) | more than 4 years ago | (#31403282)

We decided to move three of our divisions into one facility; those included two business-facing units and the I.T. division.

I was charged with laying out the design for data, telecom and electrical for the project. I also handled the engineering of our little NOC.

Nice setup - redundant power in the I.T. division, nice big APC UPS for the entire room that had its own 480V power drop, dual HVAC units, a natural gas fired generator. It's nice to have the money to do this.

Since we were a state agency, we had to use state DNS services. And one day the city had a massive power outage. We were up and running happy as a clam, but we found the Achilles heel in all our plans: without DNS we couldn't get in or out. I had floated the idea of maintaining our own DNS server, but nobody wanted to hear that. We had a decent network connection and the redundant power (yes, we even placed a UPS/generator-backed outlet in the MDF for Cox's Marconi router), so why the hell not replicate the state DNS services?

Let that be a lesson. We tried to plan for all contingencies and we completely missed our dependence on an outside state agency. Of course, since a river runs right behind the building, we also raised the NOC floor by about a foot.

Outside or the TV (0)

Anonymous Coward | more than 4 years ago | (#31405204)

I turn on my TV or go outside to return to a normal life.

Post Mortem Missed the Problem (1)

photonrider (571060) | more than 4 years ago | (#31405894)

I read the post-mortem and I think they completely missed the mark. Power failed to some machines. They only noticed because "...traffic has problems..." They should have been monitoring the power to detect this situation. They didn't say whether they have the data center power supply on a UPS or not. If it was, it was dying and no one noticed. If they had been monitoring the power they might have avoided the whole mess.
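A minimal sketch of the kind of power monitoring the parent is asking for; read_feed_watts() is a hypothetical hook into PDU or branch-circuit metering, and a real facility would do this over SNMP/Modbus with a proper alerting pipeline rather than a print statement:

<ecode>
import time

ALERT_THRESHOLD_WATTS = 100.0   # below this, treat the branch circuit as dead (assumption)

def read_feed_watts(feed_id):
    """Hypothetical hook into a PDU / branch-circuit power meter."""
    raise NotImplementedError("wire this up to your metering (SNMP, Modbus, vendor API)")

def watch_feeds(feed_ids, interval_s=10, alert=print):
    """Alert on loss of power itself, rather than waiting for traffic to look wrong."""
    while True:
        for feed in feed_ids:
            try:
                watts = read_feed_watts(feed)
            except Exception as exc:
                alert(f"feed {feed}: meter unreachable ({exc})")
                continue
            if watts < ALERT_THRESHOLD_WATTS:
                alert(f"feed {feed}: power lost or critically low ({watts:.0f} W)")
        time.sleep(interval_s)
</ecode>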