
Seattle Data Center Outage Disrupts E-Commerce

ScuttleMonkey posted more than 5 years ago | from the no-sigmas-for-you dept.

The Internet 118

1sockchuck writes "A major power outage at Seattle telecom hub Fisher Plaza has knocked payment processing provider Authorize.net offline for hours, leaving thousands of web sites unable to take credit cards for online sales. The Authorize site is still down, but its Twitter account attributes the outage to a fire, while AdHost calls it a 'significant power event.' Authorize.net is said to be trying to resume processing from a backup data center, but there's no clear ETA on when Fisher Plaza will have power again."


Heh (5, Insightful)

MightyMartian (840721) | more than 5 years ago | (#28573147)

Redundancy ain't just a river in Egypt.

Re:No Backup?? (1)

mwiley (823919) | more than 5 years ago | (#28573369)

When this happens in this day and age the CIO should be fired! There is no excuse. It's a situation where you gamble that this will never happen but when it does you should go.

Re:No Backup?? (4, Insightful)

Nutria (679911) | more than 5 years ago | (#28573665)

When this happens in this day and age the CIO should be fired!

And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????

Re:No Backup?? (2)

SkyDude (919251) | more than 5 years ago | (#28575173)

When this happens in this day and age the CIO should be fired!

And if the CIO recommended a redundant D.C. but the CEO, CFO or Board rejected it as "too expensive"????

If that's the case, then the aforementioned officers should give up their pay to the thousands of merchants who lost their day's pay due to this problem. Yeah, like that'll happen.

Phone lines occasionally go out and that might affect local merchants, but when it's a data center that handles the livelihoods of thousands of merchants, there needs to be much greater redundancy. The businesses that are affected by this are not all huge e-tailers either. Many are just small operators trying to make a living on the web. As it stands now, a merchant can't have multiple card processors unless he's willing to pay the monthly fees for two processors. I've never heard of that being done and doubt it would be feasible.

Merchants affected by this will just have to suck it up, but for those who are not involved in e-commerce, this is a shining example of how doing business with credit card processors is dancing with the devil. They screw you on all of the charges, they screw you on chargebacks, and now they've screwed a lot of small business people by denying them income, probably because it wasn't cost effective to have a first class backup plan.

Happy Independence Day!
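
On the parent's point about needing two processors: a minimal Python sketch of what merchant-side failover could look like. The Gateway class and its charge() interface are invented for illustration; real processors each have their own APIs, contracts, and fees.

class GatewayError(Exception):
    pass

class Gateway:
    """Stand-in for a hypothetical payment processor client."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def charge(self, card_token, cents):
        if not self.healthy:
            raise GatewayError(f"{self.name} is unreachable")
        return {"gateway": self.name, "amount": cents, "status": "approved"}

def charge_with_failover(gateways, card_token, cents):
    """Try each processor in order; return the first successful charge."""
    last_error = None
    for gw in gateways:
        try:
            return gw.charge(card_token, cents)
        except GatewayError as err:
            last_error = err          # remember why this processor failed
    raise GatewayError(f"all processors failed: {last_error}")

primary = Gateway("primary-processor", healthy=False)   # simulate today's outage
backup = Gateway("backup-processor")
print(charge_with_failover([primary, backup], "tok_1234", 1999))

The catch, as the parent notes, is that the fallback only helps if you're already paying the monthly fees for the second account.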

Re:No Backup?? (3, Interesting)

sopssa (1498795) | more than 5 years ago | (#28573715)

I know redundancy and such is better on business stuff, but this kind of reminds me of how customer lines have lots of single failure points as well. There was a day when the DHCP service of TeliaSonera, a large Nordic ISP, stopped working, leaving 1/3 of the whole country's residents without internet access. Turns out there was a hardware failure on the DHCP server, leading me to believe that they actually depend on just one server to handle all the DHCP requests coming from customers. They did fix it in a few hours, but it was still unavailable for the rest of the day because hundreds of thousands of computers were trying to get an IP address from it. That being said, I remember it happening only once, but it still seems stupid.
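
For what it's worth, the active/standby pattern the ISP apparently lacked is simple to sketch. This toy Python monitor promotes a standby when the active server stops answering; the servers and heartbeat checks here are simulated stand-ins, not real DHCP.

import time

class Server:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def heartbeat(self):
        """Stand-in for a real liveness probe against the DHCP daemon."""
        return self.alive

def monitor(active, standby, checks=3, interval=0.1):
    """Return whichever server should be answering requests."""
    for _ in range(checks):
        if not active.heartbeat():
            print(f"{active.name} missed a heartbeat; promoting {standby.name}")
            return standby
        time.sleep(interval)
    return active

primary, secondary = Server("dhcp1"), Server("dhcp2")
primary.alive = False                 # simulate the hardware failure
serving = monitor(primary, secondary)
print(f"{serving.name} now answers the DHCP requests")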

Re:No Backup?? (1)

hairyfeet (841228) | more than 5 years ago | (#28574121)

You know we had something similar happen (North Central AR) a few years back. We had over 70k people with zero Internet anything for two days. They couldn't get medical records, use CC, hell the three towns affected pretty much ground to a halt. The cause? The lines heading out to the main branch all converged on a single big fiber trunk that some dumbass farmer nailed with his backhoe while digging a ditch.

So while you can hope there is enough redundancy in the system to keep catastrophic failures like this from occurring, the simple fact is we have no idea how much of our critical infrastructure can be taken down with a single fuck up. Maybe the US gov needs to find out which companies we depend on have such single points of failure and demand redundancy for critical infrastructure? Of course with bribery....uhhh I mean lobbying being legal the mega corps would just use it as another excuse for a bailout which they would stuff in their pockets instead of doing what we paid for. Kinda like how they took those billions we gave them for nationwide broadband and gave us the finger in return.

Re:No Backup?? (1)

ibbey (27873) | more than 5 years ago | (#28576077)

All fine and good... There is no possible way to design the entire world with redundant systems. But a company like Authorize.net doesn't have that excuse. Hoping has nothing to do with it; it's called network engineering. They should have multiple data centers located in geographically dispersed parts of the world. This is hosting 101 for any large-scale internet business. The OP is right, the CIO should be cleaning out his desk as we speak.

Re:No Backup?? (1)

The -e**(i*pi) (1150927) | more than 5 years ago | (#28575041)

They should also fire the person who was responsible for having a sprinkler installed above a transformer. Exactly how is spraying water on a transformer going to help in a fire?

Re:Heh (3, Informative)

Anonymous Coward | more than 5 years ago | (#28573395)

It's interesting how many companies have assumed redundancy in place but never take the time to do proper testing. They figure that once a disaster happens, everything will automatically work because their vendor or staff said so. To achieve true redundancy, a company needs to do semi-frequent testing to ensure that everything is working properly. Authorize.net might have had what was assumed to be a redundant system in place, but once the disaster happened they soon realized their system wasn't designed or configured properly. It is expensive and time consuming to test redundancy, let alone actually paying for the redundant equipment/staff/etc., but times like this show how one gets one's money's worth in doing so.
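
The point about exercising redundancy can be made concrete with a toy "game day" drill, sketched here in Python. check_service() is a placeholder for a real health probe (an end-to-end test transaction, say), and the standby's status is deliberately randomized: until you run the drill, you genuinely don't know.

import random

def check_service(site):
    """Placeholder health probe; a real one would run an end-to-end transaction."""
    return site["up"]

def run_failover_drill(primary, standby):
    primary["up"] = False             # deliberately fail the primary
    if check_service(standby):
        return "PASS: standby carried the load"
    return "FAIL: standby did not answer -- fix this before a real fire finds it"

primary = {"name": "dc-primary", "up": True}
standby = {"name": "dc-standby", "up": random.choice([True, False])}  # unknown until tested
print(run_failover_drill(primary, standby))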

Re:Heh (1)

Clever7Devil (985356) | more than 5 years ago | (#28573429)

Denial of service as it were?

Re:Heh (1)

panoptical2 (1344319) | more than 5 years ago | (#28573481)

This is why clustering is ABSOLUTELY necessary for as large a company as Authorize.net. As parent said, putting all your eggs in one basket is a stupid idea...

I wonder how many companies will switch to PayPal after this...

Re:Heh (1)

rhekman (231312) | more than 5 years ago | (#28573543)

... putting all your eggs in one basket is a stupid idea...

....but... maybe they blew their budget on a really, really good basket?

Re:Heh (1)

Nutria (679911) | more than 5 years ago | (#28573781)

maybe they blew their budget on a really, really good basket?

Mark Twain: Put all your eggs in one basket, and then guard that basket!!

Re:Heh (0)

Anonymous Coward | more than 5 years ago | (#28575691)

That wasn't Mark Twain. It was Andrew Carnegie.

Re:Heh (1)

tearmeapart (674637) | more than 5 years ago | (#28573637)

More information is available from the NANOG (North American Network Operators' Group) list: http://comments.gmane.org/gmane.org.operators.nanog/65992 [gmane.org] .

Excerpt:
"
Fisher Plaza, a self-styled carrier hotel in Seattle, and home to multiple
datacenter and colocation providers, has had a major issue in one of its
buildings late last night, early this morning.
The best information I am aware of is that there was a failure in the
main/generator transfer switch which resulted in a fire. The sprinkler
system activated. From speaking to the fire battalion chief, I am under the
impression that Seattle Fire did use water on the fire as well, but I am
unsure of this.
"

(Btw: Water + Lots of electricity = not good. I bet the electricity got turned off.)

I would copy and paste the rest with reference, but people are posting more details as they come.

Oh, the humanity! (0, Troll)

PopeRatzo (965947) | more than 5 years ago | (#28573689)

..Outage Disrupts E-commerce.

Oh noes! Whatever shall we do if e-commerce gets disrupted?

Because we all know that the cha-chinging of virtual cash registers is the very music of the spheres that keeps the Universe in motion.

Re:Oh, the humanity! (1)

MightyMartian (840721) | more than 5 years ago | (#28573793)

Let's imagine that you're actually paying this data centre large amounts of money with the assurance that the money means 99.9% uptime. Then, maybe, it might mean something more.

If you don't give a crap about uptime, then hell, get a Google webpage or something.

Re:Oh, the humanity! (1)

bertoelcon (1557907) | more than 5 years ago | (#28576507)

There is a reason it's 99.9% uptime and not 100%: this can happen, and you can't really sue them if they argue that this is the 0.1% it's down.
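
As rough arithmetic, here is what those SLA numbers actually allow per year (a quick Python back-of-the-envelope):

HOURS_PER_YEAR = 365 * 24  # 8760

for label, pct in [("99.9%", 0.999), ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    allowed = HOURS_PER_YEAR * (1 - pct)
    print(f"{label} uptime allows about {allowed:.2f} hours of downtime per year")

# 99.9% works out to roughly 8.8 hours a year -- about the size of this outage.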

Re:Oh, the humanity! (4, Insightful)

ErkDemon (1202789) | more than 5 years ago | (#28574047)

Actually -- in a totally unconnected incident -- my grocery shopping was disrupted today because (according to the note pinned to the closed store's shutters) the store's till server was down, and they'd shut up the shop while they waited for an engineer.

I'm guessing that the server was probably local, possibly above the store, and might have gone fritzy in the heat.

So, real-world implications of computer failure. A server goes down, and suddenly Eric Cannot Buy Cheese ("Aaaaiiiieeee!"). Eric has hard cash, store (presumably) has cheese, but store can no longer sell cheese to Eric. Or anything else.

The shop "crashed".

Okay, so I trudged off and did my grocery shopping elsewhere, but it was a little disturbing to think that we've already gotten to the point where a server problem can stop you buying food, in a "real" shop, with "real" money.

Re:Oh, the humanity! (2, Insightful)

supernova_hq (1014429) | more than 5 years ago | (#28574255)

That's pathetic. I've seen stores stay open during 24-hour POWER FAILURES! Any manager who does not teach their employees how to manually do credit card transactions (yes, you can do them on paper!) should never have been hired in the first place.

When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.
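
The paper-slip trick is really store-and-forward: accept the transaction offline, settle it when the link returns. A minimal Python sketch, assuming a made-up submit() settlement call and ignoring the floor limits and fraud rules real offline authorization carries:

import time
from collections import deque

offline_queue = deque()

def submit(txn):
    """Stand-in for sending a transaction to the (now reachable) processor."""
    return f"settled {txn}"

def take_payment(card_token, cents, processor_up):
    txn = {"token": card_token, "cents": cents, "ts": time.time()}
    if processor_up:
        return submit(txn)
    offline_queue.append(txn)         # the electronic equivalent of the paper slip
    return "accepted offline; will settle later"

def drain_queue():
    """Run when connectivity returns: settle the backlog in order."""
    while offline_queue:
        print(submit(offline_queue.popleft()))

print(take_payment("tok_a", 500, processor_up=False))
print(take_payment("tok_b", 1250, processor_up=False))
drain_queue()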

Re:Oh, the humanity! (1)

skuzzlebutt (177224) | more than 5 years ago | (#28574763)

Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.

Re:Oh, the humanity! (3, Insightful)

Spike15 (1023769) | more than 5 years ago | (#28575481)

Or (gasp!) make change without a computron! I wonder if they even train that in grocery stores anymore...scary, indeed.

I think the bigger issue in this case would be manually looking up the price for every single item. We tend to simplify selling things manually in this way (manually processing credit card transactions, making change manually, etc.), when really the biggest problem is being without the UPC system.

Re:Oh, the humanity! (1)

mpe (36238) | more than 5 years ago | (#28576633)

That's pathetic. I've seen stores stay open during 24-hour POWER FAILURES! Any manager who does not teach their employees how to manually do credit card transactions (yes, you can do them on paper!) should never have been hired in the first place.
When we lose power around here (once every 6 months or so), the stores stay open. They simply don't accept debit cards (which require a connection to the bank) until the power comes back on.


In other words it happens frequently enough that there is a procedure to handle it. But not frequently enough for the stores to use a UPS and generator to cope with unreliable power. Not even given the loss of refrigerated/frozen stock.

Obligatory (1)

KingAlanI (1270538) | more than 5 years ago | (#28576053)

I can't buy any cheddar here? But it's the most popular cheese in the world!

Re:Heh (2, Interesting)

Nutria (679911) | more than 5 years ago | (#28573753)

there was a failure in the main/generator transfer switch which resulted in a fire. The sprinkler system activated.

Where I work, the D.C. is in a sub-level basement. One day a few years ago, a dim-wit plumber was brazing a pipe with a propane torch, and swung it too close to a sprinkler head.

Sprinkler went off and water did what it does: flow downhill, eventually pouring into the D.C., right onto the SAN storing "my" database...

We were down for a few days. People couldn't access the web site or IVR, but fortunately it happened over a weekend, so the store-front operations weren't totally affected. Also, the system is part of an "asynchronously buffered" stove pipe, so operations "in front" of the downed machine just kept on processing.

Maybe they... (0)

Anonymous Coward | more than 5 years ago | (#28573151)

...should switch to twitter to do all their authorizations.

I know I'd feel much safer if Ashton Kutcher was processing my credit card.

No Carr.... (1)

BeerCat (685972) | more than 5 years ago | (#28573157)

Hmm. Power outage stops /. posts. News at 11

Heads will roll (hopefully) (0)

Anonymous Coward | more than 5 years ago | (#28573201)

It's absurd that a service provider like AuthorizeNet can be taken down by any single point of failure. I run a much, much smaller business and even we have our resources distributed widely enough that it would require a terrorist attack on New York PLUS an earthquake in San Francisco to knock us offline.

Re:Heads will roll (hopefully) (1)

SerpentMage (13390) | more than 5 years ago | (#28573937)

Wow, you are just as bad as AuthorizeNet... Namely you are putting all of your eggs into one basket called AMERICA... What you are ignoring are the ramifications if a government decides to take you down. And frankly I am more worried about a government taking me down than some accident.

I am part of a hedge fund and we have data centers in... Caymans, Monaco, and Switzerland... I think you get the drift here... And our exchanges that we talk to are scattered throughout the world... Is it simple? Cheap? Nope...

Re:Heads will roll (hopefully) (0)

Anonymous Coward | more than 5 years ago | (#28573981)

Well, you're a HEDGE fund; your two biggest points are HEDGING YOUR BETS, and making sure you're not in an extradition-friendly country when you take your clients' money and run as the castle comes crumbling down :D

Re:Heads will roll (hopefully) (0)

Anonymous Coward | more than 5 years ago | (#28574195)

Well, YOU are putting all of your eggs in one basket. It's called EARTH.

It boils down to risk management. How much do you want to spend to avert risk? Not every company can or even should have multiple data centers, because it doesn't always make financial sense (although it probably does make sense for Authorize.net to have redundant data centers). Does your hedge fund insure your key staff personnel against alien abduction? Why not? Lloyd's of London will sell it to you, so it's not like you can't buy such a policy.

Re:Heads will roll (hopefully) (1)

batquux (323697) | more than 5 years ago | (#28574461)

it would require a terrorist attack on New York PLUS an earthquake in San Francisco to knock us offline.

Which is all moot since you're using authorize.net as a payment gateway. ;)

Backup data center was impacted too (1)

basementman (1475159) | more than 5 years ago | (#28573235)

http://twitter.com/AuthorizeNet/status/2455435020 [twitter.com] Hopefully someone made an offsite backup as well.

Re:Backup data center was impacted too (1)

bezking (1274298) | more than 5 years ago | (#28573647)

Yeah. Offsite to these people must look like this:
  1. Install datacenter gear on 5th floor
  2. Clients want redundancy? OK, more gear on the 6th floor.
  3. Pray that no sort of structural event occurs.
  4. ??
  5. Profit!!!!!

Slow news day! (1)

Another AC (151302) | more than 5 years ago | (#28573247)

News at 11...

tomorrow.

Also affecting Bing.com (2, Interesting)

Cothol (460219) | more than 5 years ago | (#28573251)

Bing Travel servers are located in the same server hall. More info: http://isc.sans.org/diary.html?storyid=6721

The best line from the SANS ISC (3, Interesting)

Zocalo (252965) | more than 5 years ago | (#28573589)

The media are also following the story, KOMO a local station was knocked offline but are broadcasting from a backup site.

Way to go guys! At least two national, and maybe even international, ICT companies on whom numerous affiliates depend fail to provide an adequate backup facility and continuity plan, yet the local AM radio station manages to pull it off. I'm guessing that some heads are gonna roll after the holiday weekend...

Re:The best line from the SANS ISC (1)

The Evil Couch (621105) | more than 5 years ago | (#28574951)

I'm pretty sure they're talking about KOMO, the TV station, actually. It's one of the largest stations here in Seattle. I think they take up a fair chunk of Fisher Plaza, where the fire was. Still, your point stands: international and national business entities failing when a local business succeeds is pretty stupid.

Re:The best line from the SANS ISC (1)

Blakey Rat (99501) | more than 5 years ago | (#28576535)

KOMO is one of the largest TV broadcasters in Seattle. Possibly the largest, although KING might have them beat. Yah, they also own an AM station.

I mean, your point still kind of applies, but you might want to look up what KOMO actually is before you chime in with the podunk AM radio comments... http://en.wikipedia.org/wiki/KOMO-TV [wikipedia.org]

Failover Planning (and this broke FiOS too) (4, Informative)

Cysgod (21531) | more than 5 years ago | (#28573285)

Apparently Verizon also has a single point of failure in this building for much of its FiOS service in the metro areas of Western Washington state, so FiOS customers are offline right now as well.

  • Clownshoes: Have no failover plan and be singly homed.
  • Meh: Have a failover plan.
  • Good: Have a failover plan that requires humans and exercise it regularly.
  • Better: Have a failover plan that is automated and exercise it regularly.
  • Best: Eliminate single points of failure so failover is just turning off the flaky or failed component and going back to drinking a beer.

Hot/Hot is always a better solution than Hot/Warm or Hot/Cold for disaster recovery (and for increasing equipment utilization/ROI), and this event demonstrates why.
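
A minimal sketch of the Hot/Hot idea in Python: two live sites share traffic, a health check ejects a dead one, and "failover" is just the round-robin skipping it. The site names and up/down flags are invented placeholders.

import itertools

sites = [{"name": "east", "up": True}, {"name": "west", "up": True}]
_rr = itertools.count()

def route_request():
    live = [s for s in sites if s["up"]]        # health check ejects dead sites
    if not live:
        raise RuntimeError("no site available -- page a human")
    return live[next(_rr) % len(live)]["name"]  # round-robin across hot sites

print([route_request() for _ in range(4)])      # both sites carry traffic
sites[0]["up"] = False                          # one site has a very bad day...
print([route_request() for _ in range(2)])      # ...and requests keep flowing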

Re:Failover Planning (and this broke FiOS too) (2, Informative)

Cysgod (21531) | more than 5 years ago | (#28573381)

From Twitter comments, it looks like Verizon finished their failover, since people's FiOS is coming back now.

Re:Failover Planning (and this broke FiOS too) (1, Informative)

Anonymous Coward | more than 5 years ago | (#28573559)

Not just FIOS it looks like, I was wondering why my DSL was offline. Nearly all network services I would guess.

Re:Failover Planning (and this broke FiOS too) (1)

brianc (11901) | more than 5 years ago | (#28574373)

Best: Eliminate single points of failure...

Earth is a single point of failure.

Re:Failover Planning (and this broke FiOS too) (1)

haifastudent (1267488) | more than 5 years ago | (#28576211)

Best: Eliminate single points of failure...

Earth is a single point of failure.

Therefore, Earth must be eliminated.

Re:Failover Planning (and this broke FiOS too) (1)

bertoelcon (1557907) | more than 5 years ago | (#28576541)

Best: Eliminate single points of failure...

Earth is a single point of failure.

Milky Way Galaxy is a single point of failure.

Re:Failover Planning (and this broke FiOS too) (1)

Blakey Rat (99501) | more than 5 years ago | (#28576471)

I'll really be concerned about that in 2087 when Verizon finally starts rolling FIOS out in Snohomish. Christ, we had DSL in 1997, what the hell do you have to do to get FIOS? Sacrifice a virgin? Then to pour lemon juice on the wound, they SATURATE the airspace, billboards, advertising on mass transit (especially buses that go to Snohomish!) telling people to order FIOS. Meanwhile I know hicks in Louisiana who can't even spell the word "fiber" who have it in their dirt-floored one-room shacks.

Fucking Verizon.

There should be an international backup day (0)

Anonymous Coward | more than 5 years ago | (#28573305)

Where everyone tests their backups and failovers, because when the crap hits the fan for real, everyone just sticks their head in the sand and then blames a third party in the post-mortem.

It's mindboggling ... (1)

SlashDev (627697) | more than 5 years ago | (#28573329)

... that authorize.net does not have a failover site.

Re:It's mindboggling ... (1)

Cysgod (21531) | more than 5 years ago | (#28573455)

They do. It sounds like the involves-humans failover process failed somehow.

Fisher Plaza is a disaster response center (4, Informative)

Anonymous Coward | more than 5 years ago | (#28573405)

Fisher Plaza is supposed to be a regional telecomm / communications / medical care hub for the Seattle area. It was designed and built to *not* crash, even in a magnitude 9.5 quake. Sounds like they've got work to do ...

Re:Fisher Plaza is a disaster response center (1)

univalue (1563403) | more than 5 years ago | (#28575279)

And this is not the first time they've had a power outage. There is still no backup generator either.

Re:Fisher Plaza is a disaster response center (1)

evil-merodach (1276920) | more than 5 years ago | (#28576121)

And having sprinklers in the electrical room is such a good idea too. Let's make sure we don't just suffer from fire damage.... They're lucky no one was seriously injured.

System failure (5, Informative)

ErkDemon (1202789) | more than 5 years ago | (#28573431)

There are four main factors that can take a part of a society's key infrastructure offline.

1: ACTS OF GOD
Meteor strike, lightning strike, extreme weather ...

2: ACTS OF MALICE
War, terrorism, extortion, employee sabotage, criminal attacks ...

3: WEAK INFRASTRUCTRUCTURE
Underpowered networks, inadequate UPS backups, skeleton staffing, the shaving of safety margins as an efficiency exercise, inadequate rate of replacing old hardware ...

4: MANAGEMENT ARSINESS
This is when a problem starts, and the people in charge either don't know how to react, don't care, or prioritise face-saving over actual problem-solving. This happens when you get an outage, and instead of system management promptly calling all their critical clients to inform them, and warn them that there's maybe twenty minutes of UPS capacity in the routers if the system's not fixed by then, they instead cross their fingers and hope that things'll work out, and worry about what to tell the clients afterwards.

Fisher Plaza seems to have suffered from a case of #4 recently, so it's not surprising that they've gone down again. The first time should have been the wakeup call to show them that their human systems were in need of an overhaul. Without that overhaul, you're setting up a dynamic in which the second time it happens, things are even worse (because now people are locked into defensive mode).

No matter how advanced your technological systems, if the people running it have the wrong mindset, you're gonna go down. And when you go down, you're gonna go down far far harder than necessary.

Re:System failure (2, Insightful)

SerpentMage (13390) | more than 5 years ago | (#28573945)

5: Government...

A government that decides to come to your headquarters and decides they want all of your hardware pronto...

Re:System failure (2, Funny)

eln (21727) | more than 5 years ago | (#28574053)

3: WEAK INFRASTRUCTRUCTURE

It's good to see that you've provided redundancy for the "TRUC" part of your infrastructure, but I'm concerned about the rest of it.

Re:System failure (1)

supernova_hq (1014429) | more than 5 years ago | (#28574295)

Isn't #3 a result of #4?

Button pusher did it (0)

Anonymous Coward | more than 5 years ago | (#28573441)

This is a common problem in this datacenter. The candy-like red emergency power shutoff button is located right by the exit door. Noobs think it is the door release.

Re:Button pusher did it (0)

Anonymous Coward | more than 5 years ago | (#28574289)

That is a problem in a number of data centers.

I've worked in places where people, instead of pushing the exit bar to open the DC door, would reach for the big candy-red button that is placed well away from the door so it's not confused with a "push to exit" button. Even if it's covered with a glass plate with a warning on it saying only reach for the hammer and break the glass if there is an emergency. Emergency as in a fire that actually matters to servers, as opposed to a burnt cig or a lit fart. Emergency as in the FD outside have their hoses plugged in and are charging the dry standpipes to the building in preparation to fire off the sprinklers. Not emergency as in a premature birth of the remnants of a burrito eaten a couple hours earlier.

Only way to protect against this is to use multiple locations and WANS. Expensive as all get-out, but worth it, as opposed to a client lawsuit.

Re:Button pusher did it (1)

supernova_hq (1014429) | more than 5 years ago | (#28574309)

So THAT'S why there was a do not touch sign above it...
*avoids eye contact*

Authorize.Net did have a backup (3, Informative)

johnncyber (1478117) | more than 5 years ago | (#28573461)

...except it failed as well. From their twitter:

"@gotwww The backup data center was impacted too. Don't have info as to why. The team is solely focused on getting us back up for now."

Re:Authorize.Net did have a backup (0)

Anonymous Coward | more than 5 years ago | (#28573905)

...
"@gotwww The backup data center was impacted too. Don't have info as to why. The team is solely focused on getting us back up for now."

Maybe the backup data center was overbooked?

Re:Authorize.Net did have a backup (1)

eln (21727) | more than 5 years ago | (#28574071)

What, was the backup data center on the floor directly below the primary data center?

If I had to guess, either they did something that stupid or they didn't properly test their failover procedures or their backup data center, and either one or both of those things turned out to be inadequate.

Re:Authorize.Net did have a backup (0)

Anonymous Coward | more than 5 years ago | (#28576043)

No, their transit provider was in the same building. Oops!

Re:Authorize.Net did have a backup (3, Interesting)

ZorinLynx (31751) | more than 5 years ago | (#28574429)

Sometimes folks set up a redundant system and forget to make one key piece redundant.

Example: A server rack with two UPS systems. Each server has two power cords, one going to each UPS, but the switch everything is plugged into only has one power input, so it's connected to UPS A.

Power blinks and UPS A decides to shit itself. Rack goes down, even though all the machines are up, because the network switch loses power.

Solution? An auto switching power Y-cable with two inputs, and one output. But 80% of people will be lazy and not bother. Oops.

Happens all the time; I see it everywhere.
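
The trap above is easy to catch with a dumb audit: list every device's power feeds and flag anything fed from a single source. A Python sketch with a made-up inventory; note the switch fails the check even though every server is dual-corded.

feeds = {
    "server-1": {"ups-a", "ups-b"},
    "server-2": {"ups-a", "ups-b"},
    "switch-1": {"ups-a"},            # the forgotten single cord
}

def single_points_of_failure(feeds):
    """Any device with fewer than two independent feeds is a SPOF."""
    return sorted(dev for dev, sources in feeds.items() if len(sources) < 2)

for device in single_points_of_failure(feeds):
    print(f"WARNING: {device} goes dark if its only feed dies")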

Re:Authorize.Net did have a backup (2, Insightful)

linuxbert (78156) | more than 5 years ago | (#28576091)

An auto switching power Y-cable with two inputs, and one output? I've never seen or heard of these. Do you have a manufacturer or part number?
I'd definitely like some.

Re:Authorize.Net did have a backup (2, Informative)

funkboy (71672) | more than 5 years ago | (#28576641)

An auto switching power Y-cable with two inputs, and one output? I've never seen or heard of these. Do you have a manufacturer or part number?
I'd definitely like some.

Well, it ain't just a Y cable and they're not super-cheap, but still affordable if you're running anything that needs anywhere near the level of redundancy that they provide.

It's called a static transfer switch [apc.com] and can be had for a few hundred bucks from most APC dealers (and MGE dealers, now that the merger is complete).

What's nice about them is that unlike a UPS, colo providers don't mind if you stick an STS in your rack, as a UPS removes the colo provider's ability to completely shut off everything in the datacenter with their automated power systems if the shit really hits the fan (trust me, if there's a fire in the datacenter, you'd much rather have your servers suffer a cold shutdown than sucking in smoke and FM200 and all the other tasty stuff in the air, not to mention fanning or even directly contributing to an electrical fire if it's in your rack). An STS still enables them to completely kill the juice in an emergency while providing good & economic redundancy for single-feed machines, not to mention being close to 100% efficient.

Geocaching.com too (4, Informative)

dickens (31040) | more than 5 years ago | (#28573555)

And on a holiday. Bummer. :(

Re:Geocaching.com too (1)

supernova_hq (1014429) | more than 5 years ago | (#28574327)

Aha, somebody else noticed this as well!

Not only is it a holiday, but there is a HUGE geocaching event (for 3 days) happening in B.C., and anyone attending (I know some people) is SOL for getting information about it.

If anyone knows of a secondary site for finding info on the events, please post!

Re:Geocaching.com too (1)

KPexEA (1030982) | more than 5 years ago | (#28575199)

Event Locations
# Cache Creek Park, 1500 Quartz Rd. (N50° 49.039 W121° 19.561)
# Clinton, Reg Conn Park, Smith Ave. (N51° 05.314 W121° 35.225)
# Lillooet, Xwisten Park, approx 5km from Lillooet on Hwy 40 (Moha Road) (N50° 45.111 W121° 56.112)
# Logan Lake, Maggs Park, Chartrand Ave. (N50° 29.549 W120° 48.691)
# Lytton, Caboose Park, 4th St. (N50° 13.875 W121° 34.925)
# Merritt, Lions Park, Voght St & 1st Ave. (N50° 06.882 W120° 47.188)

http://www.goldcountry.bc.ca/bcga [goldcountry.bc.ca]

Re:Geocaching.com too (1)

johannesg (664142) | more than 5 years ago | (#28575525)

Geocachers of Slashdot unite!

Yesterday I almost broke my daily record with 42 finds, but I came home too late to do the logging. Today the site was down all day long. Well, tomorrow then...

As for KPexEA: great service!

According to KOMO news (2, Informative)

PPH (736903) | more than 5 years ago | (#28573741)

... whose broadcast facilities reside in this building (they were broadcasting from a park on Queen Anne Hill this morning), it was due to a transformer vault fire. The resulting sprinkler operation rendered their backup generator inoperable.

Being in the power biz, I can say this sort of thing is to be expected in typical office buildings. Sometimes the power goes out. Live with it. What really puzzles me is how someone can take such a structure, install a raised floor and some big A/C units on the roof, and sell it as a data center. This kind of crap goes on all the time; I've seen purpose-built data centers go down for single-point failures.

Re:According to KOMO news (1)

kyoorius (16808) | more than 5 years ago | (#28574963)

Same thing [slashdot.org] happened to theplanet.com last year. Transformer went boom, fire, etc. Backup generator was allegedly shut down as ordered by the fire department. This is happening so frequently, it should be included in the disaster planning and standard test scenarios.

Re:According to KOMO news (1)

orngjce223 (1505655) | more than 5 years ago | (#28576015)

Ditto happened to Caro Hosting several months ago. The backup generator, which had just been turned on because of a power outage, caught on fire. Said hosting service kept backups only of data and did not have actual failover servers (which they'd promised). Needless to say, providers were switched soon after.

Re:According to KOMO news (1)

aaarrrgggh (9205) | more than 5 years ago | (#28576651)

Fisher Plaza is actually a pretty robust site, and well compartmentalized. The problem with most telecom hotels, though, is that the battery plant is the main line of defense; generator and utility equipment are often located in the same room.

With Verizon, their hub there should go 8 hours on battery in this type of failure while they try to coordinate with Aggreko for a roll-up unit. Depending on timing and the fire department, they would expect a 6-8 hour outage.

sloppy engineering (1, Flamebait)

ChrisCampbell47 (181542) | more than 5 years ago | (#28573829)

"Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing

When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.

Re:sloppy engineering (0)

Anonymous Coward | more than 5 years ago | (#28573901)

Not really sure what you are complaining about... PST == Pacific Standard Time. I don't see anything wrong with this.

Re:sloppy engineering (1)

Phroggy (441) | more than 5 years ago | (#28574187)

Not really sure what you are complaining about... PST == Pacific Standard Time. I don't see anything wrong with this.

And that's exactly why these kinds of mistakes are made.

Seattle is currently on PDT (GMT -0700), not PST (GMT -0800). The switch back to PST happens in November.
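
One way to avoid publishing "PST" in July is to never type the abbreviation at all: let a timezone-aware datetime pick it from the date. A minimal Python sketch (zoneinfo requires Python 3.9+, long after this thread, but the idea is old):

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

pacific = ZoneInfo("America/Los_Angeles")

july = datetime(2009, 7, 3, 17, 0, tzinfo=pacific)
january = datetime(2009, 1, 3, 17, 0, tzinfo=pacific)

print(july.strftime("%I %p %Z"))     # 05 PM PDT -- daylight time in summer
print(january.strftime("%I %p %Z"))  # 05 PM PST -- standard time in winter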

Re:sloppy engineering (0)

Anonymous Coward | more than 5 years ago | (#28574489)

If the stupid lackwit asshats want to keep RANDOMLY changing their time, then fuck em.

Re:sloppy engineering (1)

Phroggy (441) | more than 5 years ago | (#28574771)

Changes to the time are not random at all; they're clearly defined. Of course those definitions are periodically changed, seemingly at random, with minimal notification, but that's not the same problem.

Re:sloppy engineering (0)

Anonymous Coward | more than 5 years ago | (#28575263)

Well it was me who originally mentioned this (the response to ChrisCampbell47).

Thank you for correcting me. I live somewhere that doesn't use Daylight Savings Time so it didn't click in at first.

Changing my clock every 6 months and being considered a different timezone is really something I have never experienced :-)

Re:sloppy engineering (1)

linuxbert (78156) | more than 5 years ago | (#28575965)

It's still not incorrect, as they stated that it was in standard time. If they had only stated 5pm Pacific time, one would assume the current Daylight Saving Time.
Canadian (and American, I think, but don't hold me to it) tide and current tables are in standard time, so you need to remember to add the hour when you are in Daylight Saving Time; otherwise your calculations are off, and you can hit low things and run aground on high things.
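
The tide-table correction is just one hour of arithmetic, sketched here in Python (the example time is made up):

from datetime import datetime, timedelta

def table_to_wall_clock(printed, dst_in_effect):
    """Tables print standard time; add the hour during daylight saving."""
    return printed + timedelta(hours=1) if dst_in_effect else printed

low_tide = datetime(2009, 7, 4, 9, 30)          # 9:30 AM standard time, as printed
print(table_to_wall_clock(low_tide, True))      # 2009-07-04 10:30:00 local daylight time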

Re:sloppy engineering (1)

hey (83763) | more than 5 years ago | (#28573919)

I guess your point is that it is PDT time now.

Re:sloppy engineering (3, Funny)

eln (21727) | more than 5 years ago | (#28574119)

Focusing on something that 99% of us screw up at one point or another, particularly when our primary focus at the time is probably getting the service back online rather than checking the calendar to see if it's Daylight Saving Time or not, for me is always a red flag that you're an insufferable pedant.

Re:sloppy engineering (3, Informative)

Achromatic1978 (916097) | more than 5 years ago | (#28574853)

Come on, the guy's sig is a link to some comic rant about "its versus it's" which, whilst it annoys me no end, is most definitely a good indicator that he is, no doubt, an insufferable pedant.

Re:sloppy engineering (1)

Blakey Rat (99501) | more than 5 years ago | (#28576493)

Also, having a Slashdot account.

Re:sloppy engineering (2, Insightful)

Phroggy (441) | more than 5 years ago | (#28574179)

"Our current estimate for re-establishing Bing Travel functionality is 5pm PST," says a notice at Bing

When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.

It's quite likely that this message was not posted by somebody in a technical role, but a managerial role. The technical people may very well have just said "by 5:00" or possibly "by 5:00 Pacific Time", and whoever posted the notice on the web site (while the technical people were busy working on trying to fix things) added "PST" instead of "PDT".

Re:sloppy engineering (0)

Anonymous Coward | more than 5 years ago | (#28574679)

Look, I'm not trying to troll... but, really? You're ready to discard someone's entire team because one person typed "PST" instead of "PDT"? Until you brought it up, I didn't realize there was a difference. I also work long-haul communications, but refer to Zulu as my main time zone... so maybe it's my sole ignorance. But if you knew they meant Pacific, and the posters below knew you meant Pacific, and I knew it.....

...is it really worth the "need to closely watch their other decisions..." etc?

Re:sloppy engineering (1)

citylivin (1250770) | more than 5 years ago | (#28575271)

Pacific Standard Time.

Seattle is on the west coast.

Not everyone lives in New York, you know...

Re:sloppy engineering (1)

DerekLyons (302214) | more than 5 years ago | (#28575389)

When someone in a technical role screws up a timezone designation, for me that is always a red flag that they are sloppy with facts, and I need to closely watch their other decisions, actions and statements, because they may be in over their head.

When someone is excessively pedantic for the sole reason of making his virtual penis larger and harder, I point and laaaaaaaaaaugh.
 
Seriously, get the fuck over yourself. PST is a widely used and widely accepted descriptive term for the Pacific time zone. For 99% of the people, it doesn't matter whether it's actually PST or PDT.

Fisher Plaza? (1)

ScentCone (795499) | more than 5 years ago | (#28574761)

Sounds more like Fisher-Price. Glad that none of my customers rely on Authorize.net.

Wow... (1)

ThousandStars (556222) | more than 5 years ago | (#28574945)

Adhost oversees two sites for my family's business: http://www.seliger.com [seliger.com] and http://blog.seliger.com [seliger.com] . At least part of the Fisher Plaza data center seems to be up at the moment because seliger.com will load for me, while blog.seliger.com won't. When I figured this out a few hours ago, I sent an e-mail to Adhost and got this as part of the response:

We have been advised by the building engineering team that they anticipate restoring power to the Plaza East building in plus or minus 4 hours. We sincerely hope this is an accurate number and, if not, we will let you know as soon as we receive new information from the engineers.

Imagine my surprise at learning that the problem is big enough to make /.. Actually, what's even more surprising is the unplanned outage in the first place: I don't recall Adhost ever going down for this long, especially in the middle of the day.

Re:Wow... (1)

ThousandStars (556222) | more than 5 years ago | (#28574959)

Sorry to reply to my own comment, but the Adhost e-mail servers are also working. I don't know if this is because their main site is coming back online or if it's because their backup worked.

Re:Wow... (1)

grossvogel (972807) | more than 5 years ago | (#28575101)

My website is hosted at Adhost, and it's up right now. Email, too.

I'd post my url for proof, but, I like it to stay online...

Fisher Plaza Designed to survive External factors (1)

stmfreak (230369) | more than 5 years ago | (#28574985)

I used to manage a 22 rack cage that we leased from Internap at Fisher Plaza back in 2005. They really did build the place well. Massive diesel generators, independent well water, redundant cooling, etc. But it was designed to survive and continue broadcasting for a local news station for 18 days without resupply in the event of a major external disaster like an earthquake.

I imagine they are reviewing their DR procedures and designs now to minimize collateral damage from internal factors.

But let's not be too hard on them, it was one of the better colo facilities I've seen. There are far worse out there holding their pants up with three hands.

Not the first time (1, Informative)

Anonymous Coward | more than 5 years ago | (#28575151)

This is the 2nd fire since 2008... Apparently Internap rents the power from the building, so they have no control over the quality/maintenance of these generators and UPSes.

The fire, which started around 11:30 PM (or maybe earlier, but the first signs were around that time), badly damaged some of the electrical risers, so they are unable to get power back to some parts of the datacenter. According to their last update, they're getting external generators to bypass the damaged equipment and power up the rest of the datacenter, which should be completed late this evening... At best it's going to be a nearly full-day outage for some of their customers.

Huge portable generator arrives at Fisher Plaza (2, Interesting)

KPexEA (1030982) | more than 5 years ago | (#28575275)

Twitpic link blocked from Slashdot?? (1)

KPexEA (1030982) | more than 5 years ago | (#28575303)

The Twitpic link works fine from the place I found it but not when clicked via slashdot???

Re:Twitpic link blocked from Slashdot?? (1)

KingAlanI (1270538) | more than 5 years ago | (#28576073)

Works For Me.

affected me (0)

Rene S. Hollan (1943) | more than 5 years ago | (#28576611)

I had to work today to find and fix a bug related to a particular external site... sure enough, our internet access was down.

Pfft! I had a copy of Barry on a Linux box, tethered my BlackBerry, a bit of iptables magic, and I'm back online to test.
