
AOL Creates Fully Automated Data Center

Unknown Lamer posted about 3 years ago | from the tomorrow-system-architect-automates-himself dept.

America Online 123

miller60 writes with an excerpt from a Data Center Knowledge article: "AOL has begun operations at a new data center that will be completely unmanned, with all monitoring and management being handled remotely. The new 'lights out' facility is part of a broader update of AOL's infrastructure that leverages virtualization and modular design to quickly deploy and manage server capacity. 'These changes have not been easy,' AOL's Mike Manos writes in a blog post about the new facility. 'It's always culturally tough being open to fundamentally changing business as usual.'" Mike Manos's weblog post provides a look into AOL's internal infrastructure. It's easy to forget that AOL had to tackle scaling to tens of thousands of servers over a decade before the term Cloud was even coined.


So it will take ages for a fix (1)

Anonymous Coward | about 3 years ago | (#37684550)

How long will it take for an engineer to get there to replace a card or server?

no security or maintenance? (1)

Joe_Dragon (2206452) | about 3 years ago | (#37684652)

Seems like it may take time for anyone to come to the site for anything, versus having a few people on site who can get to stuff quicker.

Re:no security or maintenance? (3, Insightful)

EdIII (1114411) | about 3 years ago | (#37685646)

The whole idea is not to need to get to stuff quicker at all.

If you are:

1) Completely virtualized.
2) Use power circuits that are monitored for load, on a battery back up, power conditioners, and diesel fuel generators for local utility backup.
3) Use management devices to control all your bare metal as if you are standing there, complete with USB connected storage per device that you can swap out the iso for.
4) Have redundancy in your virtualization setup that allows you to have high availability, live migration, automated backups, etc.

What you get is an infrastructure that allows you to route around failures and schedule hardware swap outs on your own timetable, which can be far more economical.
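
The "route around failures, swap hardware later" workflow that list describes can be sketched in a few lines of Python. The hostnames and the naive placement policy below are invented for illustration; a real deployment would drive a hypervisor API (libvirt, vCenter, etc.) rather than a dict:

```python
# Toy sketch of "route around failures": when a host dies, its VMs are
# restarted on healthy hosts immediately, and the dead hardware goes on
# a repair queue to be swapped out on the operator's own timetable.

from collections import deque

class Cluster:
    def __init__(self, hosts):
        self.vms_on = {h: [] for h in hosts}   # host -> VMs it runs
        self.repair_queue = deque()            # failed hosts, fixed later

    def place(self, vm):
        # Naive placement: pick the least-loaded healthy host.
        host = min(self.vms_on, key=lambda h: len(self.vms_on[h]))
        self.vms_on[host].append(vm)

    def host_failed(self, host):
        evacuated = self.vms_on.pop(host)      # take host out of the pool
        self.repair_queue.append(host)         # no 2am drive to the DC
        for vm in evacuated:                   # restart its VMs elsewhere
            self.place(vm)

cluster = Cluster(["node1", "node2", "node3"])
for vm in ["mail", "web", "db", "cache"]:
    cluster.place(vm)
cluster.host_failed("node1")                   # service survives; hardware waits
print(dict(cluster.vms_on), list(cluster.repair_queue))
```

The repair queue is the whole point: the failure is absorbed instantly by spare capacity, and the physical swap becomes a scheduled errand instead of an emergency.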

If you don't have that, then it does involve costly emergency response at 2am to replace a bare metal server that went down. You either pay somebody you have retained locally to do it, or you are the one driving down to the datacenter at 2am to do the replacement yourself, with who-knows-how-long a repair ahead of you, uptime monitoring sending out emails like crazy to the rest of the admin staff, and, heaven help you, some execs who demanded to be in the loop from now on due to an "incident".

Don't know about you..... but I would rather be able to relax at 10pm and have a few beers once in a while (to the point I can't drive) without worrying about bare metal servers going down all the time, or who is on call, etc.

Re:So it will take ages for a fix (1)

errandum (2014454) | about 3 years ago | (#37684666)

About as much time as it takes in most datacenters that are already monitored remotely. With news like this, some would think Nagios or Ganglia never gave admins a web interface.

PS: They might want to, at least, man it with a security guard to sound the alarm in case of fire or robbery.
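
For concreteness, the kind of remote liveness check those monitoring tools automate boils down to something like this; the host/port list is hypothetical, and real checks would live in the monitoring tool's config:

```python
# Toy version of a remote liveness check of the sort Nagios or Ganglia
# automate: try a TCP connection to each service port and flag anything
# unreachable (refused, timed out, or unresolvable).

import socket

def check(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical service list for illustration.
for host, port in [("localhost", 22), ("localhost", 80)]:
    print(host, port, "UP" if check(host, port) else "DOWN")
```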

Re:So it will take ages for a fix (4, Interesting)

Martin Blank (154261) | about 3 years ago | (#37685020)

One of the major backbone providers has a lights-out data center not far from my work. I know a guy who has a hosting business there, and he's shown me around to the limits of his access. There is no one on-site from the company or its contractors--not even a security guard. They have biometrics plus PINs for access; it's laced with low-light/IR cameras (it wouldn't surprise me to learn they have microphones); it has motion detectors in case the cameras miss something; and the redundancy is incredible. They maintain contracts with local electricians, plumbers, and a few technical companies should a blade burn out. They manage the entire thing from a few states over, and as of a couple of years ago almost all of their data centers had been converted to run this way. Savings were good, something like a million dollars per DC per year even as unanticipated downtime decreased.

I looked at it and saw the future of IT. I wasn't sure if I was more impressed or scared.

Re:So it will take ages for a fix (1)

mikael (484) | about 3 years ago | (#37685478)

It's more scary - every field of technology evolves that way.

Early valve computers required technicians to replace burnt-out valves on a daily basis. Each morning, the technicians would go round and replace any that had burnt out or were about to. Now your PC has about 2 billion transistors or more (CPU + GPU), and not one will burn out.

100 years ago, it would take 25 minutes to make a long-distance call between San Francisco and New York due to all the operators involved. Now, it's all automated.

200 years ago, it took four people to operate a single loom to make a shirt. Now one technician can supervise fifteen industrial carpet-making looms that reload automatically.

We've actually got a global clothing surplus, thanks to all the designer and brand-name labels, to the extent that local makers in developing countries are put out of business.

Re:So it will take ages for a fix (1)

trickyD1ck (1313117) | about 3 years ago | (#37686878)

This isn't scary. This is things getting better.

Re:So it will take ages for a fix (2)

tehcyder (746570) | about 3 years ago | (#37688502)

This isn't scary. This is things getting better.

It's scary if your job is manually maintaining servers.

Re:So it will take ages for a fix (1)

Grave (8234) | about 3 years ago | (#37688832)

I'm not so sure. While individual reliability has increased dramatically, the sheer number of systems in use around the world has increased as well, probably at a similar rate. Will we eventually reach a point at which computer hardware simply does not fail without an external event (power surge, physical damage, etc.)? Maybe. But I don't see that happening until performance plateaus.

Re:So it will take ages for a fix (1)

X0563511 (793323) | about 3 years ago | (#37686750)

Those must be some fancy microphones to be of any use inside a DC...

Re:So it will take ages for a fix (1)

kmoser (1469707) | about 3 years ago | (#37687144)

One word: RoboCop.

Re:So it will take ages for a fix (2)

PTBarnum (233319) | about 3 years ago | (#37684750)

The article states "failed equipment is addressed in a scheduled way using outsourced or vendor partners". They don't care if an individual server is down, they just move the workload elsewhere, and wait for a repair. So there actually will be people in their data center doing repairs, they just aren't AOL employees and aren't based in the data center. I could see making a decision that a longer wait time for repairs is justified by labor savings, but it isn't really obvious where those savings come from. There is a suggestion in the article that they want the flexibility to increase or decrease the number of workers as needed, which is somewhat easier with contractors than regular employees, but with regular employees you can get a similar effect from part time or overtime work.

Re:So it will take ages for a fix (1)

arbiter1 (1204146) | about 3 years ago | (#37685214)

It sounds like everything hosted there will be a cloud-type system, so if one machine dies you won't even notice.

Re:So it will take ages for a fix (1)

X0563511 (793323) | about 3 years ago | (#37686762)

No, but if a whole cabinet or row goes out because someone wasn't around to notice the funny smell or the magic smoke coming out of the power equipment, or hear that A/C fan belt starting to come loose, you just might notice...

Re:So it will take ages for a fix (3, Insightful)

Zocalo (252965) | about 3 years ago | (#37685228)

Who cares? I'm guessing you don't have much experience with server clusters, but generally, long before you get to the kind of scale we are talking about here, you start treating servers the way you might treat HDDs in a RAID array. When one fails, other servers in the cluster pick up the slack until you can either repair the broken unit or simply remote-install the appropriate image onto a standby server and bring that up until an engineer physically goes to site. Handling of the data is somewhat critical, though; should a server die, you ideally need to be able to resume what it was working on seamlessly and without causing any data corruption; think transaction-based DB queries and timeout/retry.

If you have enough spare servers, you can easily get by with engineers only needing to go on site once a month or so, assuming you get your MTBF calculations right, that is. There's a good white paper [google.com] by Google on how 200,000-hour-MTBF hard drives equate to a failure every few hours when you have a few hundred thousand of them.
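
The arithmetic behind that is worth spelling out: assuming independent drives with a constant failure rate, the fleet's expected time between failures is just the per-drive MTBF divided by the drive count (the figures below are the ones from the comment):

```python
# Back-of-the-envelope fleet failure math: n independent drives with a
# constant failure rate fail n times as often as one drive, so the
# expected gap between failures is mtbf / n.

def hours_between_failures(mtbf_hours, num_drives):
    return mtbf_hours / num_drives

# One drive at 200,000 hr MTBF sounds bulletproof (about 23 years)...
print(hours_between_failures(200_000, 1))        # 200000.0
# ...but across 100,000 such drives, expect a failure every 2 hours.
print(hours_between_failures(200_000, 100_000))  # 2.0
```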

Re:So it will take ages for a fix (1)

Grishnakh (216268) | about 3 years ago | (#37685670)

How long will it take for an engineer to get there to replace a card or server?

Much less time than it'll take them to get a user.

Honestly, I was surprised by this article; I thought AOL had already folded.

Re:So it will take ages for a fix (1)

Chapter80 (926879) | about 3 years ago | (#37688652)

...over a decade before the term Cloud was even coined.

You mean over a decade before you heard the term?

C'mon! HP was using the term Cloud five years before "America Online" existed in 1991.

Just because your expertise doesn't extend back before you got that first AOL floppy and went online to type "a/s/l?", it doesn't mean it didn't happen.

Manos, Hands of Fate? (1)

Anonymous Coward | about 3 years ago | (#37684586)

Is now hands-off?

Re:Manos, Hands of Fate? (1)

oodaloop (1229816) | about 3 years ago | (#37685170)

I guess they won't need Torgo to look after the place.

Uh.... (1)

Anonymous Coward | about 3 years ago | (#37684592)

So they have a fully automated unmanned data center... For their fully unused unpopulated services?

WIN!

Re:Uh.... (3, Funny)

silverglade00 (1751552) | about 3 years ago | (#37685112)

Nobody will be there to see Skynet become self-aware. What... you thought the end of humanity wouldn't come from AOL?

Re:Uh.... (1)

crafty.munchkin (1220528) | about 3 years ago | (#37687142)

Well played, sir!

Wow .. how '2000'ish (3, Informative)

johnlcallaway (165670) | about 3 years ago | (#37684610)

Wow ... we were doing this 10 years ago, before virtual systems were commonplace, when 'computers on a card' were just coming out. The data center was 90 miles away. All monitoring and managing was done remotely. The only time we ever went to the physical data center was if a physical piece of hardware had to be swapped out. Multiple IP addresses were configured per server so any single server on one tier could act as a failover for another one on the same tier. We used firewalls to automate failovers; hardware failures were too infrequent to spend money on other methods. We could rebuild Sun servers in 10 minutes from saved images. All software updates were scripted and automated. A separate maintenance network was maintained. Logins were not allowed except on the maintenance network, and all ports were shut down except for SSH. A remote serial interface provided hard-console access to each machine if the networks to a system weren't available.

Yawn ......

Re:Wow .. how '2000'ish (3, Informative)

johnlcallaway (165670) | about 3 years ago | (#37684672)

Thanks for not pointing to the actual blog in the original article. So what they are really blogging is their ability to move an entire DATA CENTER without having to send people to do it. Other than, you know, installing the hardware to start with.

Never mind........

Re:Wow .. how '2000'ish (1)

rubycodez (864176) | about 3 years ago | (#37684862)

Virtual systems were commonplace in the 1960s. But finally these bus-oriented microcomputers and PC Wintel-type "servers" have gotten into it. Young 'uns.......

Re:Wow .. how '2000'ish (1)

ebunga (95613) | about 3 years ago | (#37685100)

Eh, machines of that era required constant manual supervision, and uptime was measured in hours, not months or years. That doesn't negate the fact that many new tech fads are poor reimplementations of technology that died for very good reasons.

Re:Wow .. how '2000'ish (2)

timeOday (582209) | about 3 years ago | (#37685196)

And other new tech fads are good reimplementations of ideas that didn't pan out in the past but are now feasible due to advances in technology. You really can't generalize without looking at specifics - "somebody tried that a long time ago and it wasn't worth it" doesn't necessarily prove anything.

Re:Wow .. how '2000'ish (2)

rednip (186217) | about 3 years ago | (#37685456)

"somebody tried that a long time ago and it wasn't worth it" doesn't necessarily prove anything.

Unless there is some change in technology or technique, past failures are a good indicator of continued inability.

Re:Wow .. how '2000'ish (1)

timeOday (582209) | about 3 years ago | (#37685702)

The tradeoff between centralized and decentralized computing is a perfect example of a situation where the technology is constantly evolving at a rapid pace. Whether it's better to have a mainframe, a cluster, a distributed cluster (cloud), or fully decentralized (peer-to-peer) varies from application to application and from year-to-year. None of those options can be ruled in or out by making generalizations from the year 2000, let alone the 1960's.

Re:Wow .. how '2000'ish (1)

rubycodez (864176) | about 3 years ago | (#37685278)

Depends what model you bought; the redundant, fault-tolerant systems stayed up while components were replaced.

Re:Wow .. how '2000'ish (1)

dwreid (966865) | about 3 years ago | (#37685570)

Actually, while 60s-era mainframes did require significant maintenance, by the time the late 70s came around, uptime was much better. I still have a late-70s minicomputer that I keep around for laughs; it routinely gets about a year and a half between reboots, running 11 users and multitasking for each user. As for features that come and go, the IBM 7030 had instruction pipelining and look-ahead (what Intel calls hyper-threading) way back in the 60s. In fact it could have as many as 11 instructions in the pipeline at any time (though 4 was typical). That went away in the era of the microprocessor, not because it was a bad idea, but because it wasn't possible to implement in early, primitive Intel processors. Only after technology caught up again did it reappear as hyper-threading.

Re:Wow .. how '2000'ish (1)

afabbro (33948) | about 3 years ago | (#37687228)

Eh, machines of that era required constant manual supervision, and uptime was measured in hours, not months or years.

I'm not sure what datacenter you were working in, but in general that is quite untrue.

Re:Wow .. how '2000'ish (1)

mikael (484) | about 3 years ago | (#37685502)

Telephone exchanges in rural areas are like that. The only time a technician had to enter the premises was to clear out old equipment. There was enough spare capacity in the exchanges that the only work required was to open the local cabinets on the street and pair up a new telephone line.

I luv AOL! (1)

jimpop (27817) | about 3 years ago | (#37684614)

Seriously. AOL keeps my relative's PC experience safe; which, generally, keeps them from bugging me for help. :-)

Who? (3, Insightful)

Jailbrekr (73837) | about 3 years ago | (#37684624)

Seriously though, most telecom operations operate like this. Their switching centers are all fully automated and unmanned, and usually in the basement of some nondescript building. This is nothing new.

Re:Who? (2)

rickb928 (945187) | about 3 years ago | (#37684778)

Um, I wouldn't be comfortable with my telecom's switching centers in basements. Those are most commonly the first room to flood when the water comes, and telecom switches are everywhere their users are.

I see telecom switches housed above ground, in plain, sometimes unmarked buildings. There's one a quarter mile from my house, and I drive by two others on the way to work. If they have basements, I bet that's where they keep the stuff that doesn't matter as much.

And then there's the huge switch that used to serve my old hometown, one of the last crossbar switches in the U.S. to convert to ESS. It was deafening in there, and the basement was empty: six floors of relays going constantly. The mice ate the insulation like it was licorice. Putting any of that in the basement would be wrong, even if it was built on a hill.

Re:Who? (1)

h4rr4r (612664) | about 3 years ago | (#37684846)

The building I am in hosts one such setup in the basement. It never floods at my location.

Re:Who? (1)

dwreid (966865) | about 3 years ago | (#37685586)

Actually, that's not true. Equipment of this type was, and is, routinely housed in basements as well as entire buildings. I know because I've worked on them for years.

It was to be staffed... (1)

haus (129916) | about 3 years ago | (#37684626)

.. but there last geek quite, so now the data center must fend for itself.

Re:It was to be staffed... (1)

Anonymous Coward | about 3 years ago | (#37684766)

Spelling. You fail it.

Re:It was to be staffed... (0)

Anonymous Coward | about 3 years ago | (#37684998)

Kurtesy. U fale it.

Re:It was to be staffed... (2)

haus (129916) | about 3 years ago | (#37686426)

You do realize that this story is about AOL, correct spelling would simply be out of plase.

Offtopic, but IT workers? (0)

Anonymous Coward | about 3 years ago | (#37684628)

With a lot of my friends believing their code-monkey jobs were a dead end and becoming IT/network admins, etc., I wonder how cloud computing will affect the market. Will we see more of these people switching back to software engineering?

Re:Offtopic, but IT workers? (1)

Synerg1y (2169962) | about 3 years ago | (#37684784)

Really? I'm a bit of a hybrid in terms of tasks, but I've gotten...

1. a lot more offers for admin positions (might have more to do w/ my presentation though)
2. better salary offers on coding positions

Thinking of just me, it seems better to stay with the code, especially web development; those skills are always in demand.

I'd take being part of an admin team over a coding team anyway, though; prolly need more experience before I start getting offered those without actively seeking them and getting no reply :)

Re:Offtopic, but IT workers? (3, Insightful)

aix tom (902140) | about 3 years ago | (#37684984)

The software still needs to be written. The programs still need to be run somewhere.

Technically not much has changed. The "Cloud" is still made up of servers that have to be administered. The main effect is that the IT and network admins will have to keep up with technology, especially the new virtualization layers between the hardware and the running application. But keeping up to date has always been a part of working in IT.

What (3, Funny)

Dunbal (464142) | about 3 years ago | (#37684634)

AOL still exists? Wow. Yeah ok I guess this is the result of years of beancounter thinking - the expensive part of running the service and the reason they were losing money was the IT staff, huh? Glad I closed my CompuServe account before giving these guys any money.

Re:What (1)

jgotts (2785) | about 3 years ago | (#37684678)

Instead of $15/hour techs working for AOL doing regular maintenance they've switched to outside contractors billing at $100-200/hr when the shit hits the fan. I don't think this idea is going to work very well.

Re:What (2)

Synerg1y (2169962) | about 3 years ago | (#37684798)

The contractors warranty their work :) Sometimes that makes all the difference; the $15/h tech is usually just miserable.

Re:What (4, Informative)

billcopc (196330) | about 3 years ago | (#37685012)

How often does shit hit the fan in that sort of environment?

As a hybrid techie who does a lot of hardware work, I would much rather go in once a month, fix a batch of issues in one visit, collect my fat cheque and go back to the pub, than spend 40+ hours a week playing Bejeweled, waiting for stuff to break.

I would expect AOL's strategy to greatly reduce costs, because that $15/hr rack monkey costs a lot more than $15/hr in the end. They have benefits, you have to "manage" them, and they need human comforts: bathrooms, cleaning, seating, heating/air, a lunch room. From an efficiency standpoint, the contractor route wins on both money and time.
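
Whether that pencils out is simple arithmetic. A rough sketch, where the $15/hr figure comes from the thread and the overhead multiplier, contractor rate, and visit length are invented assumptions:

```python
# Rough annual cost comparison implied by the comment: a full-time
# on-site tech vs. a contractor doing one batched monthly visit.
# All constants except the $15/hr wage are illustrative guesses.

ONSITE_WAGE = 15      # $/hr, from the thread
OVERHEAD = 1.5        # benefits, space, management (assumed)
onsite_annual = ONSITE_WAGE * OVERHEAD * 40 * 52   # 40 hr/wk, 52 wk/yr

CONTRACT_RATE = 150   # $/hr (assumed)
VISIT_HOURS = 8       # one batched visit per month (assumed)
contract_annual = CONTRACT_RATE * VISIT_HOURS * 12

print(onsite_annual, contract_annual)   # 46800.0 14400
```

Under these made-up numbers the contractor wins by roughly 3x, but the comparison flips if failures are frequent enough to need several emergency visits a month.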

Re:What (1)

hedwards (940851) | about 3 years ago | (#37685820)

Depends, how confident are you that every eventuality has been planned for and provided for by the system? A significant outage can easily eat up an entire year's worth of $15-an-hour salaries if you hit an unforeseen condition that causes the whole data center to go down. Sure, it's unlikely if the people doing the planning know what they're doing, but I'm sure that the folks in the WTC weren't expecting their records to be destroyed by a terrorist attack taking the entire building down.

Re:What (1)

maxwell demon (590494) | about 3 years ago | (#37688286)

Depends, how confident are you that every eventuality has been planned for and provided for by the system? A significant outage can easily eat up an entire year's worth of $15-an-hour salaries if you hit an unforeseen condition that causes the whole data center to go down. Sure, it's unlikely if the people doing the planning know what they're doing, but I'm sure that the folks in the WTC weren't expecting their records to be destroyed by a terrorist attack taking the entire building down.

Of course any number of $15/h techs in the WTC wouldn't have helped them with this problem anyway.

They're on the Time Warner life support (1)

Gimbal (2474818) | about 3 years ago | (#37687016)

...and Daddy Warbucks got some dough - in a manner of speaking, as it were, etc und so weiter.

Re:What (the fuck are the mods smoking) (0)

Anonymous Coward | about 3 years ago | (#37687132)

cool story bro.

I'm still expecting their datacenters (0)

Anonymous Coward | about 3 years ago | (#37684684)

I'm still expecting their datacenters to be unmanned and using zero electricity soon. I'm surprised they have lasted this long.

Re:I'm still expecting their datacenters (1)

517714 (762276) | about 3 years ago | (#37685072)

Zero electricity would have been achieved if the last technician had turned the lights out when he left.

What is AOL again. ..? (1)

SplatMan_DK (1035528) | about 3 years ago | (#37684696)


I'm from Europe. What is AOL again? And what is its/their significance in 2011/2012 anyway?

- Jesper

Re:What is AOL again. ..? (3, Funny)

SwedishChef (69313) | about 3 years ago | (#37684720)

I thought everyone knew... AOL is the Internet.

Re:What is AOL again. ..? (0)

Anonymous Coward | about 3 years ago | (#37684734)

I'm from Europe. What is AOL again? And what is its/their significance in 2011/2012 anyway? - Jesper

I'm from the United States. What is AOL again? And what is its/their significance in 2011/2012 anyway?

Re:What is AOL again. ..? (1)

Osgeld (1900440) | about 3 years ago | (#37686486)

They have decent TV program listings

Re:What is AOL again. ..? (2)

Moridineas (213502) | about 3 years ago | (#37684756)

They suck. They just suck differently now. They've switched from being an ISP to being a content company (and most of their content creators seem rather disgruntled). Mostly US-based; most slashdotters should recognize names like TechCrunch or, primarily, HuffPo... the rest, not so much.

Re:What is AOL again. ..? (0)

Anonymous Coward | about 3 years ago | (#37686860)

I'm from Europe. What is AOL again? And what is its/their significance in 2011/2012 anyway?

- Jesper

I'm from AOL. What is Europe again? And what is its/their significance in 2011/2012 anyway?

Re:What is AOL again. ..? (1)

maxwell demon (590494) | about 3 years ago | (#37688296)

AOL is a service which provided you with free CDs for decorative purposes. It was, however, a bad idea to put them into your computer's CD drive.
And yes, they also operated in Europe.

In other news (2)

mccrew (62494) | about 3 years ago | (#37684708)

In other news, the rest of AOL is expected to go "lights out" any time now.

Huh? (1)

frisket (149522) | about 3 years ago | (#37684718)

AOL? Who they?

I apologize in advance, (1)

The Yuckinator (898499) | about 3 years ago | (#37684786)

But I can't resist.
 
...In Soviet Russia, remote hands are YOURS!

Pretty easy. (1)

ewhenn (647989) | about 3 years ago | (#37684816)

It's pretty easy to automate a bunch of off switches. ;)

Colossus (1)

SnarfQuest (469614) | about 3 years ago | (#37684832)

Are they going to call it "The Forbin Project"?

Isn't everybody's? (0)

Anonymous Coward | about 3 years ago | (#37684924)

Everybody's data center is fully automated until they decide to make a change they hadn't thought of in the first place. Then you have unauthorized cross-connects running everywhere, and desktops running RHEL2 for that one app the developers insist won't run on a VM, hidden behind racks so the DC owners won't find them.

Works very well. (2)

140Mandak262Jamuna (970587) | about 3 years ago | (#37684950)

The new data center with 0 head count matches nicely the AOL user base with 0 head count!

Two points. (3, Insightful)

rickb928 (945187) | about 3 years ago | (#37684962)

One - If there is redundancy and virtualization, AOL can certainly keep services running while a tech goes in, maybe once a week, and swaps out the failed blades that have already been remotely disabled and had their usual services relocated. This is not a problem. Our outfit here has a lights-out facility that sees a tech maybe every few weeks; other than that, a janitor keeps the dust bunnies at bay and makes sure the locks work daily. And yes, they've asked him to flip power switches and tell them what color the lights were. He's gotten used to this. That center doesn't have state-of-the-art stuff in it, either.

Two - Didn't AOL run on a mainframe (or more than one) in the 90s? It predated anything useful, even the Web, I think. Netscape was being launched in 1998, Berners-Lee was making a NeXT browser in 1990, and AOL for Windows existed in 1991. Mosaic and Lynx were out in 1993. AOL sure didn't need any PC infrastructure; it predated even Trumpet Winsock, I think, and Linux. I don't think I could have surfed the Web in 1991 with a Windows machine, but I could use AOL.

Re:Two points. (1)

laffer1 (701823) | about 3 years ago | (#37685282)

Netscape was founded in 1994. http://en.wikipedia.org/wiki/Netscape [wikipedia.org]

Re:Two points. (1)

rickb928 (945187) | about 3 years ago | (#37689016)

I was thinking of the browser, not the company.

Re:Two points. (0)

Anonymous Coward | about 3 years ago | (#37685292)

AOL may have used mainframes for their service, but that's a lot of machine to run what was basically IRC; I don't know.

But did it predate anything useful? Uh, no. Netscape was released quite a bit before 1998. Hell, in 1996 I already had broadband. There was plenty of internet before that too; it was just really slow for home dial-up at 14.4-28.8-56.6 kbps.

Re:Two points. (1)

lakeland (218447) | about 3 years ago | (#37685526)

AOL was already famous as a good source of free floppies in the early 90s, and a search on Wikipedia confirms they were renamed to AOL and expanded in '89.

They were doing graphical forums in '86, almost 10 years before Netscape.

how does redundancy help you when the main power (1)

Joe_Dragon (2206452) | about 3 years ago | (#37685796)

How does redundancy help you when the main power switch goes down or catches fire and there is no one there? Let's see: firemen make a big mess, and no one is there to start the rebuild. Or it may just do a safe shutdown, and you send someone out just to find out you need to call in this other guy to fix the switch or generator.

Re:how does redundancy help you when the main powe (1)

ToddDTaft (170931) | about 3 years ago | (#37686034)

how does redundancy help you when the main power switch goes down or catches fire and there is no one there

If you are a big enough operation, you have redundancy at the data center level. i.e. you can lose an entire data center and have no loss of service on your production applications. Other than a possible speed/performance degradation, your average customer has no knowledge that anything bad has happened.

that's what geographic redundancy is for (1)

Chirs (87576) | about 3 years ago | (#37687032)

This is why you have a duplicate data center in another city that is kept in standby and is just sitting there ready to take over. (Actually, you normally have a mix of services active at either location.)

The company I work for makes telecom equipment, and supporting geo redundancy is a fairly key requirement for some major customers.
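
The mixed active/standby arrangement described above can be sketched as a simple failover map (site and service names below are invented for illustration):

```python
# Sketch of geographic redundancy: each service has a primary and a
# standby site; losing an entire site flips only its services over,
# while services homed at the surviving site are untouched.

PRIMARY = {"mail": "dc-east", "web": "dc-west", "db": "dc-east"}
STANDBY = {"mail": "dc-west", "web": "dc-east", "db": "dc-west"}

def serving_site(service, down_sites):
    # Return the first live site in preference order, or None if both are lost.
    for site in (PRIMARY[service], STANDBY[service]):
        if site not in down_sites:
            return site
    return None

assert serving_site("web", set()) == "dc-west"          # normal operation
assert serving_site("mail", {"dc-east"}) == "dc-west"   # site loss: failover
assert serving_site("web", {"dc-east"}) == "dc-west"    # unaffected service
```

Because load is normally mixed across both sites, a site loss only moves the services homed there, rather than cold-starting an idle datacenter.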

Re:how does redundancy help you when the main powe (0)

Anonymous Coward | about 3 years ago | (#37687232)

how does redundancy help you when the main power switch goes down

The natural-gas-fed backup generators automatically kick on, supplying more than enough power to run the facility indefinitely.

on fire

Halon suppressant systems. Puts out the fire, doesn't do anything to the equipment. It'll suffocate humans, however, so it's best used in an unmanned room.

Let's see firemen make a big mess

They are trained in how to deal with Halon systems and with the specific kinds of fires that occur in datacenters and other electric-heavy facilities.

and no one is there to start the rebuild

Within a few minutes of the alarms tripping, someone in a central monitoring center will dispatch repair techs to the site. There will be a spares depot located somewhere close by with replacement equipment.

find out you need to call in this other guy to fix the switch or generator.

Generators and electrical work are usually contracted out to some kind of local company which specializes in that stuff.

Unmanned simply means there isn't someone there on a daily basis. I'm not sure why this is being talked up like it's some kind of new concept, since tens of thousands of companies all around the planet have been doing this for many, many years.

Re:Two points. (1)

evilviper (135110) | about 3 years ago | (#37685828)

It predated anything useful, even the Web I think. Netscape was being launched in 1998, Berners-Lee was making a NeXT browser in 1990, and AOL for Windows existed in 1991.

The web was around, and in force, MUCH earlier than you would imagine. Windows 98 had Internet Explorer version 4 inextricably linked to the OS. Not version 1, but version 4. Internet Explorer was conceived as a weapon against Netscape, so there's no way IEv4 predated Netscape...

And before the WWW, the internet was quite useful. Newsgroups, FTP sites, and Gopher sites contained a lot. Many people here were downloading floppies of Slackware Linux back then...

  I don't think I could have surfed the Web in 1991 with a Windows machine, but I could use AOL.

No, you couldn't, because NOBODY had Windows in '91. Everyone was running MS-DOS. I still remember the CompuServe and Prodigy login screens from their old DOS apps. Trumpet Winsock is irrelevant in the DOS days.

Re:Two points. (1)

bill_mcgonigle (4333) | about 3 years ago | (#37686818)

No, you couldn't because NOBODY had Windows in 91.

What in the world are you talking about?

Re:Two points. (0)

Anonymous Coward | about 3 years ago | (#37687280)

You fail at correction..

Windows 3.0 sold quite well, and it was released in 1990. I was using it in 1990, and it was commonplace on PCs at the time. Windows 3.1 was also a very strong performer for Microsoft. He said that Berners-Lee was making a browser in 1990...which is about correct. The web wasn't around before that, by definition. So it wasn't around earlier than they imagined. Yes, he got the date on Netscape wrong, but web browsers were not being distributed before 1992, and even then pretty much only amongst those in education or special commercial affiliations.

AOL started off as an online service called Quantum Link for C64s. It very much predates the web, by something like 7 years. Additionally, in the US at least, Internet (not just web) access for the general public was quite limited until the early to mid 90s.

Re:Two points. (0)

Anonymous Coward | about 3 years ago | (#37687372)

No, you couldn't because NOBODY had Windows in 91.

Try research before posting next time.

Windows 3.0 was released in the Spring of 1990, and Trumpet Winsock was what was used by AOL at the time for its Windows-based client.

Re:Two points. (2)

Jay L (74152) | about 3 years ago | (#37687018)

AOL initially ran on a network of Stratus fault-tolerant minicomputers, each running two to eight 680x0 CPUs. Later we added Unix boxen, some beefy SGIs and HPs for servers, and Suns for front-end telco interfacing IIRC. By the mid-90s we grew a Tandem fault-tolerant cluster for our critical databases; it did hot component failover, multimaster replication, all the stuff that's common today, but with SQL down in the drive controller for blazing speeds. We didn't really start moving to a PC-based architecture until the late '90s, when Linux provided cheap, reliable-enough workhorses, and helped drive the big iron prices down too.

Re:Two points. (1)

Jay L (74152) | about 3 years ago | (#37688720)

Wow. I will never post from an iPhone again...

Re:Two points. (1)

rickb928 (945187) | about 3 years ago | (#37689060)

Wow. We're still two years from decommissioning our Stratus servers. We're still 6 months from decom of SNA. I gotta talk to the other team about stepping it up.

AOL Needs a Data Center? (2)

Trip6 (1184883) | about 3 years ago | (#37685182)

Oh yeah, to house all the dial-up modems...

Amazing! (1)

MAXOMENOS (9802) | about 3 years ago | (#37685234)

I didn't know AOL even still existed!

Re:Amazing! (0)

Anonymous Coward | about 3 years ago | (#37685926)

Really? Despite the fact that a story like this is on here every few months?

I wish people would stop posting AOL stories on slashdot. The jokes weren't funny back in the day, they're less funny now. I'm amazed that each of these can have like 30 people all post either "AOL is still around?" or "I didn't know they still existed!" or "Me too!", as if it was the height of comedy. Oh, or the hilarious jokes about the CDs and floppies. Ouch, my sides hurt.

As if the only bad users in the world came from AOL. Please. If it wasn't AOL, it would have been Earthlink or MSN or whatever.

Seriously though, get some new jokes.

really? (1)

geekoid (135745) | about 3 years ago | (#37685412)

AOWho?

Me too (1)

Pezbian (1641885) | about 3 years ago | (#37685588)

n/t /obligatory

Datacenter in a box (1)

lucm (889690) | about 3 years ago | (#37686036)

At least that way they won't need "heroic support"

Wow, is AOL still around? (2)

QuietLagoon (813062) | about 3 years ago | (#37686100)

What are they doing nowadays that requires multiple servers?

Re:Wow, is AOL still around? (0)

Anonymous Coward | about 3 years ago | (#37688506)

AOL was working on a distributed news editorial solution. The idea was that they would get all the generic news feeds and publish those (like lots of people do), but paid editors (picked from the general public) would pick the best bits and they'd promote those.

Going further, those editors would be able to read the same story from multiple sources, and write their own piece about it, quoting their sources (an editorial, no less!)

Going even further, editors could go out and do their own journalism and publish their own stories.

The idea was that a bunch of computers, programs, advertisers and crowd-sourcing would give editors 'score'. The higher their scores, the more money they got, the better their articles performed, the more money they got, and the quicker their work would get promoted, and the less oversight they got.

It's easy to bash them, but I have to say, I think this idea is pretty damn cool. It's what newspapers probably need to turn into to survive long-term, and someone the size of AOL is well placed to do it. Of course, the truth is, it's not easy, and the fact everyone thinks they're as-good-as-gone suggests that they're finding it harder than they thought.

As an Operator currently working inside a DC... (1)

Datamonstar (845886) | about 3 years ago | (#37686772)

... I say FUUUUUUUUUUUUUUUuuuuuu...

And they named it.... (1)

Gimbal (2474818) | about 3 years ago | (#37687000)

....wait for it .... Smynet! (Someone typoed)

Hope they don't have rats (1)

bryan1945 (301828) | about 3 years ago | (#37687064)

To start chewing through wires, causing power outages, starting fires, pooping in the mailbox, that kind of stuff.

Management Speak (0)

Anonymous Coward | about 3 years ago | (#37687160)

"Redundancy" was not meant for the staff.

One of the early search engines did this. (1)

Animats (122034) | about 3 years ago | (#37687386)

One of the early search engines, I think Infoseek, worked this way. Machines were installed in blocks of 100 (this was before 1U servers) and never replaced individually. Failed machines were powered off remotely. When some fraction of the block had failed, about 20%, the whole cluster was replaced.

There's a lot to be said for this. You have less maintenance-induced failure. Operating costs are low.
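The block-replacement policy described above is easy to model. A throwaway sketch (the failure rate, block size, and 20% threshold here are made-up illustration values, not Infoseek's actual numbers):

```python
import random

def run_block(size=100, fail_threshold=0.20, daily_fail_rate=0.001, max_days=10_000):
    """Simulate one block of servers under a 'replace the whole block
    once ~20% of machines have died' policy.
    Returns the number of days the block stayed in service."""
    alive = size
    for day in range(1, max_days + 1):
        # Each surviving machine fails independently with a small daily
        # probability; failed machines are powered off, never repaired.
        failures = sum(1 for _ in range(alive) if random.random() < daily_fail_rate)
        alive -= failures
        if (size - alive) / size >= fail_threshold:
            return day  # block has lost ~20% of its machines: swap the whole thing
    return max_days

random.seed(42)
lifetimes = [run_block() for _ in range(50)]
print("median block lifetime (days):", sorted(lifetimes)[len(lifetimes) // 2])
```

With a 0.1%/day failure rate a 100-machine block takes on the order of a couple hundred days to hit the threshold, which is why the per-machine maintenance cost can round to zero.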

Grid (1)

1s44c (552956) | about 3 years ago | (#37687602)

...over a decade before the term Cloud was even coined.

You mean back when it was called 'grid'?

Not much to see here (1)

LordFolken (731855) | about 3 years ago | (#37687624)

What they did:
* Modularize/Standardize Infrastructure, e.g. storage & computing power
* Build provisioning systems
* Virtualize everything

When they say that they are flexible, they mean that they have a lot of dark hardware lying around.
