
What Web 2.0 Means for Hardware and the Datacenter

ScuttleMonkey posted more than 5 years ago | from the driving-the-bottom-line dept.

Data Storage 125

Tom's Hardware has a quick look at the changes being seen in the datacenter as more and more companies embrace a Web 2.0-style approach to hardware. So far, with Google leading the way, most companies have opted for a commodity server setup. HP and IBM however are betting that an even better setup exists and are striking out to find it. "IBM's Web 2.0 approach involves turning servers sideways and water cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it's worth choosing systems that make it easier and cheaper to deal with those failures."


RTFA... (4, Funny)

Anonymous Coward | more than 5 years ago | (#23547505)


I'd love to RTFA but there's no link...

Oh, honestly (5, Funny)

Anonymous Coward | more than 5 years ago | (#23548283)

This is Web 2.0 we're talking about here. You don't need any links! :)

Re:RTFA... (5, Insightful)

GuldKalle (1065310) | more than 5 years ago | (#23548403)

There is no direct link to the story because there's no direct link between Web 2.0 and redundant hardware setups.

Re:RTFA... (-1, Troll)

Anonymous Coward | more than 5 years ago | (#23548539)

I'd love to RTFA
You're either a lying SOB or you must be new here

Re:RTFA... (0)

Anonymous Coward | more than 5 years ago | (#23549735)

I'd love to RTFA
Pfft, like hell you would.

WTF ? The Web 2.0 approach to hardware? (4, Insightful)

Colin Smith (2679) | more than 5 years ago | (#23547507)

Web 2.0 is about a thousand layers above hardware; it does not "approach" it in any manner.

 

Re:WTF ? The Web 2.0 approach to hardware? (4, Funny)

tsalmark (1265778) | more than 5 years ago | (#23547669)

But if you run 2.0, your hardware wants to be on its side, to, um, well, run better, because it's 2.0

Get on board or get left behind (1)

raftpeople (844215) | more than 5 years ago | (#23547697)

When I order a mocha, now I ask for 2.0 % milk

Re:Get on board or get left behind (2, Funny)

thePowerOfGrayskull (905905) | more than 5 years ago | (#23548021)

When I order a mocha, now I ask for 2.0 % milk
Pfft. Real nerds ask for milk with a 0.02 fat concentration.

Re:Get on board or get left behind (2, Informative)

xaxa (988988) | more than 5 years ago | (#23548359)

When I order a mocha, now I ask for 2.0 % milk
Pfft. Real nerds ask for milk with a 0.02 fat concentration.
Real nerds drink black coffee.

Re:WTF ? The Web 2.0 approach to hardware? (1)

visualight (468005) | more than 5 years ago | (#23547859)

I don't think the article's author defines Web 2.0 the same way I do. Commodity virtual and/or high-performance clusters have been around for at least a decade and don't have anything to do with website design.

Don't blame the article author.. (3, Interesting)

Junta (36770) | more than 5 years ago | (#23547995)

The big companies are locking on to 'Web 2.0' as a moniker for embracing an idea they had been completely ignoring until Google took advantage of it and forced everyone to notice. Smaller companies had already gotten the message that while hardware-failure tolerant servers have their place, in many situations with large numbers of systems the only practical place to solve it is in software, and then expensive hardware redundancy is superfluous, costing both initial money and additional power/cooling.

I'm not saying Google was by any means the first to think of this or do it, but no one else with this as part of their core strategy had come into the spotlight to the degree Google has. Every single one of Google's moves has, to the industry at large, become synonymous with 'Web 2.0', and as such hardware designs done with an eye on Google's datacenter sensibilities logically become 'Web 2.0' related. You'll also note them saying 'green computing' and every other possible buzzword that is fashionable.

Of course, part of it is to an extent trying to create a sort of self-fulfilling prophecy around 'Web 2.0'. If you help convince the world (particularly venture capitalists) that a bubble on the order of the '.com' days is there to be ridden, you inflate the customer base. Market engineering in the truest sense of the phrase.

Unfortunately there's one single definition (5, Informative)

Moraelin (679338) | more than 5 years ago | (#23548705)

Unfortunately, there is one single definition of "Web 2.0", and that is the one of the guy who registered that trademark: Tim O'Reilly.

Now I'm not usually one to make a big fuss over using a word wrong, but this one is actually a trademark. Deciding to use it in any other way is a bit like deciding to call my Audigy 4 sound card a GeForce or an Audi. It just isn't one.

And the extent to which both tech "pundits" and PHBs use it wrong, while (at least the latter) proclaiming their undying love and commitment to it, just leaves the impression that they use it as yet another buzzword. You don't proclaim your commitment to a technology unless you actually understand what it is, how it can help you, and preferably how it compares to other technologies to the same end. Just going with a buzzword because it's popular, and ending up pledging your company to the camp of such a buzzword, is as silly (and often has the same effects) as making it your strategy to use scramjets in bicycles, just because everyone seems to love scramjets lately and you wouldn't want your mountain bike company to be left behind.

To get back to the actual definition of that trademark, it's not even about technology as such. It's about people. It's not techno-fetishism, as in liking cool new technologies for their own sake; it's techno-utopianism: the misguided belief that you only need to give more internet tools to a billion monkeys to get a utopia like nothing imagined before. Although said monkeys never created anything worth reading with a keyboard, if it's keyboards connected to the Internet, now that's how you hit a gold mine.

O'Reilly's idea is sorta along the lines of:

- forget about publishing content (e.g., hiring expensive tech writers and marketers for your site), it's all about participation, baby. Let users write your content. Just put in some wikis and forums, and a thousand bored monkeys will do the work faster, cheaper, and more accurately. (People will just flock to offer you some free, quality work, just because they like donating to a corporation, I guess. And if instead you discover comments about how much your company sucks, the CEO's sexual orientation, and his mom's weight, well, I guess it must be true, 'cause collaborative efforts can't _possibly_ be wrong.)

- forget about setting up your own redundant servers or dealing with Akamai, use BitTorrent. (Ask a lot of people how they felt about Blizzard's going almost exclusively through BitTorrent at launch. Nowadays their own servers serve a lot more of the content when not enough other users are stuffing your pipe. I wonder why.)

- forget selling media on the Internet, teh future is Napster letting people pirate it, like happened way back then. (No, literally, the "mp3.com --> Napster" line is part of his own page explaining Web 2.0. Good thing no one told Steve Jobs that, I guess.)

- forget content management systems, use wikis. (I wonder in which alternate reality the piss-poor search engines of wikis can be compared to the capabilities of those systems.)

- for that matter, forget about structuring information in any way, like through directories and portals, just let the users tag it. (I'm _sure_ that the tags "humor, theft, oldnews, !news, digg" will so help me find the story about a manager stealing the server from earlier. Never mind that search engines were already dumping searching for tags, in favour of full text search, even at the time when he came up with that idea.)

Etc.

Basically, if you have the patience to sift through his ramblings, and don't give up at the "well, Google started up as a web database" intro, the meat begins at "Harnessing Collective Intelligence". That's what it's about. It's not so much about what technology you use on the web; it's about connecting a billion clueless monkeys and believing that the result is something a billion times more intelligent and informed. Anything that helps connect those monkeys is good; anything else is irrelevant. Even whether you use AJAX or whatever other stuff misguidedly got called Web 2.0 is really irrelevant there.

Even Google is his poster child only because (A) it lets you reach other people's data, and (B) in the meantime it had a big fat IPO. The latter goes a long way to get people's attention that maybe another bubble can be started, and they too can get a chunk of VC money for just having a hare-brained site. Only this time with wikis on it.

_That_ is Web 2.0.

Now _I_ am not particularly convinced of its viability, but even that's rather secondary. What I'm trying to say is: well, whoever pledges their undying commitment to Web 2.0, at least damn better understand what that means. If they still like it then, fine. But let's not pretend that buying a water-cooled rack has _anything_ to do with Web 2.0.

Re:Unfortunately there's one single definition (1)

ducomputergeek (595742) | more than 5 years ago | (#23549885)

The guy who founded MP3.com (IIRC) wrote a book entitled "The Cult of the Amateur", in which he basically talks about going on a weekend getaway with O'Reilly, and it has an interesting critique of Web 2.0. So much so that I bought a couple of extra copies and have given them to PHBs when I'm hired to do consulting work.

Not saying I disagree (2, Insightful)

Junta (36770) | more than 5 years ago | (#23549931)

Merely that the companies are the ones tinting the situation for their benefit. 'Web 2.0' has become a bit of marketeering, since the original definition doesn't help a lot of those companies sell more crap.

However, fighting for the original spirit/meaning of 'Web 2.0' is, to an extent, like fighting for correct usage of 'begging the question': while you may be in the right, the masses still adopt the common usage. And in Web 2.0 in the true sense of the word, the most popular opinion tends to win, and thus Web 2.0 isn't that anymore ;) It's sort of ironic that the core meaning of Web 2.0 is exactly what allows it to not retain that meaning at all.

Web 2.0 has deteriorated to mean 'second coming of .com, come and buy your servers and services before it's too late!' after all the marketing groups got a hold of it. O'Reilly made the mistake of coming up with too catchy a phrase that accurately described aspects of key popular sites, and the only thing the business types see are the aspects that correlate to money.

Re:WTF ? The Web 2.0 approach to hardware? (2, Informative)

jonbryce (703250) | more than 5 years ago | (#23548711)

Web 2.0 is a corporate buzzword that PHBs throw into discussions to make it sound like they are really up to date.

Re:WTF ? The Web 2.0 approach to children? (0)

Anonymous Coward | more than 5 years ago | (#23548009)

We've been deep into Child 1.0 but there were a lot of issues so my wife turned sideways for Child 2.0 and we're sure our cost efficiencies, eco-awareness and community acceptance will be a lot higher.

Re:WTF ? The Web 2.0 approach to hardware? (3, Interesting)

fan of lem (1092395) | more than 5 years ago | (#23548175)

Servers post to twitter whenever they "don't feel well". Web 2.0-enabled system admins react quicker! (Esp. with a Firefox plugin)

Re:WTF ? The Web 2.0 approach to hardware? (3, Informative)

thatskinnyguy (1129515) | more than 5 years ago | (#23548205)

Web 2.0 is about a thousand layers above hardware; it does not "approach" it in any manner.

Not to be pedantic, but it depends on what model you are using. According to the OSI model, the Application Layer is 6 layers above the Physical Layer. And according to the TCP/IP model, the Application Layer sits 4 layers above hardware.

Network models with thousands of layers?! Not only is that crazytalk, it's way too precise to be practical.

Re:WTF ? The Web 2.0 approach to hardware? (1)

klapaucjusz (1167407) | more than 5 years ago | (#23548701)

Web 2.0 is about a thousand layers above hardware; it does not "approach" it in any manner.

If you're running a highly redundant and completely pointless application, then you want to optimise your hardware differently than if you're running a monolithic and mission-critical one. Which is what the article is about.

Re:WTF ? The Web 2.0 approach to hardware? (2, Interesting)

hackstraw (262471) | more than 5 years ago | (#23548817)

My thoughts exactly. It's like "Hmm, we need a good buzzword here... ah, Web 2.0, that will work".

I haven't read the FA yet, but infrastructure-wise, here are the big two with data centers: 1) power, 2) cooling. Always has been, always will be. Frankly, I think that pumping a bunch of cold air into the floor is a bit primitive. I think in the near future we will see power and cooling become more a part of the racks than the way it's done now. There are some data centers doing this, but it's still too new to be universal.

I've thought for a long time that the hot row/cold row thing is also a bit primitive. I think it would be cool if there were plenums _between_ the racks that removed the heat from the systems _upward_, not front to back like it's done now.

I also don't understand why DC/telco-type systems are not more common: put redundant power supplies in the racks and don't give each 1U pizza box its own power supply. So much energy is lost this way, it's not even funny.
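The energy argument behind shared rack-level DC power comes down to rectifier efficiency. A rough back-of-the-envelope comparison follows; every figure here (server count, per-server wattage, both efficiency percentages) is an illustrative assumption, not a measurement of any real product.

```python
# Rough efficiency comparison: per-server PSUs vs a shared rack supply.
# All figures below are illustrative assumptions, not measured values.
servers = 40                    # 1U boxes in a rack (assumed)
load_per_server_w = 250         # DC-side draw per server (assumed)

per_server_eff = 0.75           # cheap individual 1U PSU (assumed)
rack_supply_eff = 0.92          # shared redundant rectifier (assumed)

dc_load = servers * load_per_server_w          # 10,000 W of useful load
ac_individual = dc_load / per_server_eff       # wall power, one PSU per box
ac_shared = dc_load / rack_supply_eff          # wall power, shared supply

print(f"wasted per rack, individual PSUs: {ac_individual - dc_load:.0f} W")
print(f"wasted per rack, shared supply  : {ac_shared - dc_load:.0f} W")
```

Under these assumed numbers the per-box supplies dissipate roughly 3.3 kW per rack as heat versus under 1 kW for the shared supply, and that waste heat has to be paid for a second time in cooling.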

Anyway, while Web 3.0 is on its way, I'll read the FA and see what is going on. I didn't know HP was in the petabyte storage arena, and I'll also see what IBM is up to...

OMG (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#23547509)

Lol, my first FIRST PR0ST!

I'm leery of water-cooling servers... Too much to go wrong, too spectacularly...

Um... wheres the article? (0)

Anonymous Coward | more than 5 years ago | (#23547511)

Oooi, editors...

A link in the story please? (2, Funny)

Anonymous Coward | more than 5 years ago | (#23547513)

How about a link in the story? Or what's the deal here? We need a site to slashdot!

Re:A link in the story please? (1, Insightful)

RpiMatty (834853) | more than 5 years ago | (#23548071)

No one RTFA anyways so what does it matter?

Try slashdotting my server http://127.0.0.1/ [127.0.0.1]

Re:A link in the story please? (1)

thefekete (1080115) | more than 5 years ago | (#23549677)

How stupid of you to list your IP!

I've entered it into the queue and in a few moments, my botnet will begin a DoS atta

Re:A link in the story please? (1)

servognome (738846) | more than 5 years ago | (#23549145)

Dude that's the whole point of the story! In Web 2.0 you don't even need a link so IBM is replacing servers with sideways water cooled racks, no electronics needed... think of the profit margins!

No link (0, Redundant)

Kickboy12 (913888) | more than 5 years ago | (#23547515)

Where's the article link?

Re:No link (0)

Anonymous Coward | more than 5 years ago | (#23547703)

Where's the article link?
Are you implying that you RTFA? That's unheard of!

Re:No link (1, Funny)

Anonymous Coward | more than 5 years ago | (#23547721)

It's just the final step for slashdot, since nobody actually RTFA anymore anyway. :>

Web 2.0 (4, Insightful)

77Punker (673758) | more than 5 years ago | (#23547541)

Oh, I get it. This is Web 2.0 hardware setup because users can add and modify servers as they see fit! Wait, the users have no control over the hardware?

Sounds pretty stupid, but maybe Tom's hardware guide has a good explanation...wait, there's no link to the article, or anything at all! At least we'll get some good discussion going because this is Slashdot, right?

This is probably the worst article I've ever seen on Slashdot.

Re:Web 2.0 (1)

rjshirts (567179) | more than 5 years ago | (#23547645)

It's a holiday. Everyone that posts informative articles is probably off wasted because it's their one day off this month. That's also probably why this article made it through with no link - one too many shots of tequila to celebrate Memorial Day........

I'm drunk (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#23547961)

I'm drunk you insensitive clod.

Re:Web 2.0 (0)

Anonymous Coward | more than 5 years ago | (#23547653)

Hello, you must be new here.

Re:Web 2.0 (0)

Anonymous Coward | more than 5 years ago | (#23548115)

You've obviously never checked out idle...

Re:Web 2.0 (4, Informative)

Eponymous Bastard (1143615) | more than 5 years ago | (#23548509)

This is Web 2.0 hardware setup because users can add and modify servers as they see fit!
Actually, I found this part interesting, from HP's offering:

When drives or the fans in the disk enclosures fail, the PolyServe software tells you which one has failed and where - and gives you the part number for ordering a replacement. Add a new blade or replace one that's failed and you don't need to install software manually. When the system detects the new blade, it configures it automatically. That involves imaging it with the Linux OS, the PolyServe storage software and any apps you have chosen to run on the ExDS; booting the new blade; and adding it to the cluster. This is all done automatically. Automatically scaling the system down when you don't need as much performance as you do during heavy server-load periods or marking data that doesn't need to be accessed as often, also keeps costs down. [emphasis mine]

I know, not what you meant, but a funny coincidence.

IBM is offering a more optimized rack, with shared and optimized power supplies, a different arrangement for the fans, a heat exchanger in every rack tied to your building's air conditioning (which Tom's interprets as water cooling), and a couple of other things.

HP has a weird clustering software/hardware hybrid with large amounts/density of RAID 6 storage (for a flickr-style site, for example) together with a cluster of blades that can all access all the storage and can be added/removed at will. Interestingly they point at scaling down the system when load is low, to keep the costs down. I wonder if they put servers on stand-by automatically or something. They are also looking at not spinning all the disks all the time, but they're not there yet. I guess having some disks acting as a write cache could allow you to at least spin down the parity disks of the LRU sections or some such. You could even cache the read side if you're willing to put up with the spinup delay on a cache miss.

Supposedly this is Web 2.0 because you want a google-style cluster with lots of generic hardware where any one computer can go down and the whole thing keeps going. IBM wants to lower the maintenance costs, HP didn't show them the server side, but pushed their storage technology.

Re:Web 2.0 (1)

alx5000 (896642) | more than 5 years ago | (#23548537)

This is probably the worst article I've ever seen on Slashdot.

You must be new here ;)

Re:Web 2.0 (1)

77Punker (673758) | more than 5 years ago | (#23548619)

If this isn't the worst posting ever, what is? I've probably read 80% of the posts on this site for the last 7 or 8 years, and this one really stuck out for some reason. Just overwhelming misuse of already annoying (nearly meaningless) buzzwords combined with no link.

Re:Web 2.0 (1)

alx5000 (896642) | more than 5 years ago | (#23548797)

Yes, I was only trying to be "funny". I completely agree with you: this is a blatant slashvertisement about two big companies taking advantage of the many buzzword-loving pointy-haired bosses out there, who are willing to puke huge piles of money into anything 'enterprisey'.

I think we should celebrate the fact that there was no link to begin with ;)

Re:Web 2.0 (1)

77Punker (673758) | more than 5 years ago | (#23549857)

That's a pretty grim outlook. I thought it was just ineptitude. I was serious about asking about the worst article ever, though. We should come up with a top/bottom 10 list of Slashdot stories on a slow news day.

The link... (0)

Anonymous Coward | more than 5 years ago | (#23547605)

http://www.tomshardware.co.uk/servers-hp-ibm,review-30875.html

Web 2.0 and hardware (4, Insightful)

gnuman99 (746007) | more than 5 years ago | (#23547625)

WTF is TFA link?

But from the summary, it seems that "Web 2.0 servers" are like "Web 1.0 servers", but they would need more:

    1. storage (for user comments)
    2. I/O (less caching, more throughput)
    3. processing power

But then, that is just common sense. Regardless, "Web 2.0" is clearly a term misused to the fullest extent possible these days. It might as well be "web enabled" or "linux" at the end of the '90s.

Re:Web 2.0 and hardware (4, Funny)

Bodrius (191265) | more than 5 years ago | (#23547687)

If they need those 3 things to offer the same performance, and are uglier to boot... then yeah, that's Web 2.0 all right.

Maybe they let Ops mod their servers too.
Gotta bring in the user content aspect into the picture.

Re:Web 2.0 and hardware (1)

Courageous (228506) | more than 5 years ago | (#23548387)

Nah, what they mean when they refer to "Web 2.0" hardware is the same as Web 1.0... distributed stateless servers. Commodity replaceable parts, with software architectures designed to "run anywhere, we don't care where".

With that approach, the big SMP systems are now the detritus of technologies past. Everything is cluster this, cluster that, basically. Web 2.0 doesn't change this any, but it makes for a nice buzzword.

But never mind that. The article presents HP as having a storage solution at a "significant fraction" of the cost of their competitors. It goes on to state that "Current enterprise storage costs are around $15 per gigabyte".

Um. Hello?!? I get to see the no-kidding per-usable-gigabyte prices of every significant vendor in the Gartner magic quadrants... even the most expensive vendors, top of the line, king of kings, are nowhere near $15 per gig! WTF!?

If an HP salesman told me something like that, I might have to show him the door. "Lies do not become us."

And if they are targeting $2/GB, they really need to be looking at the competition. Seriously. Many vendors will be sub $1 before the end of the year.

C//

Re:Web 2.0 and hardware (1)

Vancorps (746090) | more than 5 years ago | (#23548985)

I think you are very confused about what they mean when they say $15/gig: that includes redundant load-balanced RAID controllers, the power required to run it all, and any internal switching such as InfiniBand, 10GigE, or FCP interfaces.

When you add all this up, then yes, a 60 TB SAN array is not going to cost the same as sixty 1 TB hard drives.

Enterprise-class storage with all the associated equipment does not come cheap, even today. And when your goal isn't just bulk capacity, you also have to consider the number of spindles necessary to get the best speed; then things get complicated and the cost/gig rises quickly.
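The arithmetic behind a "fully loaded" figure like that can be sketched as follows. Every price and component here is hypothetical, made up purely for illustration; none of it comes from the article or from any vendor quote.

```python
# Hypothetical fully-loaded cost per gigabyte for a small SAN.
# Every figure below is an assumption for illustration; real quotes vary.
drive_count = 60
drive_capacity_gb = 1000          # 60 x 1 TB of raw disk
drive_cost = drive_count * 250.0  # the commodity drives alone: $15,000

controllers = 2 * 20000.0         # redundant load-balanced RAID controllers
switching = 15000.0               # InfiniBand / 10GigE / FCP interconnect
power_cooling = 10000.0           # PDUs, UPS share, cooling allocation

raw_gb = drive_count * drive_capacity_gb
total_cost = drive_cost + controllers + switching + power_cooling

print(f"drives only : ${drive_cost / raw_gb:.2f}/GB")   # $0.25/GB
print(f"fully loaded: ${total_cost / raw_gb:.2f}/GB")   # $1.33/GB
```

Even with these made-up numbers, the surrounding infrastructure multiplies the naive drives-only price several times over, which is the point being argued here; whether the multiplier ever reaches $15/GB is exactly what the thread disputes.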

Re:Web 2.0 and hardware (1)

Courageous (228506) | more than 5 years ago | (#23549055)

I buy redundant active-passive HA storage controllers, all storage attached to the system and networked to the storage controllers, with fibers, SFPs, RACKED, and with IP addresses inserted for me at the factory and turnkeyed for us, for under $2/GB... per usable gigabyte, that is, after the file system and RAID-6 overhead is removed... from one of the leading and most well known vendors in the storage community today.

That's for Tier-II. For Tier-I, it's in the $4.50 range. I am presuming that the HP solution was a SATA one, hence my eye-rolling at their fantabulous $2/GB figure.
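The gap between raw and usable $/GB mentioned above is simple arithmetic once you pick a RAID-6 group size and a file-system reserve. The group size, price, and overhead percentage below are illustrative assumptions only, not figures from the thread or from any vendor.

```python
# How raw capacity shrinks to "usable" gigabytes under RAID-6.
# Group size, price, and FS reserve are assumptions for illustration.
disks = 12                 # disks per RAID-6 group (assumed)
disk_gb = 1000
parity_disks = 2           # RAID-6 stores two parity stripes per group
fs_overhead = 0.05         # assume the file system reserves ~5%

raw_gb = disks * disk_gb
after_raid = (disks - parity_disks) * disk_gb
usable_gb = after_raid * (1 - fs_overhead)

cost = 20000.0             # hypothetical price for the whole group
print(f"raw    : {raw_gb} GB -> ${cost / raw_gb:.2f}/GB")       # $1.67/GB
print(f"usable : {usable_gb:.0f} GB -> ${cost / usable_gb:.2f}/GB")  # $2.11/GB
```

This is why per-usable-gigabyte quotes always run noticeably higher than per-raw-gigabyte ones: the same dollars are spread over roughly 20-25% fewer gigabytes after parity and file-system overhead.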

Did I not tell you I get to see prices (as in quotes for multi-petabyte systems) from all the significant magic quadrant vendors? I did say that, didn't I?

I'll give HP a bit of credit, at least, for doing with PolyServe something that traditional NAS vendors can't yet do, and therefore perhaps some credit on pricing.

But $15/GB? Salesman, please step away from the crack pipe.

C//

New Buzz Hardware... (2, Insightful)

hyperz69 (1226464) | more than 5 years ago | (#23547637)

For New Buzzwords...

Web 2.0 is gonna be better than Web 1.0, just like Vista was WAY WAY better than Windows XP!

Though I mean, come on, seriously, this stuff is getting a bit dicey. Web 2.0 isn't really even a standard of OTHER standards. It's a term for how much Java, Shockwave, and ads you can JAM INTO A WEBSITE!

What Web 2.0 means for hardware is that a bunch of companies late to taking in the $$$ from Web 1.0 are not gonna miss the next gravy train. Overselling to data centers a rack of water-cooled 128-core .5U servers for what can be done on a modified Xbox.

Then again, that's just my opinion.

P.S. Don't mod me troll; I am fragile, like an IPO for a search engine.

Re:New Buzz Hardware... (2, Insightful)

maxume (22995) | more than 5 years ago | (#23547877)

Web 2.0 doesn't have anything to do with Java, unless you are serving your Javascript with it.

Re:New Buzz Hardware... (1)

dafdaf (319484) | more than 5 years ago | (#23548607)

And since we're talking about servers, that would be... correct !

Re:New Buzz Hardware... (1)

maxume (22995) | more than 5 years ago | (#23548729)

It's a term for how much java, shockwave, and ads you cam JAM INTO A WEBSITE!

Yippy skippy doodle.

Better names other than Web 2.0 (2, Funny)

Chicken_Kickers (1062164) | more than 5 years ago | (#23548137)

Calling it Web 2.0 is so...1999. I propose some more marketable names:
  • Web II: The P2P Quickening
  • Web 2: RIAA Judgment Day
  • Web II: The Wrath of Ballmer
  • Web 2: The Information Hyper Highway
  • Web Episode II: Attack of the SpamBots
  • The Son of Web
  • The Second coming of Web
  • The Web Reloaded
  • The Web, Professional Edition
  • The Web, Service Pack 2

Re:New Buzz Hardware... (3, Informative)

anomalous cohort (704239) | more than 5 years ago | (#23548139)

The poster equates Web 2.0 to "java, shockwave, and ads" and gets modded as insightful? Riiiiight.

Even if you were only focused on the technical aspects of Web 2.0, you would realize that these so-called Web 2.0 [blogspot.com] sites used AJAX and neither java nor shockwave. An even more relevant description of web 2.0 would include such terms as collective intelligence [blogspot.com], user generated content [transitionchoices.com], or the long tail [blogspot.com].

Re:New Buzz Hardware... (2, Insightful)

mad_minstrel (943049) | more than 5 years ago | (#23548179)

Oh. I thought Web 2 was where users use your website as an application, rather than perceive it as content, and then you charge advertisers. Client-side scripting makes the user experience better by providing a more responsive interface, but plain html would work ok, provided there's a fast enough connection on both ends. Right?

So it means the same as anything else these days? (1)

Nursie (632944) | more than 5 years ago | (#23547665)

Commodity hardware and a solution like VMware ESX.

High availability, built in redundancy, cheap per-unit cost. What's not to like?

Works for your mission critical apps and your less critical stuff.

Re:So it means the same as anything else these day (3, Funny)

morgan_greywolf (835522) | more than 5 years ago | (#23547851)

No, no, no, no. VMWare ESX was Web 1.5. New hardware is Web 2.0. Get with the program!

For those not keeping up, here is my guide to Web 2.0:

Web 1.0: House blend coffee
Web 1.5: Tall, skinny latte with soy milk
Web 2.0: Frappuccino.

Web 1.0: Static HTML
Web 1.1: Dynamic HTML
Web 1.5: Dynamic XHTML
Web 2.0: HTML? What's that?!

Web 1.0: Cisco routers
Web 1.1: Cisco routers running IOS
Web 1.5: Nortel routers
Web 2.0: Who needs routers? We have IPV6!

Web 1.0: Wired
Web 1.5: Wireless
Web 2.0: Sharks. With friggin' LASERS attached to their heads!

Re:So it means the same as anything else these day (1)

notdotcom.com (1021409) | more than 5 years ago | (#23548117)

Because when I have a virtual host fail (kernel dump), I want 15 "high availability" servers to all go down simultaneously, instead of one.

Re:So it means the same as anything else these day (1)

Nursie (632944) | more than 5 years ago | (#23549527)

And come back up in seconds on redundant hardware or spare capacity running in the same VM cloud off the same SAN.

Strangely enough, they thought of that.

Re:So it means the same as anything else these day (1)

Fallon (33975) | more than 5 years ago | (#23548369)

Actually, VMware is exactly the wrong way to go (although it does rock for many purposes). If you're looking to put up a server farm for "web 2.0" apps, you want each box running as close to 100% efficiency as possible, with no extra overhead (like VMware's). As you scale up in a modular environment, you'll gladly trade off the flexibility that VMware gives you for more efficiency. Redundant boxes provide high availability, rather than extra software.

Spec'ing hardware for an application farm usually means piles of blades or 1U boxes; depending on the application, you probably don't even need redundant hardware: just replace the failed box and rebuild from scratch. If you're going VMware, you'll end up spec'ing bigger mega-CPU boxes with massive RAM.

In smaller shops, where you require more flexibility to run servers/services that don't require farms, VMware gives you better bang for your buck. Nothing beats VMware for R&D, either.

Re:So it means the same as anything else these day (1)

rawler (1005089) | more than 5 years ago | (#23548489)

Uhm, yeah. Gotta tell that to my friends at work. Their VMware virtual hosts aren't constantly rebooting themselves, going down, crashing in the kernel, drifting in clock, etc. It just seems like it.

Also, the way they manage to get some 15 poorly performing servers out of hardware and software investments close to 50k Euro must be brilliant. We get decent-performing servers for some 1.5-2k Euro per server (WITH full redundancy for the services that require it, including separate disk arrays).

Sorry, but I don't buy VMware (or any other host-level virtualisation, for that matter) as the solution to everything, as many people tend to nowadays. Host virtualisation is good for quickly jamming out test environments for software integration, and for testing new and different software stacks and OSes. Possibly also virtual hosting for customers that need to run their own OS. For other purposes, the service/server differentiation in the standard OS/process model tends to work out pretty well. If you need to get fancy, OS-virtualisation [wikipedia.org] may be an option.

Re:So it means the same as anything else these day (1)

Nursie (632944) | more than 5 years ago | (#23549509)

"Gotta tell that to my friends at work"

By your comments I take it you mean you have experience with VMware workstation. Try using ESXi and coming back to me on that. You're out of date.

In case of Slashdotting (0)

Anonymous Coward | more than 5 years ago | (#23547681)

posting anonymously to avoid karma whoring:

Tom's Hardware has a quick look at the changes being seen in the datacenter as more and more companies embrace a Web 2.0-style approach to hardware. So far, with Google leading the way, most companies have opted for a commodity server setup. HP and IBM however are betting that an even better setup exists and are striking out to find it. "IBM's Web 2.0 approach involves turning servers sideways and water cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it's worth choosing systems that make it easier and cheaper to deal with those failures."

Sick of Web 2.0 (0)

Anonymous Coward | more than 5 years ago | (#23547741)

Web 2.0 is growing up ... He allows people to interact with him ... but why even talk about it? The more we talk about it, the more important this web 2.0 guy thinks it is...

Yeah... link. Why bother? (0)

Anonymous Coward | more than 5 years ago | (#23547767)

It's Tom's Hardware. This is the entire article, only on one page instead of 6.

Turning servers sideways? (0)

Anonymous Coward | more than 5 years ago | (#23547781)

Won't somebody think of the angular momentum of the Earth?

This just in... (5, Funny)

discord5 (798235) | more than 5 years ago | (#23547791)

The best way to organize your server room for Web 2.0 compliance is by stacking the servers diagonally. This way, air can float freely between racks, improving the flow of the system administrator's gas-based bowel attacks.

Don't bother with those 10Gb switches, just hook it all up on wireless. Wireless network, wireless fibre storage, wireless power! Your megaflops (the rate at which a million projects per second will turn out to be a flop) will increase by a factor of 213% per watt.

Web 2.0, the best thing to happen to your server room since buttered toast and angry system administrators, can be yours now for only $9,999.95 per diagonal server! Why go for a 1U server when you can have a 2U for three times the price? Call now, and receive a free "My other server is a web 3.0" bumper sticker, which will be applied by an angry salesman who'll also slash your tires for FREE!

Warning: servers may not be stacked diagonally on top of each other, but rather rammed into your rack repetitively by an angry monkey (which we've nicknamed "Bob the technician"). Aforementioned technician may or may not leave presents in your servers. Do not feed Bob during the installation process, nor introduce Bob to small children and pets.

Re:This just in... (1)

antifoidulus (807088) | more than 5 years ago | (#23548003)

Warning: servers may not be stacked diagonally on top of each other, but rather rammed into your rack repetitively by an angry monkey (which we've nicknamed "Bob the technician"). Aforementioned technician may or may not leave presents in your servers. Do not feed Bob during the installation process, nor introduce Bob to small children and pets.

I thought the purpose of web 2.0 is so we would have FEWER managers.

Re:This just in... (0)

Anonymous Coward | more than 5 years ago | (#23548211)

I thought the whole purpose of Web 2.0 was to separate the lunatics who don't know shit from the people who have a fighting chance of being clueful during a job interview?

If you come into my office with a resume laden with Web 2.0 crap, I can guarantee your resume will be immediately placed first in line on the /dev/null stack while plenty of jokes are made at your expense.

Karma Whoring (1, Interesting)

Anonymous Coward | more than 5 years ago | (#23548129)

To stop the web 2.0 discussion and focus on something interesting. From TFA:

If you're an IT administrator for a bank and want to build a server farm for your ATM network, you make it fault tolerant and redundant, duplicating everything from power supplies to network cards. If you're a Web 2.0 service, you use the cheapest motherboards you can get, and if something fails, you throw it away and plug in a new one. It's not that the Website can afford to be offline any more than an ATM network can. It's that the software running sites like Google is distributed across so many different machines in the data center that losing one or two doesn't make any difference. As more and more companies and services use distributed applications, HP and IBM are betting there exists a better approach than a custom setup of commodity servers.
Then they go on to talk about how Google uses custom power supplies, how people are now charged by power consumption, and how blade-style servers use up too much power (?)

They mentioned preconfigured Linux servers for cheap, to help people avoid the extra work in setup (?)

Etc. A jumble of suggestions for cheaper data centers, cooling many midrange servers, and so on.

I would've thought selling VMs on a power-efficient mainframe would be more up IBM's alley, but that's not what they are selling. Anyone got any better ideas?

(posting anonymously so as not to karma whore)

The Blue Gene solution? (1)

TjOeNeR (1110041) | more than 5 years ago | (#23548223)

They cool their servers from the bottom to the top. Also sideways, so kinda diagonally, but they're getting excellent results. Sounds like an efficient idea to me. http://www.ibm.com/systems/deepcomputing/bluegene/ [ibm.com] And why don't they use Western Digital GreenPower drives? They have an enterprise version of those, don't they?

Re:The Blue Gene solution? (0)

Anonymous Coward | more than 5 years ago | (#23548549)

I'll see your Blue Gene suggestion and raise you actual working code [ibm.com].

So, after reading the article ... don't bother. (4, Interesting)

oneiros27 (46144) | more than 5 years ago | (#23548241)

They mention 'sideways', and I thought they just meant rotating about the depth of the rack (i.e., so a 19" rack would be about 11U wide), but the discussion talks about the fans being 15" away vs. 25" ... which makes no sense, as they're mentioning servers being 47" deep. I think they're talking about side venting, which is what Suns _used_ to have, but you'd have to get those 30"-wide racks (so there'd be ducts on each side for airflow in/out).

And we have the useless quote:

"In a data center the air conditioning is 50 feet away so you blow cool air at great expense of energy under the floor past all the cables and floor tiles," McKnight said. "It's like painting by taking a bucket of paint and throwing it into the air."
I'm not going to claim that forced air is more efficient than bringing chilled water straight to the rack, as it's not -- but the comparison is crap -- anyone who's had to manage a large datacenter will have had to balance ducts before. It's not fun, I admit, but you don't just pump the air in and expect everything to work.

Then there's the great density -- 82TB in 7U. I mean, that's not bad, but the SATABeast is 42TB in 4U (unformatted), and I'm going to assume a hell of a lot cheaper. (although, it's a lower class of service). And HP's not using MAID yet, but spinning all of the disks.

My suggestion -- skip the article. It reads more like a sales brochure, with very little on the actual technical details of what they're doing.

Re:So, after reading the article ... don't bother. (1)

Charcharodon (611187) | more than 5 years ago | (#23548501)

If they are water cooling, then not having the servers stacked vertically would keep you from frying everything below the one that springs a leak.

Not the way this works.. (1)

Junta (36770) | more than 5 years ago | (#23548645)

The servers are stacked pretty much like they have been before, but not as deep. The water cooling is contained within the door and does not go to the servers (as the article says, they thought that currently that was too pricey to be worth doing). Besides, they left their options open, it's easier to tack on a door or not based on the datacenter use of chilled water or not than it is to change system heatsinks, etc etc.

On the sideways thing.. (4, Informative)

Junta (36770) | more than 5 years ago | (#23548587)

Most racks are on the order of 2 ft. wide and 4ft deep. The iDataplex racks are 4ft wide, and 2ft deep, with two columns each 19" wide. The cooling is still front to back with 19" wide servers, it's just that the racks are less deep. They are doubled up presumably to be in some way conventional for shipping, marketing, whatever, but ultimately aren't as exotic as some would fear. They could have just as well had 'normal' 42U racks with only half the depth and logically be analogous. They also take some of the spare horizontal space and carve out 16U of vertically oriented U space.

As to the air cooling aspect, I think the discussion is tilted toward the extremes of bad datacenter design to sound better, but water-cooling is more efficient to pump the distance even with clear path for the air to go. Not saying this is specific to any particular vendor (the difficulty of sticking the converse of a radiator on the back of a rack seems like it would be low), but I think IBM is fishing for ways to take advantage of two-column racks in a remotely meaningful way. In this case, the ratio of usable surface area on the water pipes to unusable plumbing in the design is higher since they can be wider.

Re:So, after reading the article ... don't bother. (0)

Anonymous Coward | more than 5 years ago | (#23549557)

The IBM rack cools front to back. The servers are 15.5" deep like Rackable Systems or the SuperMicro clone. IBM uses a proprietary rack with the servers configured side by side and the rack is only 24" deep. IBM claims this saves lots of space in the datacenter but that claim is dubious. It'll save 12-16" of cabinet depth at best and your hot aisles can be a little narrower. But does floor space really matter when you can cram 24-30kw of servers into the rack?

Eco friendly solution! (1)

ale_ryu (1102077) | more than 5 years ago | (#23548331)

I've always wondered why they don't just place the servers somewhere it's cold all year round. Imagine how much power they could save by using ducts to bring the cold outside air into the server room. I mean, a solution like this can't be that hard to implement!

Re:Eco friendly solution! (1)

Joe The Dragon (967727) | more than 5 years ago | (#23548377)

The network links to places like that are not that good and that will add LAG as well.

latency... (0)

Anonymous Coward | more than 5 years ago | (#23549535)

Please use the word 'latency' instead of 'lag'. Data centers do more than host video game servers, numbnuts.

What?! (3, Insightful)

Bootarn (970788) | more than 5 years ago | (#23548347)

So-called "Web 2.0" means JavaScript, and JavaScript runs on the client side.
I fail to see why this requires supercooled servers, and until now I didn't even think it was possible to apply the "Web 2.0" buzzword to hardware.

Web 2.0 FAQ (5, Funny)

trollebolle (1210072) | more than 5 years ago | (#23548379)

This seems like a good opportunity to mention the famous Web 2.0 FAQ by Rich "Lowtax" Kyanka on somethingawful.com. For those readers who are not entirely sure what web 2.0 is:

Question: What is Web 2.0?

Answer: Web 2.0 is a combination of Web 1.0 and being punched in the dick.

Question: How do I know I'm using a website / service / product that is officially "Web 2.0" and not actually "Web 1.0" with various patches and enhancements added to it?

Answer: Web 2.0 is made obvious by the addition of completely and highly unnecessary bells and whistles that don't do anything besides annoy you and make life more complicated. If Web 1.0 was the equivalent of reading a book, Web 2.0 is reading a book while all the words are flying around and changing pages as the book rotates randomly and sets your hands on fire. Also there's this parrot that keeps on flying towards your head in repeated attempts to gouge out your eyes.

Question: I read about this one website in Wired Magazine. Is that Web 2.0??

Answer: Oh definitely. Wired won't even mention Web 1.0 sites. Every single site in their magazine is at least Web 2.0. Sometimes they're even up to Web 45.2 (such as www.ebutts-and-credit-reports-delivered-via-carrier-pidgeon.com)!

Question: My roommate said he "digged" a "wikipedia entry" about "the blogosphere" which mentioned "podcasting" as a viable form of "crowdsourcing."

Answer: Your roommate is a faggot. Also, this wasn't technically a question.

Question: What's Web 3.0?

Answer: It's a product or service planned on release in spring of 2008, and consists solely of websites enabling the user to create even more detailed Kirby ASCII art. (O'.')-o

Re:Web 2.0 FAQ (0)

Anonymous Coward | more than 5 years ago | (#23549623)

Wow. Thanks for the laugh, I needed that :D

Web 2.0 (3, Insightful)

EricVonZippa (719996) | more than 5 years ago | (#23548439)

Web 2.0 really has nothing to do with 1U or 2U servers being configured in any specific manner, nor with the layout in the racks being "sideways", upside down, or water-cooled. Web 2.0 is about moving the complexity required to support an application from the physical hardware into the application stack. This happens when an application provider builds resiliency and redundancy into the application, and the application then uses the compute power of a series of systems merely as process stations. If a node goes offline or fails, the application moves to the next logical set or online node. This really is nothing new to the industry, other than the capability now being available on the x86 platform.

The hardware provider that will win in this space will be the provider that can build, design, and architect the highest possible compute spec while using the least amount of both space and power. It's not about virtualizing applications or operating systems. It's about squeezing as many processing units as possible into the smallest amount of space while drawing as little power as possible, with the application architected to take advantage of that. Gone are the days of needing to build fault-tolerant hardware platforms with backup power supplies, clustering, etc. Today we have smart applications that see that additional processing power is required, or that a process node is down, and fail over to the next node in line. Again, this is really not new; what's new is the capability being available in the x86 space. That's a beautiful thing, and it means the customer/consumer wins.
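A minimal sketch of the failover pattern described above, with the complexity living in the application rather than the hardware (the node names, the NodeDown exception, and the dispatch helper are all invented here for illustration): the application simply walks a list of process nodes and moves on whenever one has failed.

```python
class NodeDown(Exception):
    """Simulates a request hitting a dead process node."""

# Pretend set of nodes that have suffered a hardware failure.
DOWN = {"node-a"}

def call_node(node, request):
    """Send a request to a single process node (stubbed out here)."""
    if node in DOWN:
        raise NodeDown(node)
    return f"{node}:{request}"

def dispatch(nodes, request):
    """Try each node in order; a failed node is simply skipped,
    mirroring 'the application moves to the next online node'."""
    for node in nodes:
        try:
            return call_node(node, request)
        except NodeDown:
            continue  # treat the node as disposable and try the next one
    raise RuntimeError("all nodes down")

print(dispatch(["node-a", "node-b", "node-c"], "req1"))  # node-a is skipped; prints "node-b:req1"
```

Note that no single node is made redundant: losing one just shrinks the pool, which is exactly why the underlying boxes can be cheap commodity hardware.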

Mainframe Tech (3, Insightful)

maz2331 (1104901) | more than 5 years ago | (#23548787)

So, basically all we're doing is taking some mainframe tech and moving it to x86 servers. Add in some hardware-based virtualization (say, to run old code on different physical processor technology), mix it with virtualizing the rest of the hardware, and give it a proper hypervisor and you have....

A Z9 mainframe.

Maybe IBM should just make some nice REALLY low-end mainframe-type PC servers with a "clustering" port.

Mainframe tech is great, except it's just too damn expensive, especially when you're not doing enterprise-level data crunching.
