
Intel Confirms Decline of Server Giants

Soulskill posted about 2 years ago | from the bigger-they-are-the-cloudier-they-fall dept.


An anonymous reader writes "A Wired article discusses the relative decline of Dell, HP, and IBM in the server market over the past few years. Whereas those three companies once provided 75% of Intel's server chip revenue, those revenues are now split between the big three and five other companies as well. Google is fifth on the list. 'It's the big web players that are moving away from the HPs and the Dells, and most of these same companies offer large "cloud" services that let other businesses run their operations without purchasing servers in the first place. To be sure, as the market shifts, HP, Dell, and IBM are working to reinvent themselves. Dell, for instance, launched a new business unit dedicated to building custom gear for the big web players — Dell Data Center Services — and all these outfits are now offering their own cloud services. But the tide is against them.'"


152 comments

Boing. (-1)

Anonymous Coward | about 2 years ago | (#41316487)

That's right, I said it.

If Google sold servers... (5, Interesting)

denis-The-menace (471988) | about 2 years ago | (#41316525)

If Google sold servers, HP and Dell would die overnight.

Just the "12volt-only" power supplies with built-in batteries with "12volt-only" motherboards makes them more reliable than anything out there.

HP and Dell either can't or won't license this from Google.

Re:If Google sold servers... (1, Flamebait)

postbigbang (761081) | about 2 years ago | (#41316685)

Oh? Inside your desktop or 1U/etc server is a 12V power supply, and 5vdc, too. License? This isn't about licensing, it's about density and uniformity.

You can put a 12v battery into your machine, too. It's allowed.

Re:If Google sold servers... (3, Informative)

denis-The-menace (471988) | about 2 years ago | (#41316983)

In Google servers, the power supply makes only +12 volts.

There are no -12V, +5V or -5V rails.
Instead, there are DC-to-DC converters on the motherboard.

Re:If Google sold servers... (0)

Anonymous Coward | about 2 years ago | (#41317915)

the location, inside the box, of the 12V-to-5V DC-DC converter hardly seems interesting...

Re:If Google sold servers... (3, Interesting)

labradore (26729) | about 2 years ago | (#41319525)

What's the point?
1. You use fewer and cheaper parts in the power supply.
2. You have fewer and shorter cables.
3. You use 5V and 3.3V regulators that are the right size for the job, which saves space and material.
4. You get to choose where to put these regulators, so heat management can be more optimal.
5. It's easier to integrate the 12V battery with the space saved.

Re:If Google sold servers... (1)

Jake73 (306340) | about 2 years ago | (#41316695)

License what? The ability to run from 12v power?

I'm pretty sure my old Atari 400 and Atari 800XL both ran from DC power supplied from a brick. What's new about that? Nearly every laptop runs from DC power and has a built-in battery.

Re:If Google sold servers... (0, Redundant)

denis-The-menace (471988) | about 2 years ago | (#41317005)

In Google servers, the power supply makes only +12 volts.

There are no -12V, +5V or -5V rails.
Instead, there are DC-to-DC converters on the motherboard.

(Too bad I can't edit posts like on Reddit.)

Re:If Google sold servers... (3, Insightful)

postbigbang (761081) | about 2 years ago | (#41317089)

Other motherboards make use of similar DC-DC converters, and have for a long time. It's nice to have a 12VDC bus; it makes things more dense. But it's neither innovative nor unique. Instead, it's all about density and design for a specific purpose. These aren't retail-able machines. And there are now luscious racks you can obtain with lots of dense Intel, AMD, and even ARM-powered systems. If you have the application, someone has a design.

It might be a good design for you, and not for others.

Re:If Google sold servers... (0)

Anonymous Coward | about 2 years ago | (#41317017)

But that's different; it wasn't a server.

Just like Apple owns grid icons ... just on a phone. Not desktop.

Re:If Google sold servers... (4, Interesting)

fm6 (162816) | about 2 years ago | (#41317187)

Sorry, you're wrong. Wish you were right.

I've always been appalled by the way PCs rely on big, hot, wasteful, noisy internal power supplies. When IBM entered the workstation market 30 years ago (oh, Lord, that makes me feel old), I worked for a company that made a pre-PC x86 system [computinghistory.org.uk] that relied entirely on external, passively cooled power supplies. To me, this was clearly the way of the future, but once IBM entered the market, everything had to be IBM-compatible, even the way the power system worked, because if you couldn't use IBM-compatible power supplies, your system cost too much to build. (I once had to throw out a perfectly good Zenith PC with a blown PS; although it was mostly IBM-compatible, its power supply was proprietary and cost too much to replace.)

So, Google can't go into the hardware business, because their machines would cost too much and would rely too much on proprietary infrastructure. Easier to justify using your own technology regardless of cost when you're gigantic and profitable.

HP and Dell's nightmare isn't Google. It's cloud computing in general. The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware. Those that don't, buy cheap no-name hardware.

Low- vs. High-level cloud services (3, Informative)

DragonWriter (970822) | about 2 years ago | (#41317713)

The cloud providers (which includes Google, if you ignore the fact that they only provide high-level cloud services, unlike Amazon) mostly build their own hardware.

Google provides low-level cloud services (IaaS in the form of Google Compute Engine, PaaS in the form of Google App Engine, RDBMS-in-the-cloud in the form of Google Cloud SQL, bucket-style storage in Google Cloud Storage) as well as higher-level services (all of Google's various apps build on their cloud infrastructure.)

So the Google-Amazon distinction drawn in the parenthetical is inaccurate.

Re:Low- vs. High-level cloud services (1)

fm6 (162816) | about 2 years ago | (#41317773)

I stand corrected.

Re:If Google sold servers... (3, Interesting)

afidel (530433) | about 2 years ago | (#41318593)

Not in the least. Google designs their servers to optimize power usage and absolute lowest cost per compute cycle. Those are not the same goals for every server buyer. For instance, single-threaded performance is a large factor for me, because we run a lot of interactive workloads that are single-threaded or weakly threaded, but Google doesn't really care about single-threaded performance because they're optimizing at the datacenter level. I also care a lot more about the reliability of any given unit: my jobs are mostly traditional single-server jobs, with only my most critical workloads being clustered, so the loss of any given node has a significant impact on my overall reliability, whereas Google can lose dozens of servers a day per datacenter with no impact on their overall operations. Another example is storage: Google uses COTS SATA drives with horrible MTBF stats, and they do so without RAID protection. The only application where that might remotely have a chance of working for me is Exchange 2010, because I have four copies of each database online and the client is seamlessly pointed to a working copy.
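
To put rough numbers on that scale difference, here is a back-of-the-envelope sketch in Python. The fleet size and failure rate are illustrative assumptions, not figures from this thread or from Google:

<ecode>
# Back-of-the-envelope failure estimate. Both inputs are illustrative
# assumptions, not real figures from Google or anyone else.
fleet_size = 200_000          # assumed servers across a large fleet
annual_failure_rate = 0.04    # assumed 4% of servers fail per year

failures_per_day = fleet_size * annual_failure_rate / 365
print(f"Expected failures per day: {failures_per_day:.1f}")   # ~21.9

# The same math for a 10-server shop: 10 * 0.04 = 0.4 failures per year,
# i.e. roughly one failure every 2-3 years -- but each one takes out a
# whole single-server workload, which is why per-unit reliability matters.
</ecode>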

Re:If Google sold servers... (1)

gagol (583737) | about 2 years ago | (#41318923)

Google's server architecture is custom-made for their datacenters and built around their applications. What they could offer is a turn-key datacenter that requires a workload similar to theirs... and it is not their business to do so.

Your first server, in 2012 (3, Informative)

Compaqt (1758360) | about 2 years ago | (#41316543)

Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

These days, especially with FB and Google leading the way on commodity hardware, it's a different story.

So what should you get for your first server? I.e., you're a small company. You've got a couple of laptops. You're outgrowing mutual Samba.

You maybe want a fileserver. Maybe it'll have a few NICs, and a virtual machine on it (Xen?) will do double duty as an external webserver.

So, Core i3, i5, Xeon? Number of processor cores? Forget fast drives, and just buy a lot of memory? Rack? Or tower?

Lockable front (so people can't just come by and reset it)? Hotplug hard drives? (You don't get these if you go the Google build-your-own route.) Redundant hard drives and ECC memory? Or a couple of different commodity-style servers + sharding/rsync?

Is a big 3 server worth it? Or search for your own server case + server power supply, etc.?

Re:Your first server, in 2012 (3, Informative)

Anonymous Coward | about 2 years ago | (#41316591)

Search for your own. I priced one from HP/Dell and it would have cost $6,000 plus. Built it with the same specs for $3,000. That right there is why their server sales are dwindling.

Re:Your first server, in 2012 (3, Insightful)

ard (115977) | about 2 years ago | (#41316683)

With the same specs? With hot-plug drives, true hardware RAID, iLO/iDRAC lights-out management, and a secondary BIOS in case flashing fails?

Get a refurbished HP Gen 5 or 6 server instead of building your own. Performance will be sufficient, don't worry. It's well below $3000, and you get enterprise-quality hardware.

Re:Your first server, in 2012 (2)

whoever57 (658626) | about 2 years ago | (#41316917)

With the same specs? With hot-plug drives, true hardware RAID, iLO/iDRAC lights-out management, and a secondary BIOS in case flashing fails?

Use software RAID and buy from SuperMicro. Yes, $3k will get you a reliable server (perhaps with dual power supplies also).

Re:Your first server, in 2012 (1)

MightyMartian (840721) | about 2 years ago | (#41317015)

I'm having a hard time putting "software RAID" and "reliable" in the same sentence.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317107)

I know. It is crazy! No hardware RAID is running any sort of software on it, right?? That would be batshit crazy!! It is all baked into the fabric of spacetime.

Software RAID can't have battery backup. No sirreee! Those UPS things are not for commodity hardware anyway, only for big iron.

And a journaling file system?? That only exists on the hardware from the big 3! Software RAID 1 implodes into a tiny black hole every time you run your rsync. Everyone knows that!

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317209)

if you can't design (afford?) a hardware raid card... chances are your UPS is gerbil powered.

Re:Your first server, in 2012 (1)

hawguy (1600213) | about 2 years ago | (#41317515)

I know. It is crazy! No hardware RAID is running any sort of software on it, right?? That would be batshit crazy!! It is all baked into the fabric of spacetime.

I trust my single-purpose RAID controller card a lot more than my general purpose operating system to get the write right.

Software RAID can't have battery backup. No sirreee! Those UPS things are not for commodity hardware anyway, only for big iron.

A UPS is not infallible, and your server's operating system is subject to other failures, such as someone yanking the power cord(s), hitting the reset button on the server, or an operating system crash. A hardware RAID card is not subject to any of these failures: if the power is yanked before it writes data, the data remains in the cache to be retried when the disks are available.

And a journaling file system?? That only exists on the hardware from the big 3! Software RAID 1 implodes into a tiny black hole every time you run your rsync. Everyone knows that!

Filesystem journalling is independent of RAID level; most people use a journalling filesystem on top of RAID to protect against filesystem corruption from a server crash, which has nothing to do with RAID level. Typically only the filesystem metadata is journalled, so data corruption is still possible even with journalling. (Data journalling is possible, but rare, since it means writing a second copy of the data. Actually, I guess the RAID controller write cache is somewhat like a data journal.)

Another advantage of hardware RAID is that it's typically much faster than software RAID, especially with RAID-5 and 6. Your write only has to hit the cache on the RAID controller to be "complete", while a less-than-full-stripe write with RAID-5 means reading the stripe, calculating parity, then rewriting the entire stripe, so each write is really 3 I/Os, and they all have to complete before you can declare the write complete. (I'm assuming NVRAM or battery-backed cache RAM; if you use a RAID controller without it, then you get what you deserve.)
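
For anyone who hasn't seen the parity math, here is a toy sketch of the RAID-5 read-modify-write described above, using XOR over in-memory "blocks". It is purely illustrative; real controllers and Linux md do this at the block layer:

<ecode>
# Toy RAID-5 parity math: parity is the XOR of the data blocks in a stripe.
# A partial-stripe update reads old data and old parity, recomputes parity,
# then writes new data and new parity -- the extra I/Os described above.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A 3-disk stripe: two data blocks and one parity block (4-byte toy blocks).
d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = xor_blocks(d0, d1)

# Read-modify-write of d0 only:
new_d0 = b"\xAA\xBB\xCC\xDD"
# new_parity = old_parity XOR old_d0 XOR new_d0 (no need to read d1)
new_parity = xor_blocks(xor_blocks(parity, d0), new_d0)

# Sanity check: recomputing parity from scratch gives the same answer.
assert new_parity == xor_blocks(new_d0, d1)
</ecode>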

Software RAID has its place, but a hardware RAID controller (a real one, not fake-RAID) adds so little to a typical enterprise server's price that it's generally worth using.

Re:Your first server, in 2012 (2)

whoever57 (658626) | about 2 years ago | (#41318445)

I have seen really terrible performance on real hardware RAID cards using enterprise-class hard drives. And, yes, I am 100% certain that it was not a fakeRAID controller card.

Hardware RAID is not a magic bullet for performance, and it comes with a number of disadvantages (your RAID controller dies: good luck getting the data off the disks).

Re:Your first server, in 2012 (2)

afidel (530433) | about 2 years ago | (#41318693)

your RAID controller dies: good luck getting the data off the disks
This is such BS! The RAID controllers from the big three have placed redundant copies of the metadata on the drives for at least a decade. All you need to recover the array in the event of a card failure is to place the drives into another server with the same generation of controller, or to replace the failed controller. Heck, when HP designed their own hardware you could even move an array out of a ProLiant and place it in an MSA array, and it would read the metadata and recognize the RAID configuration.

Re:Your first server, in 2012 (1)

whoever57 (658626) | about 2 years ago | (#41318973)

All you need to recover the array in the event of a card failure is to place them into another server with the same generation controller or replace the failed controller.

Exactly. You have to go out and buy a new controller. In some cases, you have to match the firmware version. In reality, when you buy a controller card, you should probably buy a second card as a spare in case the primary card dies.

There is no such complication using software RAID under Linux. I don't have to ask if the vendor has made provision to make the RAID set portable.

There are a few cases where hardware RAID may give an advantage, but it is (IMHO) a poor use of the money. The same money applied to other parts of your server can give more value.

Re:Your first server, in 2012 (1)

afidel (530433) | about 2 years ago | (#41319027)

Dude, you can still buy any Dell, HP, or IBM RAID controller ever produced, because they sell each model by the millions. The last time I had to match firmware versions was like 8 years ago with an IBM controller; it's never been an issue with HP (there may be potential issues between uncertified controller and disk firmware combinations, but they're a hell of a lot less likely than similar problems with "let's buy a bunch of generic HDDs and pray they all play nicely with whatever controller I bought"). If you consider data reliability an optional feature (or like the disk performance of drives with their write cache disabled), then you don't belong anywhere near anything called a server.

Re:Your first server, in 2012 (1)

hawguy (1600213) | about 2 years ago | (#41318697)

I have seen really terrible performance on real hardware RAID cards using enterprise-class hard drives. And, yes, I am 100% certain that it was not a fakeRAID controller card.

Hardware RAID is not a magic bullet for performance, and it comes with a number of disadvantages (your RAID controller dies: good luck getting the data off the disks).

What kind of workload were you running? As I said, hardware raid is typically faster than software raid, especially for writing to RAID-5/6 volumes. If your workload is mostly read-only, then you may not see much (if any) improvement with hardware RAID.

I use RAID to protect me from server downtime more than to protect my data - even if I have redundant servers, if one server in an HA pair is down, then I have no redundancy left so I use RAID (sometimes with dual controllers), dual power supplies, etc to help ensure that the server stays up. If the server is down anyway, I'm just going to restore data from backup rather than try to recover the data after I replace the card.

Re:Your first server, in 2012 (2)

MasterOfGoingFaster (922862) | about 2 years ago | (#41318755)

RAID cards are great when they work. And when they fail.... well...

I'd much rather depend on ZFS.

Re:Your first server, in 2012 (1)

swalve (1980968) | about 2 years ago | (#41319009)

RAID cards are great when they work. And when they fail.... well...

I'd much rather depend on ZFS.

When they fail, you replace them.

Re:Your first server, in 2012 (4, Interesting)

h4rr4r (612664) | about 2 years ago | (#41317123)

Linux software raid is great.
Proprietary software raid is garbage.

I base this on what I have seen. Linux software raid beats cheapy hardware controllers both in reliability and speed.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317253)

for your porn collection sure... enterprise apps... not so much

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41319121)

You would be surprised. It is used in fortune 500 companies.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317949)

It does beat 'cheap' hardware controllers, but not the enterprise $700-$1500 ones.

Re:Your first server, in 2012 (1)

snowraver1 (1052510) | about 2 years ago | (#41317265)

Software RAID has its advantages. If your controller card blows up, you don't need to procure an identical card. It does have other drawbacks, though.

Re:Your first server, in 2012 (1)

afidel (530433) | about 2 years ago | (#41318735)

You don't need an identical card, just one of the same generation, at least for servers from the big 3.

Re:Your first server, in 2012 (1)

Anonymous Coward | about 2 years ago | (#41317565)

You have to know what you are doing for *any* RAID solution to improve reliability. But, on a low budget Linux server, software RAID is as good as any other choice, assuming you are still spending the same on disks, chassis, and power supply (the most common failure points). Many people have tried to use too cheap of a RAID card and discovered how unreliable those are compared to a basic multi-port controller or mainboard-integrated ports. Those cheap 'hardware RAID' cards are really software RAID too, but with a terrible software stack instead of the standard stuff in a proper server OS.

Linux software RAID is also easier to monitor and manage with zero downtime while doing things like activating a spare drive, swapping out a failed one, or reshaping an array. With low end hardware RAID, you have to use even more arcane CLI tools specific to the RAID card, rather than standard tools like 'mdadm', or sometimes go back to the pre-boot environment and wait for these long operations to complete before you can boot your server again. The BSD or Solaris variants with ZFS have similar characteristics to Linux software RAID, as far as easy online management of disks and recovery.
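
As one illustration of that "standard tools" point, md state is exposed in /proc/mdstat on Linux and can be checked with a few lines of Python. This is a rough sketch: it only scans for the "[UU]"-style status string, and mdadm --detail remains the authoritative view:

<ecode>
# Rough sketch: list Linux md arrays that look degraded by scanning
# /proc/mdstat for status strings like "[UU]" (healthy) vs "[U_]"
# (a missing or failed member). mdadm --detail /dev/mdX is authoritative.
import re

def degraded_arrays(path="/proc/mdstat"):
    degraded = []
    current = None
    with open(path) as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)   # e.g. "md0"
                continue
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    print("Degraded arrays:", ", ".join(bad) if bad else "none")
</ecode>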

However, a pretty decent RAID controller is only around $500-800 depending on cache size and number of SATA or SAS ports. So if you are already springing for a SuperMicro chassis with lots of hot-swap drive bays and a SAS/SATA backplane, you might consider a RAID card instead of a "just a bunch of disks" controller card. It will perform better than software RAID for many I/O heavy server loads, particularly with a proper battery-backed write-back cache. Make sure it is one with a driver module in your intended OS kernel. Steer well clear of anything that requires a third-party driver to be installed!

Re:Your first server, in 2012 (2)

funwithBSD (245349) | about 2 years ago | (#41316973)

Well, sure, you can do all that with Newegg available parts.

Would I?

Depends. If I can scale horizontally, sure. Downsize the spec and build 4 or 5, in case one fails and I have to wait days for a replacement part.

If I have a vertical architecture, then I want a box I can get someone onsite for in 4 hours or less.

And that ain't Newegg; that is a Dell- or HP-sized company.

Re:Your first server, in 2012 (2)

MasterOfGoingFaster (922862) | about 2 years ago | (#41318837)

If I have a vertical architecture, then I want a box I can get someone onsite for in 4 hours or less.

And that ain't Newegg; that is a Dell- or HP-sized company.

Management turned down my plan to have a second server. It was to be the identical model, but without all the disks and redundancy. They figured HP's 4-hour response time would be better than a hot spare server.

Then the crash came.

A nice fellow showed up within 4 hours, with the "most likely" part. It wasn't.
The next day, more parts. Nope.
The next day, two nice fellows showed up and replaced every part but the case. That solved it.

The cost of downtime was so far beyond the cost of the spare server that it wasn't even funny. Hey, this stuff happens, and the HP guys were great. It just took a lot of time to resolve the problem and a spare would have let us do it while the rest of the factory kept working.

Re:Your first server, in 2012 (4, Insightful)

MightyMartian (840721) | about 2 years ago | (#41316723)

As much as anything, I think virtualization is murdering the market. I bought a $3000 server that hosts six VM guests: two Windows installs (one a DC, one an Exchange server) and four Linux. A couple of years ago, I would have needed at least three servers to do it (one for each Windows install) and one for Linux. Admittedly they wouldn't have to have the balls that the new server has, but still, I think we'd be talking about $4000 to $6000 in hardware. Even worse, these are all just basically images sitting on hard drives, so they can essentially be perpetual. In two or three years, when the current server dies or I decide I need more juice, I just move the VM images over and away I go, and with hardware prices the way they are, I doubt the next-generation server will cost any more than the one I have now, and maybe even less.

Factor in the cloud, VPS hosting, and so on, and the demand for servers will inevitably drop.
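
For a sense of what that consolidation looks like day to day, here is a minimal sketch using the libvirt Python bindings to list the guests on one KVM host. The qemu:///system URI and the libvirt-python package are assumptions about the setup; VMware and Xen have their own tooling:

<ecode>
# Minimal sketch: list the guests on a KVM host via libvirt.
# Assumes the libvirt-python package is installed and that the local
# qemu:///system hypervisor URI is reachable; adjust for your own setup.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, max_kib, mem_kib, vcpus, cpu_ns = dom.info()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
        print(f"{dom.name():20s} {running:8s} {vcpus} vCPU  {mem_kib // 1024} MiB")
finally:
    conn.close()
</ecode>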

Re:Your first server, in 2012 (1)

blind biker (1066130) | about 2 years ago | (#41317551)

What VM do you use? I am quite ignorant when it comes to virtualization; I left IT (I am now back in academia) before virtualization became big.

Re:Your first server, in 2012 (2)

SuperQ (431) | about 2 years ago | (#41317807)

I run a co-op VM cluster on Ganeti. We bought 3 SuperMicro 1U single-socket machines (12-core AMD, 64GB of RAM) for about $7,000. We have about 60% of our capacity rented out. The nice part is that we allocate based on 1GB RAM slices, so you get a pretty powerful minimum server.

Re:Your first server, in 2012 (1)

rtb61 (674572) | about 2 years ago | (#41318839)

Makes you think what the cloud is doing to the OS server market. It seems only the M$-managed parts of the cloud make M$ any real money, and the rest of the cloud is running OSes that keep revenue within those parts of the cloud controlled by those operators. That's taking the point of view that the cloud is a whole and not really as separated as it is made to appear, because you can tie your services to more than just one operator, especially considering the risks of the cloud.

Re:Your first server, in 2012 (2)

hawguy (1600213) | about 2 years ago | (#41316957)

Search for your own. I priced one from HP/Dell and it would have cost $6,000 plus. Built it with the same specs for $3,000. That right there is why their server sales are dwindling.

The difference is not always so dramatic.

My local whitebox builder can put together hardware equivalent to a Dell R720: dual E5-2620 CPUs, 32GB RAM, dual 1TB disks with onboard RAID (i.e. fake RAID) for $2800 with a one-year carry-in warranty. Dell charges $3566 for the equivalent server but includes a 3-year next-business-day on-site warranty.

So the Dell costs $766 more, or think of it as about $20/month for on-site service.
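
Spelled out, using the prices quoted above and Dell's 3-year warranty term:

<ecode>
# The arithmetic behind the "$20/month" framing above.
whitebox = 2800    # local builder, 1-year carry-in warranty
dell = 3566        # Dell R720 quote, 3-year NBD on-site warranty
months = 36

premium = dell - whitebox
print(f"Premium: ${premium}, or about ${premium / months:.0f}/month")  # $766, ~$21/month
</ecode>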

If you're a large shop (or a very small shop) and don't mind taking care of motherboard swaps, etc. yourself, then paying extra for Dell's support probably isn't worth it. But if you're a small shop with a dozen servers, you may not want to dedicate one of your (few) sysadmins to a day of unracking the server and driving it across town for support, or procuring a replacement motherboard (if it's still available in 2 years) and swapping it out himself.

And when I buy the Dell, I'm less worried about build problems, like leaving a tangle of power cords dangling in front of the cooling fan (which I've seen happen on whitebox builds).

Re:Your first server, in 2012 (1)

hjf (703092) | about 2 years ago | (#41317127)

I don't know what you got. I got an IBM x3200 M3, quad-core Xeon, IMM (integrated management module), SAS controller, hot-plug bays, 2 gigabit NICs and 2GB RAM for $1000.

If I had gone the route of IBM hard drives and RAM it would have doubled the price, but I just got Kingston ECC memory ($60 for 8GB) and some SATA HDDs (I don't need SAS).

The killer feature for me? The IMM is connected to the serial port, so I can SSH into the IMM and get a Linux console (and also get to the BIOS, UEFI actually, over Ethernet).

Re:Your first server, in 2012 (1)

toejam13 (958243) | about 2 years ago | (#41316655)

The problem is that you have to support all of that equipment you just threw together all piecemeal-like. Do you have spare parts available? If no, how much does it cost to have them shipped overnight? Are they still available via retail channels or do you have to dredge through eBay? How much does it cost to purchase and store spare inventory? Do you have the equipment to test for failed components without the possibility of frying other equipment?

Those "Big Three" server companies charge more because of service and support so you don't have to worry as much about those things. RMA and forget. And yeah, I'm saying that with a straight face.

There are times where a company is small enough to where your tech has enough idle time to deal with a white box server. Other times, your techs are better utilized doing other work.

Re:Your first server, in 2012 (2)

h4rr4r (612664) | about 2 years ago | (#41316801)

Buy from the Big Three, but get it refurbished.
You can get them with the original 3-year, 4-hour warranty still in place. Extend it if you need that, or better yet buy another one and there are your spare parts.

Re:Your first server, in 2012 (1)

WhiteDragon (4556) | about 2 years ago | (#41316847)

The problem is that you have to support all of that equipment you just threw together all piecemeal-like. Do you have spare parts available? If no, how much does it cost to have them shipped overnight? Are they still available via retail channels or do you have to dredge through eBay? How much does it cost to purchase and store spare inventory? Do you have the equipment to test for failed components without the possibility of frying other equipment?

Those "Big Three" server companies charge more because of service and support so you don't have to worry as much about those things. RMA and forget. And yeah, I'm saying that with a straight face.

There are times where a company is small enough to where your tech has enough idle time to deal with a white box server. Other times, your techs are better utilized doing other work.

The Big 3 have the same problems. I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore. On the other hand, installing a third-party part voids the warranty anyway.

Re:Your first server, in 2012 (1)

drsmithy (35869) | about 2 years ago | (#41317061)

I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore.

You'd have to be talking about a machine at least 5 years old.

Re:Your first server, in 2012 (1)

cdrudge (68377) | about 2 years ago | (#41318787)

The Big 3 have the same problems. I've seen lots of IBM servers have failed RAID controller batteries, which IBM won't replace under warranty because they're "consumable", and won't replace for a fee because they aren't available anymore. On the other hand, installing a third-party part voids the warranty anyway.

Under 15 USC 2302(c), they cannot require that original equipment be used. If the 3rd-party component (in this example, a battery) can be shown to have caused damage, then they may have grounds to deny a warranty claim. But if you install a battery and a drive or stick of memory goes south, then they don't have much ground to stand on in denying the claim.

Re:Your first server, in 2012 (1)

afidel (530433) | about 2 years ago | (#41318867)

That particular problem was solved a few years ago when they introduced flash-backed write cache. Basically it's a supercap or a bank of regular caps that will power the controller long enough to push the RAM contents into a flash module. I won't buy anything else, and in fact HP stopped offering battery-backed units with the Gen8 servers.

Re:Your first server, in 2012 (2)

heypete (60671) | about 2 years ago | (#41316953)

I'm only really familiar with SuperMicro products, but they offer a pretty standard warranty [supermicro.com] for their servers. Since they use pretty standard components, rather than vendor-specific stuff or firmware-locked drives (see my other post), spare parts are pretty easy to come by. They had all the standard features like IPMI ("Lights Out"), redundant power supplies, etc.

RMAing broken hard disks to Sun was an exercise in frustration and delays. It literally took weeks to get a hard disk replaced under warranty.

Dell premium support (whatever they call it) for their Optiplex systems was great, but we didn't use Dell servers because they were too expensive. Only downside: their desktops used some Dell-specific variant of the ATX power supply plug; if we had issues on an out-of-warranty system we'd have to buy a new power supply at an inflated price. It made testing potentially-broken systems considerably more difficult.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317509)

RMAing broken hard disks to Sun was an exercise in frustration and delays. It literally took weeks to get a hard disk replaced under warranty.

You think that was bad? Try RMAing a broken tape drive to Sun.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317039)

This 'shipped overnight', '2-hour response'... stuff is unprofessional.

Our servers deliver a service. A 2-hour service outage is way too much. Each server has a hot standby. To keep that affordable, we keep the servers relatively cheap through scalable software design.

If things break, I buy an upgrade in speed. I make sure that I don't rely on old hardware.

Re:Your first server, in 2012 (1)

h4rr4r (612664) | about 2 years ago | (#41317155)

How many layers deep does that go?
Sure you can flop over to a hot spare, but getting parts in 4 hours or NBD is still valuable compared to ordering them and waiting 3 days. Lots can happen in those 3 days.

Re:Your first server, in 2012 (1)

MightyMartian (840721) | about 2 years ago | (#41317225)

To some extent virtualization has done away with even this. Frankly, I doubt I will ever run a server that isn't a guest, unless I'm looking at something like a dedicated backup server (which I have right now) or some very high-capacity database server (for my business's needs, I can't see that happening any time in the near future). So for most of my needs, I'd be buying something with good RAID, fast drives, and lots of RAM and CPU that I can install VMware or Debian with KVM or Xen support on (running KVM right now). The guests won't know the difference between Dell, HP or something I put together on my own. In the medium term, I'm looking at two NASes, one primary, one failover, and the virtualization server can do it that way.

Re:Your first server, in 2012 (1)

gravyface (592485) | about 2 years ago | (#41316659)

...looks a lot like the one from 2008. Big three = hardware warranty and support: a drive dies, the Dell guy's there in less than 4 hours. That covers the entire lifecycle of the server (3-5 years) while it's in production and playing a mission-critical role. Virtualization/consolidation/cloud are whittling away at the server market, but it's never going to go away. Right now I'm dealing with an EC2 instance that won't start, and I can't detach the volume to try to snapshot it or mount it to another new instance... yeah, yeah, "b-b-but you don't have an Elastic Load Balanced, Cloud Reach-around setup?". Well, this isn't a mission-critical server and nightly backups are good enough, but it's still annoying to me and the end-users. And at ~$100 a month (reserved medium Windows EBS instance), I could've leased a new low-end PowerEdge over 3 years...

Re:Your first server, in 2012 (1)

denis-The-menace (471988) | about 2 years ago | (#41316671)

Maybe something like a QNAP?

Re:Your first server, in 2012 (1)

Anonymous Coward | about 2 years ago | (#41316819)

Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

These days, especially with FB and Google leading the way on commodity hardware, it's a different story.

So what should you get for your first server? I.e., you're a small company. You've got a couple of laptops. You're outgrowing mutual Samba.

You maybe want a fileserver. Maybe it'll have a few NICs, and a virtual machine on it (Xen?) will do double duty as an external webserver.

So, Core i3, i5, Xeon? Number of processor cores? Forget fast drives, and just buy a lot of memory? Rack? Or tower?

Lockable front (so people can't just come by and reset it)? Hotplug hard drives? (You don't get these if you go the Google build-your-own route.) Redundant hard drives and ECC memory? Or a couple of different commodity-style servers + sharding/rsync?

Is a big 3 server worth it? Or search for your own server case + server power supply, etc.?

If you're asking these kinds of questions to try and piecemeal your company's file server together from spare parts, clearly you've never had to deal with an outage with a 4-hour SLA window...

One does not pay the premium for hardware from the Big Three because it's a bargain. One pays for the support engine behind it (and even if it sucks or fails, "Big Three" translates to "CYA" when the SHTF...)

Re:Your first server, in 2012 (1)

DragonWriter (970822) | about 2 years ago | (#41317737)

One does not pay the premium for hardware from the Big Three because it's a bargain.

Of course you do, if it's a rational decision. You buy it because the expected combined cost of the hardware + support + the expected downtime and other losses despite the support is lower than with the available alternatives. It's bargain hunting, just with a wider scope of costs included in the analysis than just the sticker price of the hardware.

Re:Your first server, in 2012 (3, Insightful)

drsmithy (35869) | about 2 years ago | (#41317025)

Is a big 3 server worth it?

Almost certainly. The problem is most techies - especially young ones - only look at a handful of specifications (CPU, RAM, # disks) and the sticker price, because they think their time is free.

Re:Your first server, in 2012 (2)

Todd Knarr (15451) | about 2 years ago | (#41317689)

Or we think that our time costs, but that it costs less than business downtime does. If you depend on the vendor and their support contract, you're impacted for however long it takes them to come out. They won't typically let you keep spares, so when a part breaks, that box is impaired or off-line for whatever your contract response time is, and there's nothing you can do about it. But if it's a white-box server that can be worked on in-house, you can typically keep spares on the shelf. It may cost more in admin/tech time than the support contract would, but you get the choice of paying the time and getting the box back on-line in an hour instead of anywhere from 4 hours to next-day. And you get the option of saying "Not worth messing around with. Grab a new box, spin it up, and we'll figure out what's broken with this one after we're back on-line." We techies don't think our time is free; we just don't make the common management mistake of thinking that down-time waiting for a vendor response is free. And usually our time costs a lot less than the down-time would.

Re:Your first server, in 2012 (1)

drsmithy (35869) | about 2 years ago | (#41318353)

Or we think that our time costs, but that it costs less than business downtime does. If you depend on the vendor and their support contract, you're impacted for however long it takes them to come out. They won't typically let you keep spares, so when a part breaks, that box is impaired or off-line for whatever your contract response time is, and there's nothing you can do about it. But if it's a white-box server that can be worked on in-house, you can typically keep spares on the shelf. It may cost more in admin/tech time than the support contract would, but you get the choice of paying the time and getting the box back on-line in an hour instead of anywhere from 4 hours to next-day.

If your system is that important, then approaching the problem by keeping spare parts on-site is Doing It Wrong. You need proper redundancy.

On top of that, which vendors won't let you buy spare parts?

And you get the option of saying "Not worth messing around with. Grab a new box, spin it up and we'll figure out what's broken with this one after we're back on-line.".

So exactly the same as name-brand hardware then.

We techies don't think our time is free, we just don't make the common management mistake of thinking that down-time waiting for a vendor response is free.

This is what's called a non-sequitur.

Re:Your first server, in 2012 (0)

Anonymous Coward | about 2 years ago | (#41317647)

You can get a Dell PowerEdge for 500-1000 dollars that is pretty nice for a small business, with an enterprise support contract. Perfect for a lot of vertical-market accounting apps.

I used to sell these when I worked for an accounting software company (for glass shops). It doubled as a file server, database server, print server, VPN host, etc.

Re:Your first server, in 2012 (2)

Type44Q (1233630) | about 2 years ago | (#41317673)

Back in the day (say, 2008 as in the article), if you wanted to buy a server, you'd buy one from the big three.

If you wanted a piece of shit (and let's be fair; there are plenty of times when a piece of shit is exactly what a situation requires), then yes, a server from the big three was the way to go. If, however, you wanted something "better" than that (the quotes are due to the admittedly subjective use of the word), you ordered a Supermicro or Intel server board, server case, high-quality power supplies, etc., etc... and you never looked back (not if you belonged anywhere near a server, anyway!).

The servers from the Big Three were truly low-quality dogshit compared to what just about any reasonably competent and knowledgeable systems engineer could slap together in a couple of hours using quality off-the-shelf parts costing a fraction as much.

I heard all the arguments against this back then (usually from extremely incompetent PHBs); those arguments failed to hold much water then, and they hold even less now, looking back on it.

No surprise. (5, Interesting)

heypete (60671) | about 2 years ago | (#41316611)

Why bother with branded parts made by an ODM when you can buy directly from the ODM?

My old workplace had (has, probably) a fairly beefy Sun server with a whole bunch of disks. They used it as a RAID-based storage server for a bunch of lab data. As hard disks do on occasion, one would crap out. The server wouldn't take ordinary disks, though: it would only accept Western Digital disks with some Sun ID code baked into the firmware -- rather than simply being able to buy a few WD RAID-friendly disks ahead of time, we had to jump through Sun's hoops to get disks replaced under warranty. This was usually a multi-week process, during which the array with the failed disk was running on its hot spare -- hardly ideal. That was the last time we bought Sun systems.

At some other point, we were planning on setting up a few more storage servers for backup data. Dell's price for a storage system, including firmware-locked drives, was about triple the cost of doing it ourselves with SuperMicro servers, MD-based software RAID, and RAID-friendly disks. We ended up buying two of the SuperMicro-based systems and putting them in different buildings for semi-offsite backup (the concern was the server room catching fire, not a meteor affecting the whole city). The only extra step during setup was putting the disks in their caddies; the Dell systems came with the disks pre-installed. That took about 5 minutes per server. Whoop-dee-doo.

The Dell servers (with firmware-locked disks) restricted our options and cost substantially more than doing it in-house. We'd have been stupid to go with their products, as we'd have been locked to that vendor for the life of the servers.

Sure, we had Dell Optiplex systems as the desktop workstations for researchers as they were inexpensive, reliable in the lab, and essentially identical (useful for restoring system images from one computer to another), but their server stuff is stupidly overpriced.

The SuperMicro servers were much more "open" in that they used pretty bog-standard parts and didn't have stupid anti-features like firmware locking.

Re:No surprise. (0)

Anonymous Coward | about 2 years ago | (#41316875)

First, a RAID array does not "[run] with a hotspare." When a failure occurs, the hotspare becomes a fully integrated member of the array, at which point you would be running without a hotspare, which on a redundant array isn't that much of a problem considering the Dell replacement would be there within 4 hours of reporting/determining a hardware failure.

Second, Dell servers do not have "firmware-locked disks." I've never heard of such a thing. It's a pretty absurd concept that you could only have OEM hard disks in your box, and an unrealistic expectation that clients would comply.

Finally, hardware RAID is leaps and bounds above software RAID. There's a reason it's cheaper to go with software...

You might have saved money up front, but over the life of the server, you could potentially lose much more when you consider catastrophic hardware failure which would be fully covered under the warranty of the Dell box.

Re:No surprise. (3, Informative)

heypete (60671) | about 2 years ago | (#41317177)

First, a RAID array does not "[run] with a hotspare." When a failure occurs, the hotspare becomes a fully integrated member of the array, at which point you would be running without a hotspare, which on a redundant array isn't that much of a problem considering the Dell replacement would be there within 4 hours of reporting/determining a hardware failure.

It took Sun 3+ weeks to send us a replacement hard disk under warranty and required multiple phone calls. This happened on multiple occasions and was one of the main reasons we decided to stop buying Sun servers.

Yes, the spare became an integrated member of the array. That's true. My point was that the hot spare was now a member of the array and we had no remaining spare disks in the array. Since the server hardware only allowed drives with the Sun firmware, we couldn't keep a supply of spare disks around to swap into the arrays as needed.

Second, Dell servers do not have "firmware-locked disks." I've never heard of such a thing. It's a pretty absurd concept that you could only have OEM hard disks in your box, and an unrealistic expectation that clients would comply.

They did [dell.com] : "In the case of Dell's PERC RAID controllers, we began informing customers when a non-Dell drive was detected with the introduction of PERC5 RAID controllers in early 2006. With the introduction of the PERC H700/H800 controllers, we began enabling only the use of Dell qualified drives."

Same thing with Sun, at least at that point in time.

Finally, hardware RAID is leaps and bounds above software RAID. There's a reason it's cheaper to go with software...

Software RAID was perfectly adequate for our needs: as backup servers they didn't need to have the utmost performance. As a bonus, we weren't reliant on a specific make and model of hardware RAID card: we could connect the array to any system running MD. Even under heavy load the demand on the CPU was negligible.

The Sun server was the main Samba share for the lab: lab instruments would write data to it and researchers would access that data on their desktops. It also used software RAID with multiple arrays set up. CPU usage was similarly low, even at high loads, and it worked quite satisfactorily for the lab.

You might have saved money up front, but over the life of the server, you could potentially lose much more when you consider catastrophic hardware failure which would be fully covered under the warranty of the Dell box.

SuperMicro offered a comparable warranty, so that wasn't really an issue.

Re:No surprise. (1)

h4rr4r (612664) | about 2 years ago | (#41317183)

All the big storage vendors restrict you to their drives. Try chucking random drives into an EMC storage box and tell me how it works out.

Re:No surprise. (1)

zlives (2009072) | about 2 years ago | (#41317325)

There are also some chassis warranty issues (EqualLogic) when not using Dell-bought drives... I have run into this, which IMHO is completely asinine, but that's the rule.

Dell doesn't honor quotes (1)

Spazmania (174582) | about 2 years ago | (#41316647)

At the beginning of August I got a quote from Dell for 2 R710 servers and 4 R610 servers. Three weeks later I placed the order. The response? Sorry, we're not selling those any more; you have to buy the R720s instead, and they're more expensive.

So, sorry, Dell. I won't be considering you for the upgrades to the other 200 servers I manage after all. A pity, because HP just pissed me off with the DL380p Gen8, which can hold 16 drives but has no RAID card that can use more than 8.

Re:Dell doesn't honor quotes (1)

houstonbofh (602064) | about 2 years ago | (#41316737)

Intel makes a nice server platform that many people OEM. I have bought a few from System76, even when I didn't need Linux. Good support and better price.

Re:Dell doesn't honor quotes (1)

gagol (583737) | about 2 years ago | (#41319435)

Have you tried their laptops? I badly need one; they look great and the prices are great. Need feedback here!

Sorry for offtopic'ing!

Re:Dell doesn't honor quotes (0)

Anonymous Coward | about 2 years ago | (#41316811)

I order A LOT of HP servers (they are my favorite ODM).

First, the DL380p Gen8 can have 25 hard drives....
Second, under additional storage adapters, if you are customizing, you can pick the P822 controller... which can handle over 200 HDDs.

Re:Dell doesn't honor quotes (1)

Spazmania (174582) | about 2 years ago | (#41318257)

The P822 can handle 200 hard drives as long as 192 of them are in external chassis. It can handle exactly 8 drives inside the DL380p. It is not compatible with HP's internal SAS expander card the way the previous generation of Smart Array controllers was.

Also, the 25-drive version is not a DL380p Gen8, it's a DL380e Gen8. The E (Economy) series is a distinctly lower-end box with a maximum processor speed of 2.4GHz.

Re:Dell doesn't honor quotes (1)

Spazmania (174582) | about 2 years ago | (#41318287)

But if you've implemented a DL380p (not e) with the expansion drive chassis to bring it to 16 drives and gotten all 16 to work on a single controller then by all means tell me what configuration you used. If you've found the hidden magic, I'll be only too happy to eat my words.

Re:Dell doesn't honor quotes (1)

Buelldozer (713671) | about 2 years ago | (#41318827)

FIFTH? (3, Interesting)

Anonymous Coward | about 2 years ago | (#41316651)

Let me get this right. Google, who builds all of their servers in-house, exclusively for their own use (not for resale), is the fifth largest buyer of Intel server chips in the world?

That sure paints a picture about the sheer size of Google's data center operations.

Re:FIFTH? (1)

Penguinisto (415985) | about 2 years ago | (#41317279)

It also paints a picture of just how much pr0n, lolcats, and pointless facebook updates actually exist on Earth.

Pretty depressing, isn't it?

Eh... I don't see this as a huge deal, really.... (2)

King_TJ (85913) | about 2 years ago | (#41316669)

While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services. If it negatively impacts sales, it's only to the extent that efficiency is improved. (EG. Joe Businessman who once bought a server for his office of 10 employees skips it, in favor of cloud computing solutions. But it turns out his needs are small enough so they can share the load with 1-2 other small businesses like his, all on a single server in the cloud.)

In my opinion, Dell has the right idea -- changing the focus on who their customer is for their server products. Beyond that, what's really news here?

Going out on a bit more of a limb though? I'm really of the opinion that cloud services are over-hyped as the "in" thing for every business. Once companies migrate heavily to cloud hosted solutions and use them for a while, a fair number will conclude it's not really beneficial. Then you'll see a return to the business model of running in-house servers again. (Granted, those servers might be smaller, with lower power consumption than in the past. Little "microservers" handle many of the basic file and print sharing work companies used to relegate to full size rack mounted systems in the past.)

But my own experience with cloud migrations tells me that it's not so great, 9 times out of 10. For example, my boss has been using the Neat document management software for a while now to scan in all of his personal receipts and documents at home. Neat now offers "NeatCloud" so you can upload your whole database and then access your docs via an iPhone or iPad client, or even scan something new in by simply taking a picture of it. Sounds great, but in reality, he had nothing but problems with it. The initial upload tied up his PC for the better part of his weekend, only to report that some documents couldn't be converted or uploaded properly. He had close to 100 random pages of existing documents thrown in a new folder the software generated, to hold the problem ones. The only "fix" for this was to click to open a trouble ticket for EACH individual document that failed, so someone at Neat could examine it manually and correct whatever issue prevented their system from properly OCRing and uploading it. Clearly, that wasn't much of a solution! He tried, repeatedly, to get someone to remote control into his PC to do some sort of batch repair for him -- but after a couple promises to call back "the next day" to look at it, nobody ever did. Now, all Neat can tell him is they have another update patch coming out for the software in the next week, and to disable cloud uploads until that time.

Or take the recent migration a small office did from GoDaddy pop3/smtp email with Outlook to Google hosted mail. I usually help these guys with their computer issues but they thought they could tackle this migration on their own. Turns out, they wound up with a big mess of missing sub-folders of mail in Outlook on the owner's machine. After a lot of poking around, I discovered part of the problem was due to characters in the folder names that Google Apps didn't consider valid. When it hit one of those during the mail migration, it just skipped the whole mail folder upload with an error. (Did Google's migration wizard utility even warn about this in advance or offer to help rename the problem folders before continuing? Heck no!)

For that matter, take what you'd think is pretty basic functionality with cloud-based data backup. I've run into multiple situations now where people used services like MozyPro for their backups, only to discover that a full restore (when a drive crashed) was incredibly slow and kept aborting in the middle of the process, making the data restore essentially impossible. Mozy's solution? They're willing to burn a copy of the data onto optical disc and physically mail it back to you. So much for the whole cloud thing, huh?

Re:Eh... I don't see this as a huge deal, really.. (1)

DragonWriter (970822) | about 2 years ago | (#41317137)

While yes, right now, the tide may be against the server manufacturers -- the cloud still requires them in large quantities to host those services.

Google's position on the list of Intel server-chip buyers makes it clear that the problem isn't for server manufacturers (which Google very much is); it's for server vendors. Sure, the cloud requires servers. But if the people selling cloud services are also building their own servers, that doesn't create a market for server vendors.

Re:Eh... I don't see this as a huge deal, really.. (2)

Todd Knarr (15451) | about 2 years ago | (#41317219)

It may also depend on what kind of servers companies like Google want. Dell, HP and the like produce expensive servers with high-cost maintenance contracts, which look great to conventional business-executive types. Google, OTOH, probably is taking the techie approach of generic white-box servers with no support. They're installing their own OS image on it, and it's not going to be Windows or a commercial Unix, and with all Google's custom software they probably find vendor support all but useless. Ditto hardware support, the idea is to not worry too much about failures and just replace the box, and with generic hardware replacing failed parts is probably cheaper than the support contract would've been.

You've nailed the rest, though. When you depend on "The Cloud", you're depending on someone else to prioritize solving your problem. The problem is that the most effective solution for them is far from optimal for you, and you don't have enough leverage with them to change their priorities. At least when stuff is in-house the people responsible for it answer to you and you can, if needed, go down and rearrange their to-do list in person.

Re:Eh... I don't see this as a huge deal, really.. (0)

Anonymous Coward | about 2 years ago | (#41317641)

Or small businesses could, you know, buy now-cheap servers and skip the whole monthly bill that comes with The Cloud (tm). Hosted email? Sure, but that's not "cloud"--it's been common for years. Backup? Properly done, OK, but the thought of paying per use on a business server is just creepy. Well, that, and you'll eventually get the Verizon Effect. Hosted servers will be cheap until people depend on them and then--price hikes and service reductions. It's the American Way (tm).

iPads! Clouds! LinkedIn! (1)

devphaeton (695736) | about 2 years ago | (#41316675)

Further proof that tablets and the Cloud(tm) are the paradigm shift into the new memesphere. Nobody needs big, bulky Iron from folks like IBM, HP, EMC, etc.

We'll do it all now on clustered iPads! With Retina Displays! Surfing the web is dead, now we're Hangliding in The Cloud(tm)!!!!

Re:iPads! Clouds! LinkedIn! (1)

denis-The-menace (471988) | about 2 years ago | (#41316721)

"The Cloud" is only good as secondary backup if you don't care that it becomes public.

Encrypt it all you want. Access to your data is the hardest hurdle and by using the could you give it away.

Re:iPads! Clouds! LinkedIn! (1)

devphaeton (695736) | about 2 years ago | (#41316837)

"The Cloud" is only good as secondary backup if you don't care that it becomes public.

Encrypt it all you want. Access to your data is the hardest hurdle and by using the could you give it away.

But.. but.. but... smartphones and virtualization and...and...and...free community wireless internet over dark fiber!!!

(Yes, I'm just being silly. Having a slow day at work and the free coffee sucks)

Re:iPads! Clouds! LinkedIn! (1)

DrVomact (726065) | about 2 years ago | (#41317557)

"The Cloud" is only good as secondary backup if you don't care that it becomes public.

Encrypt it all you want. Access to your data is the hardest hurdle and by using the could you give it away.

I'm thinking that people who want to "be in the cloud" don't think about stuff like encrypting. "What, me--worry? I'm using the cloud!" En/Decrypting is work, and the whole idea of the cloud is to avoid work. If any crypto is being done, it's probably a service operated by your friendly (non-local) cloud provider, which means it provides no real security at all.

This willingness of businesses to surrender their family jewels—their data—to complete strangers has puzzled me since this type of service came into vogue. But then, I'm also mystified by people's willingness to put their true names and their personal lives on the web and make them accessible to everyone. There is an acute shortage of paranoia among the sheep these days.

What about Netcraft? (2)

ickleberry (864871) | about 2 years ago | (#41316869)

Do they confirm it? Nothing's actually dying until Netcraft says so.

Re:What about Netcraft? (0)

Anonymous Coward | about 2 years ago | (#41316993)

I've been lurking on Slashdot for over a decade and have seen this joke/statement repeatedly, but I've never understood it. Can someone explain it?

Re:What about Netcraft? (2)

ickleberry (864871) | about 2 years ago | (#41317165)

Some lad was trolling once and said "BSD is dying, Netcraft confirms it", citing the latest publication by Netcraft. Then it kind of took on a life of its own, and now nothing's dying until Netcraft confirms it.

Re:What about Netcraft? (0)

Anonymous Coward | about 2 years ago | (#41317307)

Ah, so it's a "you had to be there" kind of thing.

Re:What about Netcraft? (1)

kiwirob (588600) | about 2 years ago | (#41318731)

I was running something like FreeBSD 4.3 with KDE as my primary desktop (refused to go from Win98 to XP, so went BSD instead) at the time of the "BSD is dying, Netcraft confirms it" meme. Believe you me, I was there and I took it very personally at the time; how could someone say my beloved desktop of choice was dying! ...hang on a minute, that was their plan all along... bloody trolls. It was probably those pesky Linux troll users slagging off the original BSD Unix systems in the early 2000s. I get the same thing today from Fandroid trolls slagging off my current BSD distribution, known as OS X and iOS.

demand finally saturated (2)

Thud457 (234763) | about 2 years ago | (#41317013)

see, I told you that electronic data processing was a fad

-- Spencer Tracy, "The Desk Set", 1957

Home built + VMs (1)

heezer7 (708308) | about 2 years ago | (#41317349)

We have 2 in-house-built servers hosting 10 VMs. Waaay less hardware than if it was not virtualized, and miles cheaper than the Dell servers we priced out.

Really Cheaper? (0)

Anonymous Coward | about 2 years ago | (#41317705)

Is it really cheaper to build it yourself once you reach a certain size? We just added a few petabytes for storage and went with IBM. It was costing only a little more than ordering parts and doing it ourselves but it wouldn't have been as neat and efficient. That thing is tightly integrated, from the power supplies to the software.

To have something comparable, we would have had to hire a mechanical engineer and an electrical engineer on top of taking time from the people in house to work on the software side.

Of course, that all made me want to add another SAN at home. My own build came out cheaper than buying something pre-built but that's because off-the-shelf parts and software are sufficient for my own needs.

With Google and Amazon, they hired people to design this stuff, but I don't see every bank, retail chain, or big company creating their own solution.

Dell has one advantage (0)

kurt555gs (309278) | about 2 years ago | (#41317739)

Michael Dell can suck harder and deeper on Ballmer's cock. It's worked in the past.

Diversify (0)

Anonymous Coward | about 2 years ago | (#41318323)

Not a big impact to HP, IBM, Dell. They have since taken on IT Service Management, Service Desks, BPOS, among other servicing lines. More money in servicing than in hardware sales. Labor is cheap, profit margins large. Their shareholders are happy.

Beowulf (0)

Anonymous Coward | about 2 years ago | (#41319527)

I'd say this dude, http://en.wikipedia.org/wiki/Thomas_Sterling_(computing), killed them.
