
Data Storage Capacity Mostly Wasted In Data Center

CmdrTaco posted more than 3 years ago | from the and-they-never-turn-the-lights-off dept.


Lucas123 writes "Even after the introduction of technologies such as thin provisioning, capacity reclamation, and storage monitoring and reporting software, 60% to 70% of data capacity remains unused in data centers due to over-provisioning for applications and misconfigured data storage systems. While the price of storage resource management software can be high, the cost of wasted storage is even higher, with 100TB equaling $1 million when human resources, floor space, and electricity are figured in. 'It's a bit of a paradox. Users don't seem to be willing to spend the money to see what they have,' said Andrew Reichman, an analyst at Forrester Research."
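For scale, the summary's own numbers price the waste directly: $1 million per 100TB is $10,000 per terabyte, fully loaded. A back-of-envelope sketch (the only inputs are the figures quoted above; the rest is arithmetic):

awk 'BEGIN {
    dollars_per_tb = 1000000 / 100          # the summary's fully loaded cost
    for (pct = 60; pct <= 70; pct += 10)    # the quoted unused range
        # pct% of a 100 TB pool is also pct TB
        printf "%d%% of 100 TB unused ties up $%d\n", pct, pct * dollars_per_tb
}'

By that math, the quoted waste is $600,000 to $700,000 of every $1 million spent, which is the article's whole pitch.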


Intentional? (5, Insightful)

Anonymous Coward | more than 3 years ago | (#33058424)

I don't know about your data center, but ours keeps drives well below full capacity intentionally.

The more disk arms you spread the operations over, the faster the operations get, and smaller drives are often more expensive than larger ones.

Plus, drives that are running close to full can't manage fragmentation nearly as well.

Re:Intentional? (5, Insightful)

TrisexualPuppy (976893) | more than 3 years ago | (#33058672)

Yep, that's how we run things at my company. Drives and controllers have fewer files to deal with, and all else assumed equal, you get better performance this way.

You also have to think of the obvious spare capacity. In 2005, my company invested in a huge (at the time) 10TB array. The boss rightfully freaked out when we hit more than 30% usage in 2007. After a couple of years of slow, quasi-linear file growth, usage jumped to 50% in a matter of months. It turned out that our CAD users had switched to a newer version of the software (the CAD group managed their own software) without telling us. The unexpected *DOES* happen, and it would have been incredibly stupid to have been running closer to capacity.

Accounting would probably have had half of us fired if they hadn't been able to do their document imaging, which tends to take up a lot of space on the SAN.

Yet another sad FUD or FUD-esque article based on Forrester's findings.

Mod parent up (0)

Anonymous Coward | more than 3 years ago | (#33058722)

Interesting. Was the culprit all CAD files out of the new rev?

Re:Mod parent up (2, Insightful)

TrisexualPuppy (976893) | more than 3 years ago | (#33058846)

Interesting. Was the culprit all CAD files out of the new rev?

Yes, for the most part. Because of a bad config, they were going from drawings around 1-10MB to drawings over 100MB. That's what happens when you get management to take the IT department out of the software management and configuration equation. We were, of course, still left to sweep up the pieces.

Re:Intentional? (3, Insightful)

KernelMuncher (989766) | more than 3 years ago | (#33059948)

I think the above example is a great reason why you should always over-engineer your storage capacity somewhat. Demand for space can come up unexpectedly and stop the whole show if the capacity isn't there. And if you don't use the storage today, you will definitely make use of it tomorrow; data usage always goes up, not down. So there's ROI for the next fiscal year, when you can make use of the extra capacity.

Re:Intentional? (4, Insightful)

Nerdfest (867930) | more than 3 years ago | (#33058866)

Simply put, over-provisioning is relatively harmless while under-provisioning is very bad.

Re:Intentional? (1)

HungryHobo (1314109) | more than 3 years ago | (#33060314)

I don't know about the workplace of the writer of TFA, but when I worked in a big factory, downtime or a failure from an application (or several applications) running out of disk space could do a million dollars' worth of damage in lost productivity or ruined product (say it gets stuck in a time-sensitive step) in less than half an hour. I heard claims that a full fab down could cost a million in 10 minutes, though that could have been a slight exaggeration.

A million dollars' worth of extra disk space, to significantly cut down the chances of that happening or to give apps their own disk or partition (so one buggy app doesn't bring down 10 more), would barely have made the managers blink.

Re:Intentional? (4, Insightful)

hardburn (141468) | more than 3 years ago | (#33058880)

FTA:

Rick Clark, CEO of Aptare Inc., said most companies can reclaim large chunks of data center storage capacity because it was never used by applications in the first place. . . . Aptare's latest version of reporting software, StorageConsole 8, costs about $30,000 to $40,000 for small companies, $75,000 to $80,000 for midsize firms, and just over $250,000 for large enterprises.

In other words, the whole thing is an attempt to get companies to spend tens of thousands of dollars for something that could be done by a well-written shell script.
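Something like this minimal sketch is presumably what the parent has in mind: a fleet-wide utilization report over ssh. The hostnames and the 80% threshold are placeholders, and a real array would also need LUN, quota, and snapshot accounting that df can't see:

#!/bin/sh
# Report % used for every filesystem on each host; flag anything hot.
THRESHOLD=80
for host in db01 web01 files01; do          # hypothetical hosts
    ssh "$host" df -P -k | awk -v h="$host" -v t="$THRESHOLD" '
        NR > 1 {
            gsub(/%/, "", $5)               # strip % from the Capacity column
            printf "%-10s %-24s %3d%% used\n", h, $6, $5
            if ($5 + 0 > t) print "    WARNING: " h ":" $6 " over " t "%"
        }'
done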

Re:Intentional? (2, Insightful)

dmgxmichael (1219692) | more than 3 years ago | (#33059592)

When I see services advertised at those kinds of rates I can't help but remember P.T. Barnum's slogan: "There's a sucker born every minute."

Re:Intentional? (0, Insightful)

Anonymous Coward | more than 3 years ago | (#33059822)

Clearly the biggest waste is listening to analysts from Forrester Research (or any other useless research company).

If your CxOs were hoodwinked by some con-slutant into buying super expensive storage "solutions" (and stuff like "blades"), then you'd probably also "need" expensive stuff like this to figure out how to allocate or reallocate the _overpriced_ space.

But if you got cheaper storage in the first place, it's better to just buy more storage if you run low on space than to spend lots of money on "solutions looking for problems".

Google, eBay, Yahoo, etc. don't use such stuff. Most companies probably shouldn't either.

Re:Intentional? (1)

nobodylocalhost (1343981) | more than 3 years ago | (#33059546)

Agreed. People keep forgetting that it's not just storage; IOPS matter too. When you are running a cluster with hundreds of VMs, you need to size out storage based on how many IOPS you can get out of the disks instead of how much capacity you can give them. Even if you plan out space just enough for each and every application, if disk IOPS can't keep up at a useful speed, you will get applications that crash, stall, or generally perform horribly.

Re:Intentional? (4, Insightful)

Score Whore (32328) | more than 3 years ago | (#33059770)

Not to mention the fact that over the last few years drive capacities have skyrocketed while drive performance has remained the same. That is, your average drive / spindle has grown from 36 GB to 72 GB to 146 GB to 300 GB to 400 GB to 600 GB, etc. while delivering a non-growing 150 IOPS per spindle.

If you have an application that has particular data accessibility requirements, you end up buying IOPS and not capacity. A recent deployment was for a database that needed 5000 IOPS with service times remaining under 10 ms. The database is two terabytes. A simple capacity analysis would call for a handful of drives, perhaps sixteen 300 GB drives mirrored for a usable capacity of 2.4 TB. Unfortunately those sixteen drives will only be able to deliver around 800 IOPS at 10 ms each. Instead we had to configure one hundred and thirty 300 GB drives, ending up with over 21 TB of storage capacity that is about ten percent utilized.

These days anytime an analyst or storage vendor starts talking to me about thin provisioning, zero page reclaim, etc. I have to take a minute and explain to them my actual needs and that they have very little to do with gigabytes or terabytes. Usually I have to do this multiple times.

In the near future we will be moving to SSD based storage once more enterprise vendors have worked through the quirks and gained some experience.
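The parent's sizing reduces to arithmetic simple enough to sketch in shell. The 150 IOPS/spindle figure is the parent's; the 60% utilization ceiling is an assumption I'm adding, since spindles run at saturation won't hold a 10 ms service time:

target_iops=5000      # the parent's database requirement
per_spindle=150       # the parent's per-spindle figure
max_util=60           # assumed percent ceiling to keep service times sane
# ceiling division: spindles = ceil(target_iops / (per_spindle * max_util/100))
spindles=$(( (target_iops * 100 + per_spindle * max_util - 1) / (per_spindle * max_util) ))
echo "data spindles: $spindles (x2 mirrored = $(( spindles * 2 )) drives)"
echo "capacity bought along the way: $(( spindles * 2 * 300 / 1000 )) TB raw"

That works out to roughly 112 drives and 33 TB raw before hot spares and growth headroom: tens of terabytes bought purely to hit an IOPS target, which is exactly the "waste" the article is counting.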

Re:Intentional? (1)

Dogers (446369) | more than 3 years ago | (#33060126)

You might like to speak to 3PAR. When we got them in, they didn't just ask how much storage we wanted; they wanted to know how many IOPS we needed. Their stuff works on the basis that not all the data is needed all of the time: put hot data on SSD, recent data on SAS/fibre drives, and stuff that's not been touched for a while on SATA.

Re:Intentional? (1)

marcosdumay (620877) | more than 3 years ago | (#33060700)

That's not even talking about the fragmentation that inexorably follows a project sized to exactly what it needs, and all the costs of managing it.

If you are now using 60% of your storage capacity, you are in trouble, since usage can increase quite fast without giving you time to buy adequate hardware. What follows is a hell of problems: partitioning storage servers, reallocating disks, reconfiguring workstations and so on.

But related to the cost of too little storage. (1)

Z00L00K (682162) | more than 3 years ago | (#33058428)

The cost of too much storage isn't bad.

Of course - you may say that it's necessary to delete old data, but in some cases you can't know which old data may be needed again.

Do the math (1, Insightful)

Anonymous Coward | more than 3 years ago | (#33059044)

70% unused space out of the 100TB mentioned in the article leaves us with 70TB.

Think of how much porn 70TB is!

Re:Do the math (1)

Score Whore (32328) | more than 3 years ago | (#33060168)

Think of how much porn 70TB is!

hottiehost$ find . -type f | wc -l
      8433275
hottiehost$ bc
scale=3
8433275*(70/83)
7109250.825

It's just over seven million images...

Shhhhh (0)

Anonymous Coward | more than 3 years ago | (#33058430)

I want more toys.

100TB = $1 million (2, Insightful)

maxwell demon (590494) | more than 3 years ago | (#33058456)

I didn't know that I've got $25,000 worth of storage at home :-)

Re:100TB = $1 million (2, Informative)

phantomcircuit (938963) | more than 3 years ago | (#33058522)

I didn't know that I've got $25,000 worth of storage at home :-)

It's not worth that much in your home, unless you happen to have redundant power supplies and redundant uplinks.

Re:100TB = $1 million (2, Funny)

Luyseyal (3154) | more than 3 years ago | (#33058622)

And "human resources".

-l

Re:100TB = $1 million (3, Funny)

aliquis (678370) | more than 3 years ago | (#33058782)

And "human resources"

"I'll go build my own data center, with blackjack and hookers!"?

Re:100TB = $1 million (2, Funny)

Luyseyal (3154) | more than 3 years ago | (#33058902)

In fact, forget the data center and blackjack!

-l

Re:100TB = $1 million (1, Informative)

Anonymous Coward | more than 3 years ago | (#33059796)

Just get married. Of course that will cost you more in the long run - hookers are billed by the hour.

Re:100TB = $1 million (0)

Anonymous Coward | more than 3 years ago | (#33058972)

And the cost of the SAN infrastructure...DERP

Re:100TB = $1 million (1)

hansamurai (907719) | more than 3 years ago | (#33058956)

That's actually cheap compared to the prices I heard quoted at my company the other day. So sad.

Let's play the odds: (5, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#33058466)

Likelihood that I get fired because something important runs out of storage and falls over (and, naturally, it'll be most likely to run out of storage under heavy use, which is when we most need it up...): relatively high.

Likelihood that I get fired because I buy a few hundred gigs too much, which sit in a dusty corner somewhere, barely even noticed except in passing because there is nobody with a clear handle on the overall picture (and, if there is, he is looking at things from the sort of bird's-eye view where a few hundred gigs looks like a speck on the map): relatively insignificant.

Re:Let's play the odds: (3, Insightful)

qbzzt (11136) | more than 3 years ago | (#33058608)

Exactly, and that's the way it should be. Your CTO wants you to suggest spending a few extra hundred dollars on storage to avoid downtime.

Re:Let's play the odds: (1)

ultranova (717540) | more than 3 years ago | (#33059194)

Your CTO wants you to suggest spending a few extra hundred dollars on storage to avoid downtime.

A few hundred dollars gets you a few terabytes (it's around 163 dollars for a 2 terabyte drive in the first online store I checked), not a few hundred gigabytes. Or are these "enterprise hard drives [wnd.com]"? ;-)

Re:Let's play the odds: (3, Insightful)

wagnerrp (1305589) | more than 3 years ago | (#33059934)

They're not buying the $100 2TB bargain special, they're buying the $300 300GB 15K SAS drive. They don't care how much storage they have, they just want the IOPS.

Re:Let's play the odds: (1)

BlackSnake112 (912158) | more than 3 years ago | (#33061224)

Most CIOs would not risk their job on non-enterprise hard drives. The regular drives may be cheaper, but they may also fail sooner. Data centers and the like are most likely using enterprise-level drives.

That being said, many of us have had enterprise drives fail in under a month and have consumer-level drives that are still going strong after 10+ years.

Re:Let's play the odds: (1)

TubeSteak (669689) | more than 3 years ago | (#33059980)

Exactly, and that's the way it should be. Your CTO wants you to suggest spending a few extra hundred dollars on storage to avoid downtime.

The way we build servers and do storage has changed *massively* over the last 10 years.
Why is it so hard to imagine that storage is going to evolve again?
FTFA

Aptare's latest version of reporting software, StorageConsole 8, costs about $30,000 to $40,000 for small companies, $75,000 to $80,000 for midsize firms, and just over $250,000 for large enterprises.

"Our customers can see a return on the price of the software typically in about six months through better utilization rates and preventing the unnecessary purchase of storage," Clark said.

A minimum of $5,000 per month strikes me as a touch more than "spending a few extra hundred dollars on storage."

Cost/Delay of "Precise" Study vs. Cost of Hardware (1)

billstewart (78916) | more than 3 years ago | (#33061268)

I've got "Precise" in quotes because I'm skeptical that you can ever get really good predictions about the future, or even the present, especially from users. But if you try, it's going to take you a while, and you'll be spending loaded-engineer-salary time and harassed-user time trying to get predictions that the users will tell you are guesses. Meanwhile, you do need to get some disks online, and it's going to take you a while to accumulate tracking data. I'm in that kind of situation now - until there's enough disk and users on the system to get a really good model of users, we won't really know, so we're aiming high.

Re:Let's play the odds: (2, Insightful)

_damnit_ (1143) | more than 3 years ago | (#33059558)

Of course this is the case. This study is as exciting as news that George Michael is gay. There have been plenty of studies to this effect. My company makes tons of money consulting on better storage utilization. [Some Fortune 500 companies I've visited run below 40% utilization.] EMC, IBM, HDS, NetApp and the rest have no real interest in selling you fewer drives. They all make vague, glossy statements about saving storage money, but in reality you need to be wasteful if you want to protect your ass. Think of the things we spend $ on just to get another 9 on the uptime digits: UPS, generators, clustering, DR systems/networks that sit idle, dark fibre between datacenters, RAID 1(+0), RAID 6, tapes, VTLs, storage arrays, redundant Fibre Channel SANs, . . .

From a human perspective, fuzzyfungus is right. Over-engineering is less likely to cost your job than failure. Plus, over-engineering is easy to justify.

Some things are just known to cost money if you MUST ensure that business is not subject to fallibility in hw and sw. The fact that there are 50 TBs unused out of your 200 TB of usable storage really might not mean too much. [Some of the numbers quoted could point to the mirrored side of RAID 1 stripes as wasted. It's a cheap gimmick to make the numbers look worse but still true to a certain extent if the performance difference between R5 and R1 is not needed.] Of course, there are usually low hanging fruit that can be attacked to save real money and prevent cascading costs on the other cost centers mentioned above but there will always be waste. It's the cost of five 9's.

Re:Let's play the odds: (1)

Shotgun (30919) | more than 3 years ago | (#33060434)

NetApp and the rest have no real interest in selling you fewer drives.

Then why is about half of their feature set aimed at helping their customers reduce storage usage (the WAFL filesystem, dedupe, etc.)?

Why have they instituted a systems group to do nothing BUT coach customers in how to reduce disk usage?

There is a LOT of competitive advantage in selling fewer drives.

You know if they were under provisioning (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33058504)

The story would be generating much gnashing of teeth about the evil corporations and the corner cutting that was bringing down our pink unicorns.

Can't win for losing around here.

Overprovisioning (3, Interesting)

shoppa (464619) | more than 3 years ago | (#33058516)

It's so easy to over-provision. Hardware is cheap and if you don't ask for more than you think you need, you may end up (especially after the app becomes popular, gasp!) needing more than you thought at first.

It's like two kids fighting over a pie. Mom comes in, and kid #1 says "I think we should split it equally". Kid #2 says "I want it all". Mom listens to both sides and the kid who wanted his fair share only gets one quarter of the pie, while the kid who wanted it all gets three quarters. That's why you have to ask for more than you fairly need. It happens not just at the hardware purchase end but all the way up the pole. And you better spend the money you asked for or you're gonna lose it, too.

Re:Overprovisioning (5, Insightful)

Maarx (1794262) | more than 3 years ago | (#33058584)

That mother is terrible.

Re:Overprovisioning (1)

Zerth (26112) | more than 3 years ago | (#33058868)

And works in the budgeting dept of a company I'm glad I'm no longer at.

Re:Overprovisioning (0)

Anonymous Coward | more than 3 years ago | (#33058998)

That mother is terrible.

Terrible? That's a *nice* word for the woman that raised Glenn Beck [youtube.com]

Re:Overprovisioning (1, Insightful)

Lunix Nutcase (1092239) | more than 3 years ago | (#33059056)

There's a reason his mom killed herself. Would you want to be known as the one who gave birth to that festering pustule of fat?

Re:Overprovisioning (1)

omglolbah (731566) | more than 3 years ago | (#33059504)

Oh my mom was much more devious.

She would let one of us cut the pie, and the other pick the first piece....

Now imagine a 14- and an 11-year-old using NASA-style tools to divide a piece of pie ;)

Re:Overprovisioning (1, Insightful)

Anonymous Coward | more than 3 years ago | (#33060952)

Oh my mom was much more devious.

She would let one of us cut the pie, and the other pick the first piece....

That's not devious - all moms with even a lick of sense do it that way.

Re:Overprovisioning (0)

Anonymous Coward | more than 3 years ago | (#33058588)

Your siblings must have had a horrible upbringing, with you always taking more than your fair share. Though I'm sure it worked out for you nicely. also your mother is a whore.

Re:Overprovisioning (0)

Anonymous Coward | more than 3 years ago | (#33058762)

He's American so gobbling down 3/4ths of a pie is just a bite-size snack to his oversized gullet.

Re:Overprovisioning (4, Insightful)

Archangel Michael (180766) | more than 3 years ago | (#33058834)

Dad here. Had that fight (or something similar). I asked the kid who wanted it all a simple question: "all or nothing?" Again he said "all", to which I said "nothing".

Of course he rightly cried "Not fair!!!", and I said: you set the rules. You wanted it all, setting up a rule that you didn't want to be fair; I'm just playing by your rules.

Never had that problem again. EVER.

Re:Overprovisioning (1)

MagicM (85041) | more than 3 years ago | (#33058966)

I asked him "all or nothing?"

At that point he was screwed. If he said "nothing", he could reasonably expect to get nothing. His only option was to say "all" if he wanted to get a chance at something.

Re:Overprovisioning (0)

Anonymous Coward | more than 3 years ago | (#33059016)

His fatass porker son didn't need any pie to begin with. The fatty should have been on the treadmill instead of huffing and puffing trying to scam more pie.

Re:Overprovisioning (3, Insightful)

Archangel Michael (180766) | more than 3 years ago | (#33059272)

Nope, he wasn't screwed, because it wasn't the only option; it was a false dichotomy. I gave him a chance to offer another choice, it was just veiled. Kobayashi Maru. He could have thought about it and said "half" even though that wasn't an obvious choice.

I often give my kids tests to break them out of self-imposed boxes (false dichotomies). Pick a number between 1 and 10 .... 1 - no, 2 - no, 3 - no, 4 - no .... 9 - no, 10 - no ... THAT'S IMPOSSIBLE, DAD!!

No it isn't. The number I had in mind was pi.

Raising kids to think for themselves, and outside the "boxes" society tends to put on things, makes them better able to deal with things that don't appear to make sense.

You can dumb down your kids by not challenging them, or you can challenge them every step of the way, in ways that force them to learn more than they know.

Re:Overprovisioning (2, Insightful)

Culture20 (968837) | more than 3 years ago | (#33060782)

At that point he was screwed. If he said "nothing", he could reasonably expect to get nothing. His only option was to say "all" if he wanted to get a chance at something.

If my son (nobly or stubbornly) said "nothing", I'd offer him half or nothing. Parents are allowed to alter the deals. Pray that they alter them further.

Re:Overprovisioning (0)

Anonymous Coward | more than 3 years ago | (#33059174)

Worst analogy ever!

Disk space is free (5, Interesting)

amorsen (7485) | more than 3 years ago | (#33058574)

Who cares if you leave disks 10% full? To get rid of the minimum of 2 disks per server you need to boot from SAN, and disk space in the SAN is often 10x the cost of standard SAS disks. Especially if the server could make do with the two built-in disks and save the cost of an FC card + FC switch port.

I/O's per second on the other hand cost real money, so it is a waste to leave 15k and SSD disks idle. A quarter full does not matter if they are I/O saturated; the rest of the capacity is just wasted, but again you often cannot buy a disk a quarter of the size with the same I/O's per second.

Re:Disk space is free (2, Interesting)

eldavojohn (898314) | more than 3 years ago | (#33058842)

Who cares if you leave disks 10% full? To get rid of the minimum of 2 disks per server you need to boot from SAN, and disk space in the SAN is often 10x the cost of standard SAS disks. Especially if the server could make do with the two built-in disks and save the cost of an FC card + FC switch port.

I/O's per second on the other hand cost real money, so it is a waste to leave 15k and SSD disks idle. A quarter full does not matter if they are I/O saturated; the rest of the capacity is just wasted, but again you often cannot buy a disk a quarter of the size with the same I/O's per second.

I don't know too much about what you just said, but I do know that the Linux images I get at work are virtual machines of a free distribution of Linux. I can request any size I want. But my databases often grow, and resizing a partition through our provisioner is very expensive. So what do we do? We estimate how much space our web apps take up in a month and then request space for 10 years out, because a resize of the partition is so damned expensive. And those sizes are usually pretty small anyway if you're building databases. Then we occasionally notify our managers when space is getting low, using the provisioner's dashboard tool, and we re-assess the application: is it getting unexpectedly popular, or was it bad estimation from the beginning?

I don't know if I should be bothering with the hardware level of things. I sure do like it this way; the price is really expensive for the project, but the payment stays inside our company anyway. It's internal to the company, so we're all using some nebulous group of actual machines and RAIDs to produce a massive cloud of smaller servers as images. There are some downsides and a bit of overhead to pay for virtualization, but I thought everyone had moved to this model ...
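The "estimate monthly growth, provision 10 years out" step is worth a sanity check, because a linear extrapolation quietly assumes growth never compounds. A sketch with invented inputs (500 GB today; 50 GB/month linear versus 3%/month compound):

awk 'BEGIN {
    base = 500; per_month = 50; rate = 1.03; months = 120
    compound = base
    for (m = 1; m <= months; m++) compound *= rate
    printf "linear estimate:   %d GB after 10 years\n", base + per_month * months
    printf "compound estimate: %d GB after 10 years\n", compound
}'

Those assumptions give about 6.5 TB versus about 17 TB, nearly a factor of three, which is the same lesson as the CAD-file story above: the unexpected does happen.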

Re:Disk space is free (1)

marcosdumay (620877) | more than 3 years ago | (#33060874)

Those virtual machines are stored on a real SAN somewhere. The SAN administrator deals with all the things the GP said; that is why you don't need to understand it. Anyway, he'd better have some spare capacity and plan based on I/O rather than storage size (he probably did); otherwise you'll have big unknown risks.

Re:Disk space is free (2, Interesting)

bobcat7677 (561727) | more than 3 years ago | (#33058878)

Parent has an excellent point. Utilization is not always about how full the disk is, especially in a data center where there are frequently large database operations requiring extreme amounts of IOPS. In the past, the answer was to throw "more spindles" at it, at which point you could theoretically end up with a 20GB database spread across 40 SAS disks, making available ~1.5TB of space using the typical 73GB disks, just to reach the IOPS capacity needed to handle heavy update/insert/read operations. A huge waste of space, but the only way to do it with spinning disks.

SSDs of course can solve the problem, but most SAN vendors are still charging insane prices for what meager SSD options they offer, and some vendors don't offer SSD options at all yet. And then you can end up on the other end of the scale, having to buy more IOPS capacity than you need just to get enough SSD space for your data. Adaptec has some cool technology for "hybrid" arrays consisting of both SSDs and spindle disks in the same array (I have heard the latest versions of Solaris can do this with ZFS too). But the applications for hybrid arrays are somewhat limited, because write performance still sucks once any available write cache is saturated (and especially if the controller/software array has no cache).

Re:Disk space is free (1)

joe_frisch (1366229) | more than 3 years ago | (#33059204)

The $1 million / 100TB might be real, though it seems high, but the great majority of that is NOT hardware costs. In fact, having larger disks than you need may reduce the management costs - less chance a particular disk set will become full, extra space to move data off failing disks, etc.

Or IT is provisioning for peak usage (3, Informative)

Todd Knarr (15451) | more than 3 years ago | (#33058736)

Having too much storage is an easy problem. Sure, it costs a bit more, but not prohibitively so, or you'd never have gotten approval to spend the money. Not having enough storage, OTOH, is a hard problem. Running out of space in the middle of a job means a crashed job and downtime to add more storage. That probably just cost more than having too much would've, and then you pile the political problems on top of that. So common sense says you don't provision for the storage you're going to normally need; you provision for the maximum storage you expect to need at any time, plus a bit of padding just in case.

AT&T discovered this back in the days when telephone operators actually got a lot of work. They found that phone calls tend to come in clumps rather than being evenly distributed, so when they staffed for the average call rate they ended up failing to meet their answer times on a very large fraction of their calls. They had to change to staffing for the peak number of simultaneous calls, and accept the idle operators as a cost of being able to meet those peaks.

Re:Or IT is provisioning for peak usage (1)

kirillian (1437647) | more than 3 years ago | (#33059768)

Queueing theory... one of the oddest choices for a topic to cover in an operating systems class in college, but the most intriguing and useful thing I ever got out of all of my classes - honestly, probably the only thing I use day to day that I learned in class rather than from teaching myself. The concept of analyzing a process that can be described with a queue (such as a datacenter or the telephone operators), and then finding an efficient means of handling the queue, including managing desirable wait times and total time in queue, is incredibly applicable in corporate environments. Personally, I think queueing theory would probably be more useful to business people than most of the other things they are taught.
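For the curious, the relevant result here is the Erlang C formula: the probability that an arriving request (a phone call, an I/O) has to wait, given an offered load of A erlangs and N servers. A sketch assuming Poisson arrivals, with example numbers:

awk -v A=8 -v N=10 'BEGIN {
    term = 1; sum = 1                       # the k = 0 term of A^k/k!
    for (k = 1; k < N; k++) { term *= A / k; sum += term }
    top = term * A / N                      # A^N / N!
    rho = A / N                             # utilization
    pwait = top / (top + (1 - rho) * sum)   # Erlang C
    printf "utilization %.0f%%: P(request waits) = %.3f\n", rho * 100, pwait
}'

At these numbers (80% utilization) about 41% of arrivals have to queue, which is the math behind provisioning operators, spindles, or capacity for the peak rather than the average.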

Need to read this one carefully (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33058752)

If you RTFA (and admittedly, this is not very clear), the article tries to make the point that you don't need all of this storage capacity to be live. However, you've got a bunch of storage pools or machines just sitting there idling as opposed to actually doing something. What the article is trying to say is that using provisioning tools that will spin up storage pools or servers as they are needed (as capacity increases) is a much better solution than just leaving them running. Obviously peak load will cause issues, but you configure your provisioning tools to be smarter and start bringing up capacity at lighter loads or at specific times of day. The point still stands that most data centers just have idling machines that could just as easily be shut off most of the time and automatically brought up when needed; it's just that most do not use these tools despite the savings in electricity, wear, and cooling costs.

The article confounds the issue by starting to talk about the lack of monitoring tools that leads to overprovisioning, and ends with a discussion of how to make the storage problem more efficient (thin provisioning). Thing is, thin provisioning only works when you have the extra capacity, but it's not live until you need it. You still need to overprovision, but you won't be running all those resources idle at once just in case.

it's cheaper to waste space (1)

alen (225700) | more than 3 years ago | (#33058756)

Two 146GB SAS drives from HP are less than $500. You can put the same storage on an EMC SAN and provision less for the system drive of a Windows server, but by the time you pay their crack-dealer prices for hard drives, plus the drives for the BCV volumes, plus the fibre switches and GBICs and HBAs and everything else, it's cheaper to waste space on regular hard drives.

Re:it's cheaper to waste space (0)

Anonymous Coward | more than 3 years ago | (#33058840)

GBICs and 146GB drives are cheap even from EMC these days. The 500GB drives and SFPs are pretty spendy though.

Re:it's cheaper to waste space (0)

Anonymous Coward | more than 3 years ago | (#33059018)

2TB SAS drives for $500? Where can I buy?

CYA Approach (4, Informative)

MBGMorden (803437) | more than 3 years ago | (#33058770)

This is the CYA approach, and I don't see it getting any better. When configuring a server, it's usually better to pay the marginally higher cost for 3-4x as much disk space as you think you'll need, rather than risk the possibility of returning to your boss asking to buy MORE space later.

Re:CYA Approach (1)

petermgreen (876956) | more than 3 years ago | (#33059664)

And it may well make economic sense too, at least if you are talking about a low-end server with a pair of SATA drives (though it depends how much your server vendor rips you off on hard drives).

Upgrading drives later has a lot of costs on top of the raw cost of the extra drives.

How much does it cost to get hold of those extra drives? (At uni recently, someone told me that the total cost of processing a purchase order worked out to about £40; admittedly some of that is fixed costs, but it still makes you think about how you order stuff.)
How much does it cost for the server monkey's time to add the extra drives?
How much does it cost for the sysadmin's time to reconfigure the box to use those new drives?

Not CYA, but optimal cost/benefit (1)

marcosdumay (620877) | more than 3 years ago | (#33060988)

Did you factor in how expensive it is to change storage size, and the costs of failing to change it? Also, there is the cost of adding storage that isn't compatible with the first chunk. The amount you pay for oversized storage normally isn't even on the same order of magnitude as any of those.

100 TB for $1,000,000? No way! (1, Informative)

Anonymous Coward | more than 3 years ago | (#33058810)

OK, bare 1TB enterprise-class drives cost about $130 at Newegg retail (half that price if you go for standard-grade disks).
A hundred such disk drives will set you back $13,000.
Figure another $10,000 for mounting, power supplies, connectors, and other obvious hardware.
Another $2,000 for four racks.

Floorspace? Racking them loosely gives you 25 per rack, or 4 racks. Each rack is about 10 square feet, so 40 square feet total.
At $10 per square foot per month, that's maybe $400 or $500 a month, or around $5,000 per year.
Electricity? 100 drives at 8 watts per drive yields a full-time load of 800 watts;
at a nominal $0.15 per kWh, that's around $1,100 per year in electric bills.
Air conditioning ... roughly the same as the power cost ... another $1,100 per year.
Replacement at a 2 percent annual failure rate is perhaps $200 per year.

Human costs? The cost of labor to support a 50TB disk farm can't be much different from that of a 100TB farm.
Indeed, it's probably less labor (and software) intensive to have a system with great overcapacity than one that needs squeezing.
In either case, at most, a 100TB disk farm might need 2 full-time staffers. Generously, that's $150,000 per year.

So the hardware cost of a 100TB system is around $25,000.
Annual operating costs come to around $7,400 per year (summing the line items above).
Labor costs are $150,000 per year.

Where do they get the $1,000,000 per year?
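For what it's worth, the parent's own line items total like this (every figure is from the comment above; nothing new added):

awk 'BEGIN {
    hardware = 13000 + 10000 + 2000         # drives + mounting/PSUs + racks
    annual   = 5000 + 1100 + 1100 + 200     # floor space + power + cooling + spares
    printf "hardware: $%d   annual ops: $%d   with 2 staffers: $%d/yr\n",
           hardware, annual, annual + 150000
}'

Call it $25,000 up front and roughly $157,000 a year, nearly all of it labor - still well short of $1 million, unless you price in enterprise arrays, as the replies below do.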

Re:100 TB for $1,000,000? No way! (1)

epiphani (254981) | more than 3 years ago | (#33059730)

100TB for a million dollars is about right when you start looking at enterprise storage solutions, such as Netapp or EMC.

Re:100 TB for $1,000,000? No way! (0)

Anonymous Coward | more than 3 years ago | (#33060092)

$130 is for a TB of fast SATA disks. I just had to price out a 6TB Symmetrix VMax SAN with EMC (4TB 15K RPM Fibre Channel, 2TB SSD), and the price was $340,000.

Mind you, the above cost includes the storage controllers, PDUs, rack cabinets, etc. Additional TBs will run us in the neighborhood of $8K-12K (depending on whether we go 15K RPM or SSD).

Spending a million on 100TB of storage for the enterprise is very easily doable, even if you go with slower 10K RPM Fibre Channel disks.

Re:100 TB for $1,000,000? No way! (2, Insightful)

spazimodo (97579) | more than 3 years ago | (#33060286)

I'm not sure if you're trolling or not, but if you're serious did you happen to manage the storage for Microsoft's Sidekick servers?

A couple things wrong with your assumptions:
1) 1TB drives might be great for storing your goat porn collection, but on a server with actual load, how many of those drives do you need to get adequate IOPS? Also exactly 100 of them means no RAID, but that's OK because drives from Newegg never fail so your 100TB of data should be fine.
2) You seem to have left controllers out of your list. Anyone who's ever had a RAID controller start barfing garbage all over a LUN, or take out a second drive after a drive failure will tell you the controller is the really critical bit (and is usually a single point of failure in systems with DAS.)
3) Where's your backup hardware? Where's space for snapshots? Where's space for replication?
4) Ever timed a RAID5 rebuild on, say, a 9-drive LUN with 1TB SATA disks?

Storage is expensive because the data on it has value and making sure that data is available and isn't lost or corrupted costs money. Cheap storage solutions don't end up that way when the drives have to go to OnTrack for recovery and the company's down for a week, or valuable data is lost.

Re:100 TB for $12,000? Backblaze pod! (0)

Anonymous Coward | more than 3 years ago | (#33060476)

If you don't mind a bit of DIY there is the Backblaze pod:
        http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

It is a 4U disk server that holds 45 disks. They have made the chassis design available from Protocase.

I suppose you could call this a SAN server but it really is just a bunch of cheap storage. As has been commented earlier, in a data center multiple disks are often bought for performance not space. You gain performance by having multiple sets of heads moving at the same time. RAID cache helps this but does not eliminate it.

Re:100 TB for $1,000,000? No way! (3, Insightful)

Domint (1111399) | more than 3 years ago | (#33061104)

Most SAN administrators wouldn't be caught dead using your $130 1TB drives. Rerun your calculations with 15K 450GB SAS drives (around $300 apiece) and you're spending quite a bit more: 228 drives will give you 100TB, sure, but we'd want some redundancy . . . say RAID 5 (not the best approach for SAN design, but let's keep it simple), which pushes the drive count up to 304 with a total cost of $91,200, just for disks. To get a real enterprise enclosure (or rather, cluster of enclosures, considering the drive count) that offers things like Fibre Channel, 10Gb iSCSI, or InfiniBand uplinks, and features such as SAN-to-SAN replication, deduplication, and other enterprise-level utilities/features, I'd say you're looking at $500,000 (ballpark guess) just to have something to stick the drives into. We're at ~$600,000 without even taking into account the physical costs of operation, datacenter architecture, or labor costs to maintain such a SAN.

Suddenly, that $1 million isn't so far fetched, eh?

Huh? (0)

Anonymous Coward | more than 3 years ago | (#33058812)

Shitloads of unused disk space is what I *want*.

sounds like the consultants are having a slow year (2, Interesting)

alen (225700) | more than 3 years ago | (#33058852)

Time to go and buy all kinds of expensive software to tell us something or other.

It's almost like the DR consultants who say we need to spend a fortune on a DR site in case a nuclear bomb goes off and we need to run the business from 100 miles away. I'll be 2,000 miles away, living with mom again in the middle of nowhere and making sure my family is safe, not going to some DR site that is going to close because half of NYC is going to go bankrupt in the depression after a WMD attack.

Re:sounds like the consultants are having a slow y (1)

evilviper (135110) | more than 3 years ago | (#33061314)

it's almost like the DR consultants who say we need to spend a fortune on a DR site in case a nuclear bomb goes off and we need to run the business from 100 miles away.

Flood, earthquake, hurricane (yes, possible even in New York), sinkhole, etc.

Are you really going to go primeval when any one of those things happens?

First thing, of course, you're going to find out if your family is fine. Assuming so, then what? Not only has their home been destroyed, but your job is gone too, so you'll now be dependent on insurance (notoriously unwilling or unable to pay after disasters) and handouts.

Not that you should be spending billions on off-site data storage and redundant systems, but a large company being completely unable to survive the loss of a single building/office is quite short-sighted, even if it happens to cost some money up-front.

ISPs & hosting services (2, Insightful)

shmlco (594907) | more than 3 years ago | (#33058874)

This isn't like an ISP overbooking a line and hoping that everyone doesn't decide to download a movie at the same time. If a hosting service says your account can have 10GB of storage, contractually they need to make sure 10GB of storage exists.

Even though most accounts don't need it.

One client of mine dramatically over-provisioned his database server. But then again, he expects at some point to break past his current customer plateau and hit the big time. Will he do so? Who can say?

It may be a bit wasteful to over-provision a server, but I can guarantee you that continually ripping out "just big enough" servers and installing larger ones is even more wasteful.

Your pick.

This isn't a new problem... (1)

Mysticalfruit (533341) | more than 3 years ago | (#33058894)

This is one of the arguments that's made for using a SAN. Consolidate to make better use of the disk space. Smaller footprint, less power, etc.

Re:This isn't a new problem... (1)

petermgreen (876956) | more than 3 years ago | (#33059852)

However, SANs have issues of their own:

1: They are EXPENSIVE; figure you will be paying many times the cost per gigabyte of ordinary drives, particularly if you buy the SAN vendor's drives so you get support. This tends to cancel out the more efficient use of space.
2: Even a 1U server has space for a few drives inside, so if you use a SAN with 1U servers it will probably take up more space than just putting the drives in the servers. Blades would reduce this issue but come with issues of their own (e.g. vendor lock-in).
3: If something does go wrong with a SAN, everything has problems at once. This can leave all sorts of IT services down for days as IT scrambles to first fix the SAN and then fix everything that depends on the SAN (I've seen this happen at the uni I go to).

I have my doubts on the power consumption front too. AFAICT drives are a negligible part of a modern computer's power consumption anyway.

On thin provisioning (1)

JasonM314 (1866144) | more than 3 years ago | (#33058974)

Thin provisioning doesn't fix this problem. At least not today.

The only way thin provisioning fixes this problem is if you over-commit the thin pool. That's all well and good, but currently, any given storage chunk that is allocated to a server is stuck being allocated to that server. So, if I were a server admin who found out he'd been given thin LUNs in an over-commited pool, I know that if my neighboring admins don't keep track of their storage use, then my server could wind up crashing because they took up all the storage. So instead, I'm going to write a script first thing when I get the storage to write a text file clear across the drive. There. Now my disk is fully provisioned, and my neighbors can use all the pool they want, it won't affect me. 'course, not everyone can do that, or the pool will fill up lickety split.

Now, someday, the servers will be smart enough to tell the storage array when they're done with a chunk of storage. At which point, that part of the pool can be freed up. When that happens (and it will, but it's going to take some time), thin pools will be ideal. Everyone will have all the storage they need almost all of the time.

However, that day isn't here yet. In the mean time, there are interesting performance reasons to use thin provisioning, but not space-related ones.
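The pre-allocation trick the parent describes comes down to a couple of lines of shell. A sketch - the mount point is hypothetical, and urandom matters, because an array doing zero-detection would quietly discard a file of zeros:

#!/bin/sh
# Force every page of a thin-provisioned LUN to be backed by real
# storage: fill the filesystem with non-zero data, then delete it.
dd if=/dev/urandom of=/data/.ballast bs=1M    # runs until ENOSPC
rm -f /data/.ballast
# The filesystem gets its free space back, but without TRIM/UNMAP the
# array keeps every page allocated to this LUN - which is the point.

As the reply below points out, if every tenant does this, the over-committed pool is defeated on day one.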

Re:On thin provisioning (1)

Guido von Guido (548827) | more than 3 years ago | (#33059986)

The only way thin provisioning fixes this problem is if you over-commit the thin pool. That's all well and good, but currently, any given storage chunk that is allocated to a server is stuck being allocated to that server. So, if I were a server admin who found out he'd been given thin LUNs in an over-commited pool, I know that if my neighboring admins don't keep track of their storage use, then my server could wind up crashing because they took up all the storage. So instead, I'm going to write a script first thing when I get the storage to write a text file clear across the drive. There. Now my disk is fully provisioned, and my neighbors can use all the pool they want, it won't affect me. 'course, not everyone can do that, or the pool will fill up lickety split.

How exactly is using up all of your thinly provisioned disk on purpose all at once any different from your peers not watching their disk use? Answer: they might cause a problem, and you have.

As the storage admin, I'd walk over to your desk and smack you. I'm the one who's watching the size of the pool, and I'm the one who will order new disk when it's necessary. I'm the one who will make other arrangements if management doesn't fork up the money for the disks.

Depending on the technology in use, "other arrangements" could mean the migration of LUNs to other storage arrays behind the scenes (i.e., no downtime), moving virtual machines with storage vmotion, or other, usually uglier methods of dealing with it (i.e., stop the application, migrate the data manually somewhere else, bring up the application).

Re:On thin provisioning (1)

mysidia (191772) | more than 3 years ago | (#33060328)

Now, someday, the servers will be smart enough to tell the storage array when they're done with a chunk of storage.

Servers are that smart... most commonly this is needed for SSD drives -- SCSI UNMAP, SATA TRIM, or writing a block of all zeros to a sector.... There are OS configurations that support this, and if you don't have an OS that can handle it, a simple piece of software can take care of it; but most SANs do not understand or take advantage of the server sending those commands.

The servers aren't dumb; the uber-proprietary, ultra-expensive SANs are. And when the SAN vendors eventually get the feature to understand the servers' UNMAP commands, it will probably require a few more million in additional licensing costs, in addition to having current support/upgrade agreements in place.

Writing an all-zeros sector is probably the most widely supported. Some SANs have dedupe functionality, and an all-zeros sector is easy to dedupe; it just requires special software running on the server.

"Writing to all sectors" in an overcommitted pool doesn't guarantee squat when the SAN is operating with special features such as snapshots, clones, a copy-on-write foundation, etc.

In some environments, your ability to write or change sectors on your disk may depend on there being free additional space available in the pool, even if you've already written to that sector.

If the pool runs out, which you would in fact be making more likely with server admins pulling such stupid shenanigans, the I/O could easily still get blocked, even though the server had "written to all sectors" previously.

As for a server suddenly trying to use all the thin-provisioned disk space, there's a fix for that too: quotas.

Or, restricting the rate at which a server can consume additional thin-provisioned storage before setting off alarm bells and throttling the server's I/O limit down to forcibly reduce the rate of additional consumption.
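On the Linux side, the handshake mysidia describes exists today as the discard/TRIM machinery; the open question is whether every layer between filesystem and array passes it through. A sketch (the mount point is hypothetical; fstrim needs a reasonably recent kernel and util-linux, and an array that honors UNMAP):

#!/bin/sh
# Tell the array which blocks the filesystem no longer uses.
fstrim -v /data                     # issues discards for unused fs blocks
# Fallback for arrays that only do zero-detection: overwrite free space
# with zeros so the array can reclaim it, then delete the file.
dd if=/dev/zero of=/data/.zerofill bs=1M
rm -f /data/.zerofill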

Looking at the numbers.... (1)

paulsnx2 (453081) | more than 3 years ago | (#33059048)

So ... 100 TB / 1 Million ==> 1 TB / $10,000.

A 1TB drive is 60-100 dollars.
The energy required to run a 60-watt drive 24/7 = 60/1000 kW x 24 hours x 365.25 days = 526 kWh.
At $0.12 per kWh, that's $63.12 per year.

Even if we double or triple the hardware costs, they will only make up a few percent of the 10 grand per TB cited here.

The labor to maintain 100 or 200 or 400 drives is going to be relatively constant. In fact, with slightly more reasonable monitoring software (just reporting drive failures in a RAID system, so the labor only has to pull bad drives and replace them with good ones), I don't think the capacity of a data center is all that related to the labor costs.

End result, it is just cheaper and easier to throw hardware at problems to reduce labor costs than to pay for expensive software to monitor capacity and be more efficient in the use of capacity.

Mini ITX? (1)

Midnight Thunder (17205) | more than 3 years ago | (#33059070)

Instead of a medium number of large systems, I wonder whether it would make more sense to have a larger number of mini-ITX-type units that could be:
  - easily replaced
  - put in standby when not being accessed (a smart load balancer would decide when to wake up sleeping units)
  - cooled more simply?

It would also be nice to have a universal backplane design that supports plugging in boards from any company, with minimal or zero cabling.

Fire Extinguishers (1)

Ukab the Great (87152) | more than 3 years ago | (#33059074)

Billions of dollars are also wasted every year in the manufacturing and transporting of fire extinguishers, 99% of which will probably never be used.

the truth doesn't take up much space (0)

Anonymous Coward | more than 3 years ago | (#33059080)

explosion(s) in the straits of hormuz. anybody who's read either the bible, or playboy should be able to cipher out what that means. there was even a sci-fi book/movie about it.

No... (2, Interesting)

rickb928 (945187) | more than 3 years ago | (#33059242)

"It's a bit of a paradox. Users don't seem to be willing to spend the money to see what they have,"

I think he meant users don't seem willing to spend the money to MANAGE what they have.

As many have pointed out, you need 'excess' capacity to avoid failing for unusual or unexpected processes. How often has the DBA team asked for a copy of a database? And when that file is a substantial portion of storage on a volume, woopsie, out of space messages can happen. Of course they should be copying it to a non-production volume. Mistakes happen. Having a spare TB of space means never having to say 'you're sorry'.

Aside from the obvious problems of keeping volumes too low on free space, there was a time when you could recover deleted files. Too little free space pretty much guarantees you won't be recovering deleted files much older than, sometimes, 15 minutes ago. In the old days, NetWare servers would let you recover anything not overwritten. I saved users from file deletions over the span of YEARS, in those halcyon days when storage became relatively cheap and a small office server could never fill a 120MB array. Those days are gone, but without free space, recovery is futile, even over the span of a week. Windows servers, of course, present greater challenges.

'Online' backups rely on delta files or some other scheme that involves either duplicating a file so it can be written intact, or saving changes so they can be rolled in after the process. More free space here means you actually get the backup to complete. Not wasted space at all.

Many of the SANs I've had the pleasure of working with had largely poor management implementations. Trying to manage dynamic volumes and overcommits had to wait for Microsoft to get its act together. Linux had a small lead in this, but unless your SAN lets you do automatic allocation and volume expansion, you might as well instrument the server and use SNMP to warn you when volume free space runs low, and be prepared for the nighttime alerts. Does your SAN allow you to let it increase volume space based on low free space, and then reclaim it later when the free space exceeds a threshold? Do you get this for less than six figures? Seven? I don't know; I've been blessed with not having to do SAN management for about 5 years. I sleep much better, thanks.

Free space is precisely like empty parking lots. When business picks up, the lot is full. This is good.

Re:No... (1)

slinches (1540051) | more than 3 years ago | (#33060040)

If

Having a spare TB of space means never having to say 'you're sorry'.

and

"Love means never having to say you're sorry"

Then

Love means having a spare TB of space?

How much does under capacity cost? (1)

houghi (78078) | more than 3 years ago | (#33059362)

What is the cost if you have a 1% shortage of capacity? I am sure it will be more than what you pay for overcapacity.

HD Size (0)

Anonymous Coward | more than 3 years ago | (#33059562)

What trout. I suspect a large amount of this 'wastage' is due to the fact that the smallest HDs available are in the hundreds of GB.*

Many dedicated-server users do not waste the space; they simply never needed it in the first place. Applications that need a dedicated server do not necessarily need the storage that comes with it.

* Currently 160GB on an entry-level Dell server.

Turn it up to 11 (1)

Tisha_AH (600987) | more than 3 years ago | (#33059720)

Unlike in the movie "This is Spinal Tap" there is not an 11 on the volume control for storage capacity in a data center. We will not see proud proclamations from boards of directors "today we are running our data storage at 115% of capacity!"

Having been in the predicament many times of frantically trying to ration out disk storage space for some critical application at 3 AM on a Sunday morning, I think that running data centers at 80-90% of capacity is being conservative, and the margin may save your ass the next time you cannot get into your data center due to some sort of natural disaster like a hurricane (remember the data center in New Orleans a few years ago?).

Storage space does cost money; when we are looking at terabytes (petabytes, anyone?) of storage, there does need to be some cost-factor calculation. In the telco world we do a similar exercise with Erlang calculations and blocking probability for data circuits. I would rather the cut-off point between enough and too much storage capacity be set by well-informed engineers than by some clueless MBA looking for a feather in their cap.

takeaswag (0)

Anonymous Coward | more than 3 years ago | (#33059894)

Alright, I'm not running a bunch of petabytes in a big datacenter right now, but I've been doing this for a really, really long time. Hasn't the rule of thumb always been to keep a MINIMUM of 25% of capacity free? Everyone has always been much more comfortable with 50% free. This old-school rule of thumb applies at any scale, megabyte to petabyte, doesn't it?

IOs/second count matters, too (4, Insightful)

natoochtoniket (763630) | more than 3 years ago | (#33060182)

There are two numbers that matter for storage systems. One is the raw number of gigabytes that can be stored. The other is the number of IO's that can be performed in a second. The first limits the size of the collected data. The second limits how many new transactions can be processed per time period. That, in turn, determines how many pennies we can accept from our customers during a busy hour.

We size our systems to hit performance targets that are set in terms of transactions per second, not just gigabytes. Using round numbers, if a disk model can do 1000 IO/second, and we need 10,000 IO/second for a particular table, then we need at least 10 disks for that table (not counting mirrors). We often use the smallest disks we can buy, because we don't need the extra gigs. If the data volume doesn't ever fill up the gigabyte capacity of the disks, that's ok. Whenever the system uses all of the available IO's-per-second, we think about adding more disks.

Occasionally a new SA doesn't understand this, sees a bunch of "empty" space in a subsystem, and configures something to use that space. When that happens, we then have to scramble, as the problem is not usually discovered until the next busy day.

Re:IOs/second count matters, too (1)

hibiki_r (649814) | more than 3 years ago | (#33060654)

And that's not even the whole picture: When dealing with databases, not all IO operations are equal. Reading a million records on a sequential scan in a certain part of the disk is different than reading them on a different part of the disk, or reading said records in a random order.

Large amounts of empty space are just the nature of data warehousing, and there's no way around that. In some cases, the RAM expense is even higher than the expense on disk, because when a lot of throughput is needed, you are sometimes better off giving up on the disk array and relying on RAM to make your logical IOs faster.

anyone pay close attention? (1)

nimbius (983462) | more than 3 years ago | (#33061080)

Aptare's latest version of reporting software, StorageConsole 8, costs about $30,000 to $40,000 for small companies, $75,000 to $80,000 for midsize firms, and just over $250,000 for large enterprises. "Our customers can see a return on the price of the software typically in about six months through better utilization rates and preventing the unnecessary purchase of storage," Clark said.

Just another industry slashvertisement. Nothing to see here that we didn't know about already. Please move along.