
SSDs: The New King of the Data Center?

samzenpus posted about 10 months ago | from the feeling-solid dept.

Data Storage

Nerval's Lobster writes "Flash storage is more common on mobile devices than data-center hardware, but that could soon change. The industry has seen increasing sales of solid-state drives (SSDs) as a replacement for traditional hard drives, according to IHS iSuppli Research. Nearly all of these have been sold for ultrabooks, laptops and other mobile devices that can benefit from a combination of low energy use and high-powered performance. Despite that, businesses have lagged the consumer market in adoption of SSDs, largely due to the format's comparatively small size, high cost and the concerns of datacenter managers about long-term stability and comparatively high failure rates. But that's changing quickly, according to market researchers IDC and Gartner: Datacenter- and enterprise-storage managers are buying SSDs in greater numbers for both server-attached storage and mainstream storage infrastructure, according to studies both research firms published in April. That doesn't mean SSDs will oust hard drives and replace them directly in existing systems, but it does raise a question: are SSDs mature enough (and cheap enough) to support business-sized workloads? Or are they still best suited for laptops and mobile devices?"

172 comments

Great for some apps (see netflix blog) (5, Interesting)

ron_ivi (607351) | about 10 months ago | (#43992893)

This blog article's very relevant: http://techblog.netflix.com/2012/07/benchmarking-high-performance-io-with.html [netflix.com]

TL/DR: "The relative cost of the two configurations shows that over-all there are cost savings using the SSD instances"

at least for their use-case (Cassandra).

At work we also use SSDs for a couple terabyte Lucene index with great success (and far cheaper than getting a couple TB of DRAM spread across the servers instead)

Re:Great for some apps (see netflix blog) (4, Interesting)

wvmarle (1070040) | about 10 months ago | (#43993133)

So you're replacing RAM with SSD, not HD with SSD. Interesting.

And would you even be able to do this with DRAM modules? Normal PC motherboards don't support that.

Re:Great for some apps (see netflix blog) (5, Interesting)

SQL Error (16383) | about 10 months ago | (#43993307)

You can build a 48-core Opteron server with 512GB of RAM for under $8000. Going over 512GB in a single server gets a lot more expensive (you either need expensive high-density modules or expensive 8-socket servers - or both) but if you can run some sort of cluster that's not a problem.

Re:Great for some apps (see netflix blog) (1)

ron_ivi (607351) | about 10 months ago | (#43994595)

And would you even be able to do this with DRAM modules? Normal PC motherboards don't support that.

Even low-end (dual-CPU 2U) servers these days support either 192 or 256GB [asacomputers.com]. It's not that hard or expensive to get four 256GB servers or six 192GB servers.

But as that link to Netflix's blog points out - SSDs can have better price/performance than DRAM at the moment if you need a lot.

Re:Great for some apps (see netflix blog) (0)

Archangel Michael (180766) | about 10 months ago | (#43995255)

I just spec'ed out a 2U dual 8-core server with 384 GB of RAM. The thing could hold 768 GB total if I didn't put in the GPU. Doubling the RAM doubled the cost of the server, and at that point, having more CPU is more useful than more RAM.

As for SSD vs HD, you should really start looking at something like Nimble Storage, which tiers storage between onboard RAM, SSD and regular HDs to provide huge IOPS advantages over regular SAN storage with the same kind of drive types and counts. In the datacenter, it's IOPS for your storage that matters, followed by size. Slow, big, long-term storage is less useful than having high-speed access to the data you actually need at that moment. IOPS is key to getting data on the wire and to the processors that need it.

Re:Great for some apps (see netflix blog) (1)

wisnoskij (1206448) | about 10 months ago | (#43994087)

How does that make sense? Sure, SSD is very similar to RAM physically, but it is still like a thousand times slower, is it not?

Re:Great for some apps (see netflix blog) (1)

Bill, Shooter of Bul (629286) | about 10 months ago | (#43994361)

I don't understand the confusion, maybe a car analogy will help.

John Smith is switching from a mid-'90s Civic to a new Mustang to reduce his merge time. This represents a huge savings over buying a Porsche. Make sense now?

Re:Great for some apps (see netflix blog) (1)

wisnoskij (1206448) | about 10 months ago | (#43994483)

Maybe if he was using a golf cart instead of a Porsche it would be a better analogy.

Re:Great for some apps (see netflix blog) (5, Informative)

ron_ivi (607351) | about 10 months ago | (#43994631)

How does that make sense?

As the link to Netflix pointed out -- they benchmarked the entire system with the same REST API in front.

They configured one cluster of SSD-based servers and another cluster of spinning-disk-with-large-RAM servers, then compared them. It took a cluster of 15 SSD-backed servers to match the throughput of 84 RAM+spinning servers. With throughput matched, the SSD-based cluster provided better latency at lower cost.

TL/DR: "Same Throughput, Lower Latency, Half Cost".

Re:Great for some apps (see netflix blog) (1)

ron_ivi (607351) | about 10 months ago | (#43994705)

but it is still like a thousand times slower, is it not?

Yes, but it's still like 5-500x faster than spinning disks too (obviously depending on whether you're talking sequential I/O or random access).
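
For a rough sense of where that 5-500x range comes from, here is a back-of-envelope comparison in Python (all latency and throughput figures below are ballpark assumptions, not measurements):

    # Ballpark assumptions: a 7,200 rpm disk needs roughly 10 ms of seek plus
    # rotational latency per random 4 KB read, while a SATA SSD serves the same
    # read in ~0.1 ms. Sequential streaming is much closer: ~150 MB/s for the
    # disk vs ~500 MB/s for the SSD.
    HDD_RANDOM_LATENCY_S = 10e-3
    SSD_RANDOM_LATENCY_S = 0.1e-3
    HDD_SEQ_MBPS = 150
    SSD_SEQ_MBPS = 500

    random_speedup = HDD_RANDOM_LATENCY_S / SSD_RANDOM_LATENCY_S   # ~100x
    sequential_speedup = SSD_SEQ_MBPS / HDD_SEQ_MBPS               # ~3x

    print(f"random 4K reads: ~{random_speedup:.0f}x faster on SSD")
    print(f"sequential reads: ~{sequential_speedup:.1f}x faster on SSD")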

20x faster (3, Informative)

drabbih (820707) | about 10 months ago | (#43992895)

By switching to SSDs on a data-intensive web application, I got a 20x speed improvement - from 20 hits per second to 400. I trust SSDs more than physical spindles any day.

Re:20x faster (1)

Anonymous Coward | about 10 months ago | (#43993015)

Do you use TRIM? Otherwise that speed bump will go way down once the device runs out of untouched sectors. And TRIM over RAID is still a no-go in most environments.

Re:20x faster (2, Insightful)

Anonymous Coward | about 10 months ago | (#43993047)

TRIM isn't necessary if the SSD uses spare sectors to keep the write amplification low. You can also partition the SSD to have a swath of unused space for that purpose.
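
As a concrete illustration of both suggestions, here is a minimal sketch (Python on a Linux host with root privileges; the mount point is hypothetical). Manual over-provisioning needs no commands at all - just leave part of the drive unpartitioned:

    import subprocess

    # Discard unused blocks on an SSD-backed filesystem. On Linux the fstrim
    # utility does this on demand; many distributions ship a periodic
    # fstrim.timer instead of mounting with the continuous "discard" option.
    MOUNT_POINT = "/srv/data"   # hypothetical mount point on the SSD

    subprocess.run(["fstrim", "-v", MOUNT_POINT], check=True)

    # Over-provisioning: when partitioning the drive, simply leave (say)
    # 10-20% of the raw capacity unallocated so the controller always has
    # spare erase blocks to work with, even without TRIM.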

Re:20x faster (0)

wonkey_monkey (2592601) | about 10 months ago | (#43993187)

Otherwise that speed bump will go way down once the device runs out of untouched sectors.

TRIM isn't necessary if the SSD uses spare sectors

See where you went wrong there?

Re:20x faster (2, Interesting)

Anonymous Coward | about 10 months ago | (#43993259)

No, but then I can read and understand that "once the device runs out of untouched sectors" is not an "if" but a "when". An untouched sector is not the same as a spare sector either, because sectors which are used for reducing the write amplification are touched. An SSD maintains available sectors, not untouched or free sectors.

Re:20x faster (2, Insightful)

donaldm (919619) | about 10 months ago | (#43993119)

By switching to SSDs on a data-intensive web application, I got a 20x speed improvement - from 20 hits per second to 400. I trust SSDs more than physical spindles any day.

When designing storage for any Business or Enterprise, the disks (solid state or spinning) should always be in some sort of RAID configuration that supports disk redundancy. Failure to do this could result in loss of data when the disk eventually fails - and it will. I am often asked "How long?" and my answer is "How long is a peace of string?"

At the moment SSDs are excellent when you need high I/O from a few disks up to, say, a few TB. However, if you look at enterprise storage solutions of tens or even thousands of TBytes, you are still looking at spinning media with large cache front ends (BTW I am talking about storage area networks costing $20k up to many millions of dollars). Of course, for smaller-scale computing SSDs are excellent for high performance, but unless you don't really care about your data you still need disk redundancy - or I hope your backup and recovery services are excellent - keeping in mind that an outage may cost a considerable amount of money for every hour or even minute you are down.

It must be noted that when designing any computing system you really need to consider performance expectations as well as backup and recovery requirements. The choice of using SSD's, spinning media or even SAN's is normally made after Business or Enterprise expectations are made clear.

Re:20x faster (4, Insightful)

Twinbee (767046) | about 10 months ago | (#43993251)

and my answer is "How long is a piece of string".

Sorry, that phrase always strikes a nerve with me. More useful answers would include an average, or even better, a graph detailing the death rate of SSDs (and how they tend to die early if they do die, but tend to last if they get past that initial phase).

Re:20x faster (0)

Anonymous Coward | about 10 months ago | (#43994069)

Someone failed Probability 101... Learn your distributions! I hate trying to teach this stuff to people over and over again. And here's the thing, I get that "this is not what you do" so you don't think you should have to spend any brain power on it (you know, like how you don't memorize the cast of your favorite TV show because that's not what you do) but then don't ask the question since you are just wasting everyone's time. You have already admitted that you don't get it and don't want to, so leave it to the people who do get it.

Re:20x faster (4, Funny)

Culture20 (968837) | about 10 months ago | (#43993549)

"How long is a peace of string"

I have never known string to break a cease-fire.

Re:20x faster (1)

Anonymous Coward | about 10 months ago | (#43994045)

"How long is a peace of string"

I have never known string to break a cease-fire.

Throw a couple of const char*'s at it and all hell will break loose.

Re:20x faster (0)

Anonymous Coward | about 10 months ago | (#43995167)

A piece of string walks into a bar and orders a beer. The bartender tells him, "We don't serve your kind here, get out!"

So the string goes outside, gets mad, roughs up his ends and twists himself around, then walks back into the bar.

The bartender asks, "Are you that same piece of string I just threw out of here a minute ago?"

The string replies, "No, I'm a frayed knot!"

Beware. Drunken strings can start wars. Just ask my ex-wife!

Thanks! I'll be here all week!

Re:20x faster (1)

drsmithy (35869) | about 10 months ago | (#43993579)

At the moment SSDs are excellent when you need high I/O from a few disks up to, say, a few TB. However, if you look at enterprise storage solutions of tens or even thousands of TBytes, you are still looking at spinning media with large cache front ends (BTW I am talking about storage area networks costing $20k up to many millions of dollars).
Well, what you're usually looking at is a storage system with multiple types and speeds of disks that automatically moves data through the tiers depending on the frequency and type of access. SSDs will form one of these tiers. If the storage system is any good, it will also let you manually pin or hint specific subsets of your data so that they are always held on the fastest tier (ie: SSDs).
Since the _active_ subset of data even in quite large organisations is generally relatively small, a few hundred GB or a few TB of flash will often give 90%+ of the real-life performance that a pure flash array would.
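
A quick effective-latency calculation (Python; the hit rate and service-time figures are illustrative assumptions) shows why a small flash tier captures most of the benefit once the active working set fits in it:

    # Assumed service times: ~0.1 ms for a read served from the flash tier,
    # ~8 ms for one that has to go to a 7,200 rpm spindle.
    FLASH_MS = 0.1
    DISK_MS = 8.0

    for hit_rate in (0.0, 0.5, 0.9, 0.95):
        effective = hit_rate * FLASH_MS + (1 - hit_rate) * DISK_MS
        print(f"hit rate {hit_rate:4.0%}: average read latency {effective:.2f} ms")

    # At a 90% hit rate the average read latency is already roughly 9x better
    # than all-disk, which is most of what an all-flash array would deliver.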

Re:20x faster (1)

drabbih (820707) | about 10 months ago | (#43994633)

Let me rephrase this. The web application I use was on a read-only database which was recoverable. The uptime requirements were low. I used 3 SSD drives in a RAID 0 configuration to attain a 1800 MB/s transfer rate, which was constant whether the reads were sequential or random. That is faster than a 10GbE connection to any SAN configuration, much less expensive, and much more responsive. The machine has been running for one year in a 24x7 operation without issue. Obviously, RAID configurations let us balance speed and safety requirements on storage systems regardless of their underlying media. But for my real-world example, the performance and reliability have been incredible.
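
The throughput figure lines up with simple arithmetic (Python; the per-drive number is an assumption based on typical SATA 3 SSDs):

    # Three SATA SSDs striped in RAID 0, each assumed to sustain ~600 MB/s.
    drives = 3
    per_drive_mb_s = 600
    raid0_mb_s = drives * per_drive_mb_s        # ~1800 MB/s

    # A 10GbE link tops out at 10,000 Mbit/s = 1250 MB/s before protocol
    # overhead, so the local stripe exceeds what a 10GbE SAN path can carry.
    ten_gbe_mb_s = 10_000 / 8

    print(raid0_mb_s, ten_gbe_mb_s)             # 1800 vs 1250.0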

Re:20x faster (1)

Big Hairy Ian (1155547) | about 10 months ago | (#43993233)

I know some hosting companies that have been all-SSD for years, so this article is no surprise given how much data is flung around in the cloud.

Reliability data? (1)

sjbe (173966) | about 10 months ago | (#43994447)

I trust SSDs more than physical spindles any day.

Based on what evidence? Where is your data? Faster != More reliable. Spindle based hard drives are (usually) quite reliable and there is plenty of real world usage data documenting exactly how reliable they are. Companies with big data centers like Google have extremely detailed reliability performance figures. SSDs have a lot of advantages but they only recently have started receiving wide distribution and to date they have poor market penetration in data centers where it is easiest to measure their reliability in the real world. Manufacturers estimates of reliability don't mean much in the real world since they have an obvious conflict of interest.

I have little doubt that SSDs will over time replace spinning platters in most places, but claims regarding their reliability in relation to spinning platters are somewhat premature, especially in a data center environment. I wouldn't be the least bit shocked to find out they were more reliable (having no moving parts helps a lot), but just because they should be doesn't mean they will be.

Long-term, not short-term (4, Insightful)

Todd Knarr (15451) | about 10 months ago | (#43992903)

The question is really going to be what kind of shape the drives will be in a year or so from now, after 12+ months of constant heavy usage. The usage profile in consumer computers is a lot different from that in a server, and the server workload is going to stress more of the weakest areas of SSDs. And when it comes to manufacturer or lab test results, simple rule: "The absolute worst-case conditions achievable in the lab won't begin to approximate normal operating conditions in the field." So, while SSDs are definitely worth looking at, I'll let someone else do the 24-36 month real-workload stress testing on them. There's a reason they call it the bleeding edge, after all.

Re:Long-term, not short-term (5, Informative)

SQL Error (16383) | about 10 months ago | (#43993061)

We've been using SSDs in our servers since late 2008, starting with Fusion-io ioDrives and Intel drives since then - X25-E and X25-M, then 320, 520 and 710, and now planning to deploy a stack of S3700 and S3500 drives. Our main cluster of 10 servers has 24 SSDs each, we have another 40 drives on a dedicated search server, and smaller numbers elsewhere.

What we've found:

* Read performance is consistently brilliant. There's simply no going back.
* Random write performance on the 710 series is not great (compared to the SLC-based X25-E or ioDrives), and sustained random write performance on the mainstream drives isn't great either, but a single drive can still outperform a RAID-10 array of 15k rpm disks. The S3700 looks much better, but we haven't deployed them yet.
* SSDs can and do die without warning. One moment 100% good, next moment completely non-functional. Always use RAID if you love your data. (1, 10, 5, or 6, depending on your application.)
* Unlike disks, RAID-5 or 50 works pretty well for database workloads.
* We have noted the leading edge of the bathtub curve (infant mortality), but so far, no trailing edge as older drives start to wear out. Once in place, they just keep humming along.
* That said, we do match drives to workloads - SLC or enterprise MLC for random write loads (InnoDB, MongoDB) and MLC for sequential write/random read loads (TokuDB, CouchDB, Cassandra).

Re:Long-term, not short-term (1)

0ld_d0g (923931) | about 10 months ago | (#43994349)

Do you happen to know the failure rate offhand? Also, did you do any research into which manufacturer has the lowest failure rate before deciding on a brand?

Re:Long-term, not short-term (3, Informative)

Anonymous Coward | about 10 months ago | (#43994351)

If you do RAID5 or RAID6, you should match your RAID block exactly to the write block size of the SSD. If you do not, then you will generally need two writes to each SSD for every actual write performed. This will reduce the lifetime for the SSD and reduces the efficiency. Most RAID controllers have no way of doing this automatically and it is not easy to learn what the write block size is on an SSD (it is not generally part of the information on the drive).
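
A small calculation (Python; the erase-block size is a placeholder, since as noted above the real figure is rarely published) shows what "matching" means in practice for a hypothetical RAID-5 set:

    # Hypothetical figures: a 512 KiB erase block on each SSD and a 4+1
    # RAID-5 set (4 data drives plus 1 parity drive).
    ERASE_BLOCK_KIB = 512
    data_drives = 4

    # If the per-drive chunk size equals the erase block, a full-stripe write
    # lands on whole erase blocks and no drive has to read-modify-write.
    chunk_kib = ERASE_BLOCK_KIB
    full_stripe_kib = chunk_kib * data_drives   # 2048 KiB of user data

    print(f"chunk size {chunk_kib} KiB, full stripe {full_stripe_kib} KiB")

    # A chunk size that is not a multiple of the erase block (say 384 KiB)
    # forces partial-block writes - the extra write per SSD described above.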

Re:Long-term, not short-term (1)

greg1104 (461138) | about 10 months ago | (#43995183)

I did my first write-heavy deployment of PostgreSQL on Intel DC S3700 drives about a month ago, with each one of them replacing two Intel 710 drives. The write performance is at least doubled--the server is more than keeping up even with half the number of drives--and in some cases they easily look as much as 4X faster than the 710s. I've been able to get the 710 drives to degrade to pretty miserable read performance on mixed read/write workloads too, as low as 20MB/s, but the DC S3700 drives don't seem to fall down that way either. I'm replacing older Intel drives that are struggling with DC S3700 models now as fast as I can get them.

Re:Long-term, not short-term (1)

Spoke (6112) | about 10 months ago | (#43995207)

now planning to deploy a stack of S3700 and S3500 drives.

Yep, these are the only drives I'd recommend for enterprise use - or any other use where you want to be sure that losing power will not corrupt the data on the disk thanks to actual power-loss protection.

Intel's pricing with the S3500 places it very competitively in the market - even for desktop/laptop use I would have a hard time not recommending it over other drives unless you don't care about reliability and really need maximum random write performance or really need the lowest cost.

Re:Long-term, not short-term (0)

Anonymous Coward | about 10 months ago | (#43995343)

* That said, we do match drives to workloads - SLC or enterprise MLC for random write loads (InnoDB, MongoDB) and MLC for sequential write/random read loads (TokuDB, CouchDB, Cassandra).

What are some models of SLC drives? They seem to be rare, and I have a hard time finding them.

Re:Long-term, not short-term (1)

wvmarle (1070040) | about 10 months ago | (#43993147)

Will also depend greatly on your specific use case: whether it's lookups from a huge, mostly read-only database, or for use in a mail server which is constantly writing data as well. By my understanding at least it's the writes that wear out the SSD, not the reads.
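
A rough endurance estimate (Python; the capacity, P/E cycle rating, write amplification and daily write volumes are all illustrative assumptions) makes the write-wear point concrete:

    # Hypothetical drive: 400 GB of MLC flash rated for 3,000 program/erase
    # cycles, with a write-amplification factor of 2 for a mixed workload.
    capacity_gb = 400
    pe_cycles = 3000
    write_amplification = 2.0

    endurance_tb = capacity_gb * pe_cycles / write_amplification / 1000

    # A busy mail server writing 200 GB/day vs a read-mostly lookup database
    # writing 5 GB/day:
    for name, gb_per_day in (("mail server", 200), ("read-mostly DB", 5)):
        years = endurance_tb * 1000 / gb_per_day / 365
        print(f"{name}: ~{endurance_tb:.0f} TB of endurance lasts ~{years:.0f} years")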

Re:Long-term, not short-term (0)

Anonymous Coward | about 10 months ago | (#43993891)

The question is really going to be what kind of shape the drives will be in a year or so from now after 12+ months of constant heavy usage.

Just fine. EFDs have been front ending SAN storage as a 'high tier' for years now, being hit far harder than any single server could manage.

I've not seen anything like the attrition rate on them that I do on SAS / SATA.

Re:Long-term, not short-term (1)

silas_moeckel (234313) | about 10 months ago | (#43995109)

Enterprise SSDs have been out in production for half a decade. I have roughly 300 enterprise SSDs and more than a thousand consumer ones in servers, and no failures. We retired many of the early enterprise SSDs well before they were pushing their write limits as we aged out servers (3-5 year service life). The consumer ones acting as read cache for local and iSCSI disk do wonders.

Silver Bullet (4, Informative)

SQL Error (16383) | about 10 months ago | (#43992907)

We have hundreds of SSDs in production servers. We couldn't survive without them. For heavy database workloads, they are the silver bullet to I/O problems, so much so that running a database on regular disk has become almost unimaginable. Why would you even try to do that?

Re:Silver Bullet (0)

Anonymous Coward | about 10 months ago | (#43993031)

Are your DBs primarily read-only? I would think that they'd wear out quickly under heavy write usage.

Re:Silver Bullet (1)

SQL Error (16383) | about 10 months ago | (#43993097)

It's a mix. We use enterprise drives for the really heavy stuff, and mainstream drives for data that's either read-only, read-mostly, or is in a database that does sequential writes like TokuDB or Cassandra.

Re:Silver Bullet (3, Insightful)

AK Marc (707885) | about 10 months ago | (#43993561)

Write wear is a red herring. Unless you are overwriting the entire drive multiple times a day, you'll last longer with an SSD than with a spinning disk. And even then, the current generation will last longer than the early ones, and the early ones are lasting longer than predicted.

Re:Silver Bullet (0)

Anonymous Coward | about 10 months ago | (#43993795)

I call Bullshit.

We've gone back to spinning disk for heavy write applications, as SSD just doesn't last.
Its failure modes are also bad - you go from working -> fatally dead immediately. Spinning disk at least gives some warning.

For low writes - e.g. web serving - it's fine. For anything where you do more than moderate disk writes, forget it.
We've had *far* too many failures on different SSDs to even consider them. It's still uncharted territory.

Based on personal experience - roughly 60 SSDs of different brands tested. **100%** failure rate achieved, some within weeks; none lasted more than 6 months.

I wouldn't touch SSD with a bargepole UNLESS it's backed up elsewhere. For pure caching it's fine; for data storage, forget it.

Re:Silver Bullet (0)

h4rr4r (612664) | about 10 months ago | (#43994603)

Why does the failure mode matter?
You toss another one in the array when it fails. Rebuild goes mighty quick with SSDs.

You always have to have backups. SSDs do not change that.

Re:Silver Bullet (1)

SQL Error (16383) | about 10 months ago | (#43995013)

Because if write wear is the prime failure mode and you're running RAID, you're likely to lose multiple SSDs in a relatively short interval.

Near-line storage only: Has been for some years. (5, Informative)

MROD (101561) | about 10 months ago | (#43992913)

You have to remember that enterprise level storage isn't a single set of drives holding the data, it's a hierarchy of different technologies depending upon the speed of data access required. Since SSDs arrived they've been used at the highest access rate end of the spectrum, essentially using their low latency for caching filesystem metadata. I can see that now they are starting to replace the small, high speed drives at the front end entirely. However, it's going to be some time before they can even begin to replace the storage in the second tier and certainly not in the third tier storage where access time isn't an issue but reliable, "cheap" and large drives are required. Of course, beyond this tier you generally get on to massive robotic tape libraries anyway, so SSDs will never in the foreseeable future trickle down to here.

I ran a Minecraft GSP off them for a year. (0)

Anonymous Coward | about 10 months ago | (#43992915)

I paid extra to have two in each machine so I could RAID 1 them in case one died. Minecraft is write-intensive and we also had map generation, although the maps were written to magnetic disks because they are so huge. Before that year, we were running everything on magnetic disks on hardware RAID 1.

As soon as we switched, iowait went down to practically zero. System load followed. Map updates incurred high iowait on the magnetic disks, but had no impact on the SSDs or server performance, and they also finished a little faster.

I don't think we ever had an issue with an SSD going bad. We did lose a magnetic hard drive once. Because the Minecraft servers were on SSDs, we just reinstalled the OS on a new drive and mounted the existing SSD RAID. It went alright.

I would definitely use SSDs again if I was hosting something IO heavy, especially write heavy. They excel at it in a way that even a RAID10 of magnetic drives would be hard pressed to equal. You back up your data so if it fails, you're ready.

Re:I ran a Minecraft GSP off them for a year. (0)

Anonymous Coward | about 10 months ago | (#43994571)

Minecraft isn't particularly write-heavy... it just likes to fsync(). A LOT.

Perfect! (0)

Anonymous Coward | about 10 months ago | (#43992927)

Perfect! Now we just need to sit back and watch the price come down even further.
I am using Samsung (Pro) SSDs for all OS partitions and only use HDs for bulk storage nowadays.

SAS SSD (0)

Anonymous Coward | about 10 months ago | (#43992935)

Ok great but where can you find an affordable SAS SSD?

I really think that SSDs offer great value, but you must rethink your infrastructure and applications to work with them.

Re:SAS SSD (2)

TheRaven64 (641858) | about 10 months ago | (#43993413)

SAS doesn't really get you anything useful with an SSD. The extra chaining isn't that important, because it's easy to get enough SATA sockets to put one in each drive bay. There's no mSATA equivalent for denser storage, and if you really need the extra speed then why not go all the way and get something like FusionIO cards that hang directly off the PCIe bus?

enterprise class SSDs not the same (5, Interesting)

Anonymous Coward | about 10 months ago | (#43993023)

The enterprise class SSDs are not the same as the "consumer" ones: http://www.anandtech.com/print/6433/intel-ssd-dc-s3700-200gb-review [anandtech.com]

Don't be surprised if you stick a "consumer" grade one into a heavily loaded DB server and it dies a few months later.

Fine for random read-only loads.

And some consumer grade SSDs aren't even consumer grade (I'm looking at you OCZ: http://www.behardware.com/articles/881-7/components-returns-rates-7.html [behardware.com] ).

Price (4, Interesting)

asmkm22 (1902712) | about 10 months ago | (#43993029)

Pricing really needs to come down on these things. A single drive can easily cost as much as a server, and when you're talking about RAID setups, forget it. It's still much more effective to use magnetic drives and use aggressive memory caching for performance, if you really need that.

In another 3 to 5 years this idea might have more traction for companies that aren't Facebook or Google, but right now SSDs cost too much.

Re:Price (1)

Anonymous Coward | about 10 months ago | (#43993285)

When you take a look at total cost of ownership, it's not bad (perhaps even cheaper) for many applications.

An SSD is about two orders of magnitude lower latency than even the best high performance magnetic drive. Magnetic drives simply cannot compete with that, even in the most robust RAID setups. Magnetic media RAID setups can compete with single SSDs in sequential reads, but only by using many non-redundant drives.

For any application where sequential read performance is the bottleneck (say, a media server), a RAID array of magnetic drives is likely the most cost effective. For any application in which random read/write performance is the bottleneck (almost all database driven applications), there simply is no competing on a performance or cost/performance measure with SSDs. You *cannot* achieve the same level of sustained IOPs (no matter how you configure the storage) with magnetic media as with SSDs.

This, of course, is not to say there are no other concerns, such as amount of data, that may change the cost analysis.
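
To put rough numbers on the IOPS point (Python; both figures are typical ballpark values, not vendor specs):

    # Ballpark random-read IOPS: ~180 for a 15k rpm SAS disk, ~50,000 for a
    # mid-range SATA SSD.
    hdd_iops = 180
    ssd_iops = 50_000

    spindles_needed = -(-ssd_iops // hdd_iops)   # ceiling division
    print(f"~{spindles_needed} 15k spindles to match one SSD's random IOPS")
    # => ~278 spindles, before even counting rack space, power and cooling.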

Re:Price (1)

TheRaven64 (641858) | about 10 months ago | (#43993431)

Even for sequential reads, SSDs can be an improvement. My laptop's SSD can easily handle 200MB/s sequential reads, and you'd need more than one spinning disk to handle that. And a lot of things that seem like sequential reads at a high level turn out not to be. Netflix's streaming boxes, for example, sound like a poster child for sequential reads, but once you factor in the number of clients connected to each one, you end up with a large number of 1MB random reads, which means your IOPS numbers translate directly to throughput.

Spinning disks are still best where capacity is more important than access times. For example, hosting a lot of VMs where each one is typically accessing a small amount of live data (which can be cached in RAM or SSD) but has several GBs of inactive data.
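
The many-clients point above can be made with one line of arithmetic (Python; the request size, IOPS figure and per-stream bitrate are assumptions):

    # With hundreds of clients interleaving their streams, the box effectively
    # serves random 1 MB reads, so aggregate throughput is just IOPS x 1 MB.
    random_1mb_iops = 4000                  # assumed sustained 1 MB random reads/s
    throughput_mb_s = random_1mb_iops * 1   # ~4000 MB/s aggregate

    stream_mbit_s = 5                       # assumed per-client video bitrate
    streams = throughput_mb_s * 8 / stream_mbit_s
    print(f"{throughput_mb_s} MB/s, enough for ~{streams:.0f} concurrent streams")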

Hurrrrmmm.... (-1, Offtopic)

Anonymous Coward | about 10 months ago | (#43993049)

"Nearly all of these have been sold for ultrabooks, laptops and other mobile devices that can benefit from a combination of low energy use and high-powered performance."

"ultrabooks", really? So this is intel spam?

Single component failure not a big deal any more. (5, Informative)

12dec0de (26853) | about 10 months ago | (#43993051)

I think that the wide range adoption of server SSDs also shows how far server installations have progressed toward eliminating all single points of failure.

In the past, HA and 'five nines' were something only done in a few niches, like telephony provider switches or banking big iron. Today it is common in many cloud installations and most sizeable server setups. A single component failing will not stop your service.

If your business can support the extra cost for the SSDs, a failing drive will not stop you and the performance of the service will see great improvements anyway. The power savings may even make the SSD not so costly after all.

Re:Single component failure not a big deal any mor (1)

necro81 (917438) | about 10 months ago | (#43993757)

A single component failing will not stop your service

Correction: a single component failing should not stop your service, if you have done your job right (either in designing and building, or in finding a vendor to provide the service). But a single component failing can and still does ruin somebody's day on a regular basis.

Re:Single component failure not a big deal any mor (1)

antifoidulus (807088) | about 10 months ago | (#43994027)

I was actually curious about the power consumption, so I went poking around and found this [notebookreview.com] (sorry, I couldn't find the original article). The power consumption is markedly different... not sure it's enough to COMPLETELY offset the cost, but it certainly makes it easier to swallow.

And beyond SSD, the future is PCIe Flash (1)

snowtigger (204757) | about 10 months ago | (#43993091)

SSDs are slow in that they rely on old-school disk protocols like SATA. Sure, you'll get better performance than spinning disk. But if you want screaming-fast performance, you should look at flash devices connected through the PCIe bus.

Products from Fusion IO [fusionio.com] would be an example of this. Apple Mac Pro would be another: "Up to 2.5 times faster than the fastest SATA-based solid-state drive".

Re:And beyond SSD, the future is PCIe Flash (1)

Twinbee (767046) | about 10 months ago | (#43993181)

How about SATA 3? Is nearly a GB per second not good enough? Unless you're talking about latency....

Re:And beyond SSD, the future is PCIe Flash (1)

jones_supa (887896) | about 10 months ago | (#43993695)

SATA 3.0 is only 600 MB/s.
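
That 600 MB/s figure falls straight out of the link arithmetic (Python):

    # SATA 3.0 signals at 6 Gbit/s but uses 8b/10b encoding, so only 8 of
    # every 10 bits on the wire carry data.
    line_rate_mbit_s = 6000
    usable_mb_s = line_rate_mbit_s * (8 / 10) / 8
    print(usable_mb_s)   # 600.0 MB/s, before command overhead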

Re:And beyond SSD, the future is PCIe Flash (1)

Rockoon (1252108) | about 10 months ago | (#43995405)

Yes, and that's peak.

The year SATA 3 went into production, SSD designs were reconfigured to saturate it, and those Fusion-io drives saturate their PCIe lane bandwidth....

SATA 3 was and always will be shortsighted bullshit brought to you by a consortium of asshats intentionally trying to undercut feature demand in their desperate attempt to preserve the old guard.

Re:And beyond SSD, the future is PCIe Flash (4, Insightful)

wonkey_monkey (2592601) | about 10 months ago | (#43993191)

Up to 2.5 times faster

Ah, "up to." Marketing's best friend.

Re:And beyond SSD, the future is PCIe Flash (4, Funny)

cdrudge (68377) | about 10 months ago | (#43994015)

They have to say up to. Reads and writes towards the inside of the chip are slower than they are towards the outside of the chip. I don't think anyone makes a constant linear velocity SSD.

Re:And beyond SSD, the future is PCIe Flash (1)

rjstanford (69735) | about 10 months ago | (#43994845)

Really, though, it's linguistically equivalent to saying, "We promise that it won't be more than 2.5 times faster. Could even be slower - who knows - but it certainly isn't 3 times faster."

Re:And beyond SSD, the future is PCIe Flash (0)

Anonymous Coward | about 10 months ago | (#43993213)

We saved our company quite literally millions of pounds (GBP) using just a pair of off-the-shelf servers with FusionIO cards and DRBD. They took over from an existing NetApp which was completely saturated by the workload (and which laughed in the face of flash caching).

Re:And beyond SSD, the future is PCIe Flash (1)

shinzawai (964083) | about 10 months ago | (#43994929)

What did you use as transport between the two servers for DRBD traffic? 1GbE? 10GbE? InfiniBand?

Re:And beyond SSD, the future is PCIe Flash (1)

silas_moeckel (234313) | about 10 months ago | (#43995401)

PCIe-based flash is nice - I have more than a few in production. The downside is that hot-swap PCIe motherboards are extremely expensive, and getting more than 7 PCIe slots is also nearly impossible. I can get 10 or more 2.5" hot-swap bays on a 1RU server. I can get hardware RAID, even redundancy, with the right backplanes. I can connect up external chassis via SAS if I need more room (yeah, PCIe expansion chassis exist as well; they are funky to deal with at times). The use cases for needing extremely fast IO without redundancy exist, but they are a small subset.

Virtualisation (5, Interesting)

drsmithy (35869) | about 10 months ago | (#43993177)

This is being driven primarily by increasing levels of virtualisation, which turns everything into a largely random-write disk load, pretty much the worst case scenario for regular old hard disks.

Prices? (1)

Savage-Rabbit (308260) | about 10 months ago | (#43993249)

are SSDs mature enough (and cheap enough) to support business-sized workloads? Or are they still best suited for laptops and mobile devices?

I don't see maturity as a problem. If there is money to be made, drive manufacturers will throw enough engineering and computer science talent at the task of solving the teething troubles. What interests me is that if SSDs mount a major invasion of server rooms and data centers worldwide, it also means that we will finally start to see SSD pricing drop like a rock. Cheap, high-capacity external SSD drives - I can't wait. If we are lucky this will also popularize Thunderbolt with PC motherboard makers, since that's where you start seeing some real performance advantages, i.e. when the time it takes to make a backup of your laptop/desktop system to an external drive drops by half or more compared to USB 3.0.

What does the NSA use? (-1)

Anonymous Coward | about 10 months ago | (#43993295)

I imagine it's the 10TB+ half height cards we use. They usually have a chunk of flash to speed them up. A 46U rack, how many racks in a cabinet 8? 10?, a few thousand cabinets maybe in a huge data center like that. Surely that isn't classified?

That's at least the exabyte range for each data center, with all the current and future data centers they have it could even be approaching zettabytes.

So 1GB to 1TB of data for each man woman and child using the Internet.

Enough to store all their surfing logs, emails, messaging, login data, search data, phone logs, health data, insurance data, a lot of phone calls, pictures, financials, FB, all the data the IRS is collecting on them to no doubt:
http://money.msn.com/credit-rating/irs-tracks-your-digital-footprint

They have a $4 billion budget, say 10% on servers, overpriced hardware at $100/TB = 4 million terabytes, or 4 billion gigabytes, in other words 2 GB per year per person using the internet.

Does 2GB/Year sound like phone meta data to you??

Does the NSA have mod points? (0)

Anonymous Coward | about 10 months ago | (#43993839)

The NSA uses hard disk - bigger capacity; you don't store 2GB/yr of data per person in SSDs. They might have some SSD cache, but more likely it's RAM cache: since they have their own 150-megawatt power station, it's easier to hold things in RAM during data mining and ensure the power will stay on.

Disks for the bulk of the people, RAM cache for the influencers in the graph (people who originate ideas, are tagged by the 15000 cyber staff as potential targets).

If General Keith Alexander took off the limits on surveillance so it could be applied to the USA, I have no doubt he also took off the limits on propaganda too so we could get our share of NSA propaganda on Slashdot.

14000 Cyber soldiers, mean we have plenty here on Slashdot, a lot with mod points.
http://www.wired.com/threatlevel/2013/06/general-keith-alexander-cyberwar/all/

"Alexander’s agency has recruited thousands of computer experts, hackers, and engineering PhDs to expand US offensive capabilities in the digital realm. The Pentagon has requested $4.7 billion for “cyberspace operations,” even as the budget of the CIA and other intelligence agencies could fall by $4.4 billion."

"The forces under his command were now truly formidable—his untold thousands of NSA spies, as well as 14,000 incoming Cyber Command personnel, including Navy, Army, and Air Force troops. Helping Alexander organize and dominate this new arena would be his fellow plebes from West Point’s class of 1974: David Petraeus, the CIA director; and Martin Dempsey, chair of the Joint Chiefs of Staff."

"In May, work began on a $3.2 billion facility housed at Fort Meade in Maryland. Known as Site M, the 227-acre complex includes its own 150-megawatt power substation, 14 administrative buildings, 10 parking garages, and chiller and boiler plants. The server building will have 90,000 square feet of raised floor—handy for supercomputers—yet hold only 50 people. Meanwhile, the 531,000-square-foot operations center will house more than 1,300 people. In all, the buildings will have a footprint of 1.8 million square feet. Even more ambitious plans, known as Phase II and III, are on the drawing board. Stretching over the next 16 years, they would quadruple the footprint to 5.8 million square feet, enough for nearly 60 buildings and 40 parking garages, costing $5.2 billion and accommodating 11,000 more cyberwarriors."

Re: What does the NSA use? (0)

Anonymous Coward | about 10 months ago | (#43994131)

i see you haven't met my teenage daughter.

Re:What does the NSA use? (1)

bobbied (2522392) | about 10 months ago | (#43995533)

But the real *issue* here is being able to actually go through the data looking for information. Storage of this much data has been a fairly easy problem to solve if you have money; finding a way to organize and search through huge data sets to give timely results is not so easy even if you have money.

Buying spindles and connecting them in huge RAID arrays is well understood. You just build what size you need and dump your data onto it. Yea, you will have to battle OS size limits on partitions and files, but that's not too bad or very expensive. As you point out, getting the hardware isn't that expensive, even at the apparent scale involved here. Buying enough power to turn it all on and keep it cool shouldn't be an issue either, but you need to include that in the $4 Billion budget. In short, if you have money, getting the hardware off the shelf is easy. Software for this is NOT off the shelf.

The REAL money is going towards the software systems that mine the information being collected. There is no system configuration running MySQL that's going to be able to support ongoing data collection (inserts) and any kind of meaningful query results on a petabyte sized data base. I'm guessing that half their budget goes to research and development of software and systems used to collect, store and mine the data. I'm also guessing that they spend roughly 40% of their hardware budget on processing, 40% on storage and 20% on maintenance and operating costs. This puts their hardware budget ($4 Billion * 50%) * 40% or about $1 Billion, give or take.

This means that your 2 Gig turns into about 1/4th that, not accounting for the space being thrown away because it is obsolete. I'm guessing there really isn't that much being kept around on folks who are not interesting, however that is defined.

Any experiences on Hybrid RAID-1? (1)

schweini (607711) | about 10 months ago | (#43993347)

What a coincidence! I am getting ready to transition our main DB servers (a couple of GB of MySQL data) to SSD, but I simply don't want to trust it that much yet. So my plan is to set up RAID-1 with an SSD and a conventional drive. There seems to be this "--write-mostly" option that tells Linux to preferentially read from the SSD. Anybody know if this is worth it? If it works? What kind of random access performance gains can I look forward to, running MySQL on SSD? I found it surprisingly hard to find any good data on these subjects.
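
For reference, a minimal sketch of that setup (Python wrapping the standard mdadm tool on Linux; device names are hypothetical, and the array would still need a filesystem and an fstab entry afterwards):

    import subprocess

    SSD = "/dev/sdb1"   # fast member, serves reads
    HDD = "/dev/sdc1"   # marked write-mostly, only read if the SSD is unavailable

    # Create a two-device RAID-1 where Linux md prefers the SSD for reads.
    subprocess.run([
        "mdadm", "--create", "/dev/md0",
        "--level=1", "--raid-devices=2",
        SSD, "--write-mostly", HDD,
    ], check=True)

    # Writes still hit both members, so write throughput is bounded by the
    # HDD; the win is on random reads, which is most of what MySQL issues.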

Re:Any experiences on Hybrid RAID-1? (1)

jaseuk (217780) | about 10 months ago | (#43993377)

I'm using that setup. I'm using a cheap but high-capacity OCZ drive (960GB), with a software RAID 1 mirror to a SATA replacement. I'm running this on Windows, which crucially always uses the FIRST drive for reads. So reads are at SSD speeds, writes are at SAS speeds.

It's working well enough. I've not benchmarked this. We have had one drive failure; I suggest keeping one cold spare on hand. Delivery times on SSDs are pretty variable, and you won't want your entire DB running on a SAS drive for too long.

Jason.

Re:Any experiences on Hybrid RAID-1? (1)

drsmithy (35869) | about 10 months ago | (#43993625)

Your writes will be limited to the speed of the conventional drive, so if your workload is mostly reads, then you will see a significant benefit.
Though, if your workload is mostly reads, you'd probably see the same benefit for a lot less $$$ by putting more RAM in your server...

Re:Any experiences on Hybrid RAID-1? (1)

rjstanford (69735) | about 10 months ago | (#43994927)

That's what we ended up doing with our databases - did a bunch of comparisons and ended up sticking to 15K disks and maxing out RAM instead. Even at Rackspace prices we came out ahead on price/performance.

Re:Any experiences on Hybrid RAID-1? (1)

silas_moeckel (234313) | about 10 months ago | (#43995577)

With a couple of GB of data, just put it in RAM - you can get to 128GB cost-effectively, and if you're read-heavy you will end up with everything cached. If you're write-heavy, just go all SSD; it's night and day - a single SSD pair easily outperforms a whole shelf of 15k drives.

As early adopters... (0)

Anonymous Coward | about 10 months ago | (#43993369)

We've been using SSDs in our data centers for quite a while now, specifically to store our high-volume I/O databases. It's pretty much an indispensable technology when your system depends on speed. Think communications and financial transactions. Delays can result in a decreased user experience in the case of communications, or arbitrage opportunities for those who do it faster in the case of financial transactions.

Specific applications now, everything later (2)

crucial_hendo (2950521) | about 10 months ago | (#43993611)

I work for an Australian hosting company and we have deployed the SolidFire all-SSD SAN for our cloud-based hosting (shared, reseller, cloud/virtual server). The major benefits of an all-SSD storage solution speak for themselves: far lower I/O wait time and huge IOPS numbers - in SolidFire's case 250,000+ distributed IOPS in our current configuration. We've recently shifted from the HP SAS-based LeftHand SAN, which offered up to 15,000 IOPS, to the new SolidFire all-SSD SAN; the team behind SolidFire is partly from HP's LeftHand operation, so there's some good know-how there.

The article is quite broad in its content. For big-data applications SSD SAN storage is still incredibly more expensive ($/GB) than SATA- or SAS-based SANs - our SolidFire was a huge investment. Many hosting providers are now switching to all-SSD servers for the performance benefits, however the drawback is primarily total storage capacity. For example, a typical VPS node using local storage with 10 x SATA drives can get up to 4TB of usable RAID-protected storage; the numbers for an all-SSD node in a RAID configuration would be much lower in capacity and suitably higher in cost.

It's important to note that many people view SSDs as desktop-only hardware, which is fundamentally incorrect, as there are many units out there that offer write longevity much longer than needed (5-10+ years). For many server-based applications (not big-data purposes), SSDs are, and will become, the predominant choice among many hosting companies. Not every provider can afford the investment of an enterprise-grade SAN, however the speed of development from Intel and Samsung will mean the $/GB will drop steeply and disk sizes will increase exponentially (like what we've seen with SATA in the past 5 years).

My company's experience (1)

necro81 (917438) | about 10 months ago | (#43993793)

At my company, we have gradually been moving away from spinning disks in favor of SSDs. My company does a lot of R&D work, so we have a lot of people doing CAD, simulation, number crunching, etc. For those users, our IT department hasn't built a machine with spinning media in over two years: the performance boost from SSD is outstanding, and the local storage needs are pretty modest. On the back end, our backup solution (daily incremental backups of everyone's machine, hourly for the network storage) uses a cabinet of HDDs, with a RAID of SSDs that contain the backup database / index. (there are tape backups in the mix, too, with offsite storage, but I forget the details). I'm sure if they could afford to create a 100 TB array of SSDs and do away with the spinning discs entirely, they would.

Re: My company's experience (0)

Anonymous Coward | about 10 months ago | (#43994299)

The entire article is about the cost savings of SSDs. How could you afford not to switch? I wonder if bcache is an efficient low-end solution. Of course, when absolute performance is required, customers pay, which means SSDs will only be around long enough to transition from disk-based infrastructure to everything in memory (i.e. flash).
What we really need is bigger batteries and more generators.

Re: My company's experience (0)

Anonymous Coward | about 10 months ago | (#43995371)

If you want just raw storage with 'low' io then a normal HD is the way to go. If you need good io SSD is the way to go.

At this point, for raw storage (think 20-40TB+) with modest retrieval requirements, normal HDs still curb-stomp the cost of SSDs.

You will see an inflection point when SSD's of 1-1.5TB become common at a reasonable price. You will see huge swaths of HD's retired and replaced with SSD.

I use a similar bcache sort of thing on my own laptop (the intel windows flavor, I have 1TB worth of software I want at hand quickly). It works 'ok'. But not like a real SSD.

If I could get 1TB for under 250 bucks I would buy them in a heartbeat. Right now you are looking at the 1-4k range for 1TB. If you need 100TB of HD that is nearly 100k-500k depending on what you buy.

I figure about 3 years from now you will be hard pressed to buy a 1TB normal HD.

Also normal HD capacity is not slowing much. They are talking 10TB in one drive within 5 years. SSD will be hard pressed to keep up with that sort of density. The per cell write limit has consistently gone down every generation. It is not uncommon now to see cells with 3k re-writes when 10 years ago it was 100k.

"sd will only be around long enough to transition from disk based infrastructure to everything in memory" that works for transactional data that is short lived. But if you want to keep it long term you must write it out somewhere. There will be a power failure. Plan on it.

Of course business adoption is small (1)

neokushan (932374) | about 10 months ago | (#43993907)

I recently was given the task of upgrading my development machine. We're a small company but management is happy to spend money on hardware if we need it.

I decided I'd prefer an SSD and yet when I looked at the big suppliers of office machines - Dell, HP, etc. none of them even offered SSD's as an option. SSD's only came into it when you started looking at the really high-end, £2,000+ workstations but there's no reason why this should be the case.

In the end, I just custom built the machine as it was the only way to get the hardware I needed without having to fork out for workstation graphics (which I didn't need).

Re:Of course business adoption is small (1)

h4rr4r (612664) | about 10 months ago | (#43994623)

We just buy a normal dell and toss the drive out when it arrives. Installing a hard drive is not difficult and you get to keep the NBD warranty on the rest of the machine.

Re:Of course business adoption is small (1)

neokushan (932374) | about 10 months ago | (#43994741)

I would agree with that, but the cost Dell was charging was higher than what I could pay for a custom built option with the same (or in fact, better) specs.

Re:Of course business adoption is small (1)

h4rr4r (612664) | about 10 months ago | (#43994765)

That makes sense.
We also do not buy one off machines for devs or really anyone. We just upgrade one of the hundreds of desktops we buy at a time.

Million Dollar SSD's (0)

Anonymous Coward | about 10 months ago | (#43993941)

We have spent over a million on a SSD setup for a single host. I'm talking about IBM's V7000's behind IBM SVC. And this is considered Mid-Range, cheaper stuff.

Thing is, when you move from several hundred spinning disks to several dozen SSDs, it's just not that impressive.

It does the job sure, but it's not Eureka!

It depends... (0)

Anonymous Coward | about 10 months ago | (#43994007)

The main concern I see (from a storage *network* perspective) is the cost of flash along with write issues (limited write cycles and speed deteriorating on an erase/write), but there are a few companies attempting to lessen the impact of those problems. PureStorage and EMC's XtremeIO attempt to increase efficiency and minimize write-wearing by using inline deduplication with a layer of cache at the controller level. One or both of them also write the data down in a RAID-3 type fashion. Save an entire stripe in cache, then lay it down on the flash to ensure you're not having to go back and fill in spaces, possibly doing erase/writes. Now...I haven't gotten to fully test one, but I'd like to see what happens when you start deleting and fragmenting data. That one should be interesting.

From an internal storage perspective, FusionIO is magnificent (there are others, but I'm not too concerned), though there are a few sticking points that mess with me. Transient databases I've seen thrown on it blaze through like nothing I've ever seen, but if you want any data replicated (to my knowledge, correct me if I'm wrong) you have to do operating-system or database-level replication. You have a blisteringly fast storage device that finally brings the storage up to speed with the new CPU, and, if you replicate, you start eating into the CPU cycles you'll need to push that storage to the limit.

More Common Than You Think... (4, Interesting)

Whatchamacallit (21721) | about 10 months ago | (#43994293)

SSDs might not be used as primary storage yet; the cost of using a lot of SSDs in a SAN is still too high. However, that doesn't mean that SSD technology is not being used. Many systems started using SSDs as read/write caches or high-speed buffers, etc. The PCIe SSD cards are popular in high-end servers. This is one way that Oracle manages to blow away the competition when benchmarks are compared: they put PCIe SSD cards into their servers and use them to run their enterprise database at lightning speed! ZFS can use SSDs as read/write caches, although you had better battery-back the write cache!

Depending on a particular solution, a limited number of SSD's in a smaller NAS/iSCSI RAID setup can make sense for something that needs some extra OOMF! But I don't yet see large scale replacement of traditional spinning rust drives with SSD's yet. In many cases, SSD's only make sense for highly active arrays where reads and writes are very heavy. Lots of storage sits idle and isn't being pounded that hard.

"business-sized" (0)

Anonymous Coward | about 10 months ago | (#43994433)

it does raise a question: are SSDs mature enough (and cheap enough) to support business-sized workloads?

Yes and no. It's kind of a stupid question, because "business-sized" is undefined. Different people are doing different things. Some of them need big storage, some need less.

Hot/Crazy (1)

bill_mcgonigle (4333) | about 10 months ago | (#43994469)

Two years on and this is still relevant: The Hot/Crazy Solid State Drive Scale [codinghorror.com].

I love SSD's in servers and they don't burn me because I always expect them to fail. Sure, one MLC SSD is fine for a ZFS L2ARC, because if it fails reads just slow down, but for a ZFS ZIL, that gets a mirror of SLC drives, because a failure is going to be catastrophic.
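
For anyone who hasn't set this up, a minimal sketch of the arrangement described above (Python wrapping the standard zpool commands; the pool name and device paths are hypothetical):

    import subprocess

    POOL = "tank"   # hypothetical pool name

    # Single MLC SSD as L2ARC: a failure only costs read-cache hits.
    subprocess.run(
        ["zpool", "add", POOL, "cache", "/dev/disk/by-id/mlc-ssd0"],
        check=True)

    # Mirrored SLC SSDs as the ZIL/SLOG: an unmirrored log device that dies
    # could take recent synchronous writes with it, hence the mirror.
    subprocess.run(
        ["zpool", "add", POOL, "log", "mirror",
         "/dev/disk/by-id/slc-ssd0", "/dev/disk/by-id/slc-ssd1"],
        check=True)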

If I'm using Facebook's FlashCache, two drives get mirrored by linux md and treated as a cache device and smartd lets me know when one of them goes TU. Another advantage here is linux md is hot-replaceable while pure FlashCache isn't. I just got linux 3.9 on my first server this weekend (thanks, ElRepo) and haven't yet tried Redhat's dm-cache, but the same logic ought to apply; it's only the abstraction and syntax that differs.

Yeah, there's a write penalty with mirrors, but SSD mirror writes are way faster than the best pure spinning-rust RAID (no rotational latency), so it's way better than other options.

Clustering can help too. The other strategy is to make nothing redundant except for a massive cluster of servers. I haven't benchmarked the two strategies (I tend to work with smaller clusters in small businesses) but tech time isn't free either. I suspect at Megalocorp scales where there are several people whose job it is to replace failing disks all day (this is a real thing), going redundant on the compute node scale is a better option. My systems tend to be remote in far-away data centers and nobody wants to have to touch them more than every few months.
