SSDs: The New King of the Data Center?

Nerval's Lobster writes "Flash storage is more common on mobile devices than data-center hardware, but that could soon change. The industry has seen increasing sales of solid-state drives (SSDs) as a replacement for traditional hard drives, according to IHS iSuppli Research. Nearly all of these have been sold for ultrabooks, laptops and other mobile devices that can benefit from a combination of low energy use and high-powered performance. Despite that, businesses have lagged the consumer market in adoption of SSDs, largely due to the format's comparatively small size, high cost and the concerns of datacenter managers about long-term stability and comparatively high failure rates. But that's changing quickly, according to market researchers IDC and Gartner: Datacenter- and enterprise-storage managers are buying SSDs in greater numbers for both server-attached storage and mainstream storage infrastructure, according to studies both research firms published in April. That doesn't mean SSDs will oust hard drives and replace them directly in existing systems, but it does raise a question: are SSDs mature enough (and cheap enough) to support business-sized workloads? Or are they still best suited for laptops and mobile devices?"
This discussion has been archived. No new comments can be posted.

  • by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Thursday June 13, 2013 @03:20AM (#43992893)
    This blog article's very relevant: http://techblog.netflix.com/2012/07/benchmarking-high-performance-io-with.html [netflix.com]

    TL/DR: "The relative cost of the two configurations shows that over-all there are cost savings using the SSD instances"

    at least for their use-case (Cassandra).

    At work we also use SSDs for a couple-terabyte Lucene index with great success (and far cheaper than spreading a couple TB of DRAM across the servers instead).

    • by wvmarle ( 1070040 ) on Thursday June 13, 2013 @04:23AM (#43993133)

      So you're replacing RAM with SSD, not HD with SSD. Interesting.

      And would you even be able to do this with DRAM modules? Normal PC motherboards don't support that.

      • by SQL Error ( 16383 ) on Thursday June 13, 2013 @04:58AM (#43993307)

        You can build a 48-core Opteron server with 512GB of RAM for under $8000. Going over 512GB in a single server gets a lot more expensive (you either need expensive high-density modules or expensive 8-socket servers - or both) but if you can run some sort of cluster that's not a problem.

      • by ron_ivi ( 607351 )

        And would you even be able to do this with DRAM modules? Normal PC motherboards don't support that.

        Even low-end (dual-CPU 2U) servers these days support 192GB or 256GB [asacomputers.com]. It's not that hard or expensive to get four 256GB or six 192GB servers.

        But as that link to Netflix's blog points out - SSDs can have better price/performance than DRAM at the moment if you need a lot.

    • How does that make sense? Sure, SSD is very similar to RAM physically, but it is still like a thousand times slower, is it not?

      • I don't understand the confusion, maybe a car analogy will help.

        John Smith is switching from a mid-'90s Civic to a new Mustang to reduce his merge time. This represents a huge savings over buying a Porsche. Make sense now?

        • Maybe if he were using a golf cart instead of a Porsche it would be a better analogy.

        • I guess, but he's only going from 7.5 seconds (1995 Civic Si) 0-60 to about 6.8 seconds (2013 Mustang V6 automatic), so only about a 10% improvement. I think the overall improvement from an HDD to an SSD is significantly more than that. Now if you said a mid-'90s Civic LX to a new Mustang GT, you might have a better point.

          • by lgw ( 121541 )

            A V6 Mustang is like the Matrix sequels or Star Wars prequels - enthusiasts know they don't actually exist.

      • How does that make sense?

        As the link to Netflix pointed out -- they benchmarked the entire system with the same REST API in front.

        They configured one cluster of SSD-based servers and another cluster of spinning-disk-plus-large-RAM servers. It took a cluster of 15 SSD-backed servers to match the throughput of 84 RAM-plus-spinning-disk servers. With throughput matched, the SSD-based cluster provided better latency at lower cost.

        TL/DR: "Same Throughput, Lower Latency, Half Cost".

      • by ron_ivi ( 607351 )

        but it is still like a thousand times slower, is it not?

        Yes, but it's still like 5-500x faster than spinning disks too (obviously depending on whether you're talking sequential I/O or random access).

      • Yeah. When we're talking RAM, we are talking modern interfaces such as DDR (now DDR2 or DDR3), whereas the NAND flash used here uses page-mode reads and writes. Not to mention that the internal write overheads, which exist on flash but not on RAM, would automatically slow down the process even if the same interface were used (compare SRAM with NOR flash as a reference point).

        I think what's contributing to the confusion is SSDs being available not just in SATA interfaces, but now, in PCI-X inte

  • 20x faster (Score:3, Informative)

    by drabbih ( 820707 ) on Thursday June 13, 2013 @03:20AM (#43992895)
    By switching to SSDs on a data-intensive web application, I got a 20x speed improvement - from 20 hits per second to 400. I trust SSDs more than physical spindles any day.
    • Re: (Score:3, Insightful)

      by donaldm ( 919619 )

      By switching to SSDs on a data-intensive web application, I got a 20x speed improvement - from 20 hits per second to 400. I trust SSDs more than physical spindles any day.

      When designing storage for any business or enterprise, the disks (solid state or spinning) should always be in some sort of RAID configuration that provides disk redundancy. Failure to do this could result in loss of data when a disk eventually fails - and it will. I am often asked "How long?" and my answer is "How long is a peace of string".

      At the moment SSDs are excellent when you need high I/O from a few disks up to, say, a few TB; however, if you look at enterprise storage solutions of 10s or even 1000s

      • Re:20x faster (Score:5, Insightful)

        by Twinbee ( 767046 ) on Thursday June 13, 2013 @04:45AM (#43993251)

        and my answer is "How long is a piece of string".

        Sorry, that phrase always strikes a nerve with me. More useful answers would include an average, or even better, a graph detailing the death rate of SSDs (and how they tend to die early if they do die, but tend to last if they get past that initial phase).

      • by Culture20 ( 968837 ) on Thursday June 13, 2013 @06:05AM (#43993549)

        "How long is a peace of string"

        I have never known string to break a cease-fire.

      • by drsmithy ( 35869 )

        At the moment SSDs are excellent when you need high I/O from a few disks up to, say, a few TB; however, if you look at enterprise storage solutions of 10s or even 1000s of TBytes you are still looking at spinning media with large cache front ends (BTW, I am talking about storage area networks from $20k up to many millions of dollars).
        Well, what you're usually looking at is a storage system with multiple types and speeds of disks that automatically moves data through the tiers depending on the frequency and type of

      • by dbIII ( 701233 )

        "How long is a peace of string"

        About the same as a "Concordat of Worms".

    • I know some hosting companies that have been all-SSD for years; this article is no surprise given how much data is flung around in the cloud.
    • I trust SSDs more than physical spindles any day.

      Based on what evidence? Where is your data? Faster != more reliable. Spindle-based hard drives are (usually) quite reliable, and there is plenty of real-world usage data documenting exactly how reliable they are. Companies with big data centers like Google have extremely detailed reliability figures. SSDs have a lot of advantages, but they have only recently started receiving wide distribution, and to date they have poor market penetration in data centers, where it is easiest to measure their r

      • by jedidiah ( 1196 )

        I have had hard disks last for 7 years. I have some now that are about 5 years old. When I can say that about an SSD, I will have more trust in them. Until then, trust is really unwarranted. Without some actual experience (yours or someone else's), you are really just engaging in a leap of "faith".

        • by jon3k ( 691256 )
          That's probably too small a sample to draw any reliable conclusion from, don't you think? Even if you had one SSD that lasted 20 years, does that really tell us anything, statistically?

          For what it's worth, I bought my first SSD, a 30GB OCZ Vertex SSD (original version), on 6/21/2009 (I just logged into Newegg and checked), and it's still going strong without a single problem. It's since been "demoted" to my HTPC in the living room, which has been great because the bootup is very "appliance-like" and it's com
      • by jon3k ( 691256 )
        There are lots of very large installations using pure SSD (MySpace went all-SSD in 2009, for example). However, no one seems to be making the data available. And one reason it wouldn't help is that the lifespan of an SSD is incredibly dependent on the workload, unlike traditional disks. If your workload is 99% reads and 1% writes, your failure rate would be exceptionally low. But if my workload were 50/50 reads/writes, my failure rate COULD BE substantially higher than yours.
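
        A rough way to see how strongly the write mix drives this is a back-of-envelope endurance estimate. The endurance rating, write volumes and write-amplification factor below are hypothetical placeholders, not vendor specs:

            # Back-of-envelope SSD lifetime estimate (every input is hypothetical).
            def lifetime_years(rated_endurance_tbw, host_writes_tb_per_day, write_amplification=2.0):
                """Rated endurance divided by the NAND writes generated per year."""
                nand_tb_per_day = host_writes_tb_per_day * write_amplification
                return rated_endurance_tbw / (nand_tb_per_day * 365)

            # The same hypothetical 400 TBW drive under two workloads:
            print(lifetime_years(400, host_writes_tb_per_day=0.05))  # read-heavy: roughly a decade
            print(lifetime_years(400, host_writes_tb_per_day=2.0))   # write-heavy: a few months
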
  • by Todd Knarr ( 15451 ) on Thursday June 13, 2013 @03:21AM (#43992903) Homepage

    The question is really going to be what kind of shape the drives will be in a year or so from now, after 12+ months of constant heavy usage. The usage profile in consumer computers is a lot different from that in a server, and the server workload is going to stress more of the weakest areas of SSDs. And when it comes to manufacturer or lab test results, a simple rule applies: "The absolute worst-case conditions achievable in the lab won't begin to approximate normal operating conditions in the field." So, while SSDs are definitely worth looking at, I'll let someone else do the 24-36 month real-workload stress testing on them. There's a reason they call it the bleeding edge, after all.

    • by SQL Error ( 16383 ) on Thursday June 13, 2013 @04:07AM (#43993061)

      We've been using SSDs in our servers since late 2008, starting with Fusion-io ioDrives and Intel drives since then - X25-E and X25-M, then 320, 520 and 710, and now planning to deploy a stack of S3700 and S3500 drives. Our main cluster of 10 servers has 24 SSDs each, we have another 40 drives on a dedicated search server, and smaller numbers elsewhere.

      What we've found:

      * Read performance is consistently brilliant. There's simply no going back.
      * Random write performance on the 710 series is not great (compared to the SLC-based X25-E or ioDrives), and sustained random write performance on the mainstream drives isn't great either, but a single drive can still outperform a RAID-10 array of 15k rpm disks. The S3700 looks much better, but we haven't deployed them yet.
      * SSDs can and do die without warning. One moment 100% good, next moment completely non-functional. Always use RAID if you love your data. (1, 10, 5, or 6, depending on your application.)
      * Unlike disks, RAID-5 or 50 works pretty well for database workloads.
      * We have noted the leading edge of the bathtub curve (infant mortality), but so far, no trailing edge as older drives start to wear out. Once in place, they just keep humming along.
      * That said, we do match drives to workloads - SLC or enterprise MLC for random write loads (InnoDB, MongoDB) and MLC for sequential write/random read loads (TokuDB, CouchDB, Cassandra).

      • by 0ld_d0g ( 923931 )
        Do you happen to know the failure rate off hand? Also did you do any research into which manufacturer has the least failure rate before deciding on the brand?
        • Not off hand, sorry. I haven't been the sysadmin for 18 months (moved back to programming), and I don't want to give a guess that might be off by a factor of two.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        If you do RAID5 or RAID6, you should match your RAID chunk size exactly to the write block size of the SSD. If you do not, then you will generally need two writes to each SSD for every actual write performed. This reduces the lifetime of the SSD and reduces efficiency. Most RAID controllers have no way of doing this automatically, and it is not easy to learn what the write block size of an SSD is (it is not generally part of the information published for the drive).
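
        A small sketch of the arithmetic behind that advice: pick a chunk size that is a multiple of the SSD's internal write block, then derive the matching filesystem stripe settings. The chunk size, NAND page size and disk count below are hypothetical, since real page and erase-block sizes usually have to come from the vendor:

            # Alignment arithmetic for an md RAID-5 of SSDs (all sizes hypothetical).
            CHUNK_KB = 128         # mdadm --chunk value
            FS_BLOCK_KB = 4        # ext4 block size
            NAND_PAGE_KB = 16      # SSD internal write block (often not published)
            DISKS, PARITY = 6, 1   # RAID-5: one disk's worth of parity

            assert CHUNK_KB % NAND_PAGE_KB == 0, "chunk should be a multiple of the NAND page"

            stride = CHUNK_KB // FS_BLOCK_KB              # filesystem blocks per chunk
            stripe_width = stride * (DISKS - PARITY)      # filesystem blocks per full data stripe
            print(f"mkfs.ext4 -E stride={stride},stripe-width={stripe_width} /dev/md0")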

      • I did my first write-heavy deployment of PostgreSQL on Intel DC S3700 drives about a month ago, with each one of them replacing two Intel 710 drives. The write performance is at least doubled--the server is more than keeping up even with half the number of drives--and in some cases they easily look as much as 4X faster than the 710s. I've been able to get the 710 drives to degrade to pretty miserable read performance on mixed read/write workloads too, as low as 20MB/s, but the DC S3700 drives don't seem t

      • by Spoke ( 6112 )

        now planning to deploy a stack of S3700 and S3500 drives.

        Yep, these are the only drives I'd recommend for enterprise use - or any other use where you want to be sure that losing power will not corrupt the data on the disk thanks to actual power-loss protection.

        Intel's pricing with the S3500 places it very competitively in the market - even for desktop/laptop use I would have a hard time not recommending it over other drives unless you don't care about reliability and really need maximum random write performance or really need the lowest cost.

      • by AcquaCow ( 56720 )

        Have you looked at the price point of the ioScale cards?

      • by jon3k ( 691256 )
        When you say it outperforms a RAID10 array of 15K RPM disks - how MANY disks? 4? 100?
        • by jon3k ( 691256 )
          Also - thanks for the info, very interesting and honestly what I would have suspected. Nice to see it play out in the real world.
    • It will also depend greatly on your specific use case: whether it's lookups from a huge, mostly read-only database, or a mail server that is constantly writing data as well. By my understanding, at least, it's the writes that wear out an SSD, not the reads.

    • Enterprise SSDs have been in production for half a decade. I have roughly 300 enterprise SSDs and more than a thousand consumer ones in servers, with no failures. We retired many of the early enterprise SSDs well before they were pushing their write limits as we aged out servers (3-5 years service life). Using the consumer ones as read cache for local and iSCSI disks does wonders.

  • Silver Bullet (Score:5, Informative)

    by SQL Error ( 16383 ) on Thursday June 13, 2013 @03:21AM (#43992907)

    We have hundreds of SSDs in production servers. We couldn't survive without them. For heavy database workloads, they are the silver bullet to I/O problems, so much so that running a database on regular disk has become almost unimaginable. Why would you even try to do that?

    • by dbIII ( 701233 )
      I found write performance hit a huge wall once the things started filling up: it went from perfect to kB/s in an instant and then got stuck at that speed, and of course, since an erase is itself a write, recovery from that state took ages. The answer, I suppose, is to not let them get anywhere near full - where that point is will undoubtedly vary by model based on their internal controllers. I can't recall where it fell over, but I think it was still under 90% with one set of SSDs.
      I replaced them with spinning storage and pe
      • Depends a lot on the drive, but that can be a problem. The best solution is to either buy a drive with a significant amount of over-provisioning built in (like the Intel S3700 or Seagate 600 Pro) or over-provision it yourself. That means that when it fills up it still has plenty of spare area to remap blocks.

        Enterprise drives typically have at least 20% over-provisioning; consumer drives can be 5% or less. A 400GB Seagate 600 Pro is the same as a 480GB Seagate 600, except for that setting.
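
        A minimal sketch of the do-it-yourself over-provisioning mentioned above: secure-erase (or blkdiscard) the drive first so the spare area is actually free, then partition only part of it and leave the rest untouched for the controller. The capacity and the 20% target are illustrative:

            # How much of a drive to partition if you want ~20% spare area (numbers are hypothetical).
            def partition_size_gb(drive_gb, spare_fraction=0.20):
                """Capacity to actually partition; the remainder stays unallocated as spare area."""
                return drive_gb * (1.0 - spare_fraction)

            print(partition_size_gb(480))   # partition ~384 GB of a 480 GB drive, leave ~96 GB spare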

  • by MROD ( 101561 ) on Thursday June 13, 2013 @03:25AM (#43992913) Homepage
    You have to remember that enterprise-level storage isn't a single set of drives holding the data; it's a hierarchy of different technologies depending upon the speed of data access required. Since SSDs arrived they've been used at the highest-access-rate end of the spectrum, essentially using their low latency for caching filesystem metadata. I can see that now they are starting to replace the small, high-speed drives at the front end entirely. However, it's going to be some time before they can even begin to replace the storage in the second tier, let alone the third tier, where access time isn't an issue but reliable, "cheap," large drives are required. Of course, beyond this tier you generally get on to massive robotic tape libraries anyway, so SSDs will not trickle down that far in the foreseeable future. (A toy sketch of the tiering idea follows this thread.)
    • by jon3k ( 691256 )
      This guy basically nails it. SSD will slowly work its way out from "Tier 0", and eventually - in 20 or 30 years - even our near-line storage may well be SSD.
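
      A toy sketch of the tiering idea referred to above - place data on a tier according to how hot it is, with SSD only at the top. The thresholds and tier names are invented purely for illustration:

          # Toy tier selection by access frequency (thresholds and names are made up).
          def choose_tier(reads_per_day: int) -> str:
              if reads_per_day > 1000:
                  return "tier0-ssd"
              if reads_per_day > 10:
                  return "tier1-15k-sas"
              return "tier2-nearline-sata"

          for name, heat in {"hot-index.bin": 50000, "old-logs.tgz": 2}.items():
              print(name, "->", choose_tier(heat))
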
  • by Anonymous Coward on Thursday June 13, 2013 @03:59AM (#43993023)

    The enterprise class SSDs are not the same as the "consumer" ones: http://www.anandtech.com/print/6433/intel-ssd-dc-s3700-200gb-review [anandtech.com]

    Don't be surprised if you stick a "consumer" grade one into a heavily loaded DB server and it dies a few months later.

    Fine for random read-only loads.

    And some consumer grade SSDs aren't even consumer grade (I'm looking at you OCZ: http://www.behardware.com/articles/881-7/components-returns-rates-7.html [behardware.com] ).

  • Price (Score:5, Interesting)

    by asmkm22 ( 1902712 ) on Thursday June 13, 2013 @04:01AM (#43993029)

    Pricing really needs to come down on these things. A single drive can easily cost as much as a server, and when you're talking about RAID setups, forget it. It's still much more effective to use magnetic drives and use aggressive memory caching for performance, if you really need that.

    In another 3 to 5 years this idea might have more traction for companies that aren't Facebook or Google, but right now, SSDs cost too much.

  • by 12dec0de ( 26853 ) on Thursday June 13, 2013 @04:05AM (#43993051) Homepage

    I think that the widespread adoption of server SSDs also shows how far server installations have progressed toward eliminating all single points of failure.

    In the past, HA and 'five nines' were something only a few niches did, like telephony provider switches or banking big iron. Today it is common in many cloud installations and most sizeable server setups. A single component failing will not stop your service.

    If your business can support the extra cost for the SSDs, a failing drive will not stop you and the performance of the service will see great improvements anyway. The power savings may even make the SSD not so costly after all.

    • by necro81 ( 917438 )

      A single component failing will not stop your service

      Correction: a single component failing should not stop your service, if you have done your job right (either in designing and building, or in finding a vendor to provide the service). But having a single component failing can and still does ruin somebody's day on a regular basis.

    • I was actually curious about the power consumption, so I went poking around and found this [notebookreview.com] (sorry, I couldn't find the original article). The power consumption is markedly different... not sure it's enough to COMPLETELY offset the cost, but it certainly makes it easier to swallow.
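
      The usual back-of-envelope check on how far the power difference goes toward the price gap; the wattages, electricity price and duty cycle below are hypothetical placeholders, not figures from the linked article:

          # Yearly power savings from swapping one HDD for one SSD (all inputs hypothetical).
          HDD_WATTS, SSD_WATTS = 8.0, 2.0      # per-drive power draw
          USD_PER_KWH = 0.12
          HOURS_PER_YEAR = 24 * 365

          saved_kwh = (HDD_WATTS - SSD_WATTS) * HOURS_PER_YEAR / 1000.0
          print(f"~{saved_kwh:.0f} kWh and ~${saved_kwh * USD_PER_KWH:.2f} saved per drive per year")

      On those assumptions the direct saving is a few dollars per drive per year, so it softens the price gap rather than closing it - and that is before counting cooling and power-density effects.
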
  • SSDs are slow in that they rely on old-school disk protocols like SATA. Sure, you'll get better performance than spinning disk. But if you want screaming-fast performance, you should look at flash devices connected through the PCIe bus.

    Products from Fusion IO [fusionio.com] would be an example of this. Apple Mac Pro would be another: "Up to 2.5 times faster than the fastest SATA-based solid-state drive".

    • by Twinbee ( 767046 )
      How about SATA 3? Is nearly a GB per second not good enough? Unless you're talking about latency....
      • SATA 3.0 is only 600 MB/s.
        • Yes, and that's peak.

          The year SATA 3 went into production, SSD designs were reconfigured to saturate it, and those Fusion-io drives saturate their PCIe lane bandwidth....

          SATA 3 was and always will be shortsighted bullshit brought to you by a consortium of asshats intentionally trying to undercut feature demand in their desperate attempt to preserve the old guard.
          • I don't know really. The SATA 3 spec was released in July 2008, which was about the year when only the very first consumer SSDs started to appear. Maybe the spec was mostly designed for fast HDDs and they couldn't fully predict the need for the speed. And it was a natural thing to just double the data rate.
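
            For what it's worth, the 600 MB/s ceiling falls straight out of the encoding: SATA 3 signals at 6 Gbit/s but uses 8b/10b line coding, so only 8 of every 10 bits on the wire are payload:

                # Why a 6 Gbit/s SATA 3 link tops out at 600 MB/s of payload.
                line_rate_bits_per_s = 6e9
                encoding_efficiency = 8 / 10           # 8b/10b line coding
                payload_bytes_per_s = line_rate_bits_per_s * encoding_efficiency / 8
                print(payload_bytes_per_s / 1e6)       # 600.0 (MB/s)
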
    • by wonkey_monkey ( 2592601 ) on Thursday June 13, 2013 @04:33AM (#43993191) Homepage

      Up to 2.5 times faster

      Ah, "up to." Marketing's best friend.

    • PCIe-based flash is nice; I have more than a few in production. The downside is that hot-swap PCIe motherboards are extremely expensive, and getting more than 7 PCIe slots is also nearly impossible. I can get 10 or more 2.5" hot-swap bays on a 1RU server. I can get hardware RAID, even redundancy, with the right backplanes. I can connect external chassis via SAS if I need more room (yeah, PCIe expansion chassis exist as well; they are funky to deal with at times). The use cases for needing extremely fast IO without redundancy e

  • Virtualisation (Score:5, Interesting)

    by drsmithy ( 35869 ) <drsmithy&gmail,com> on Thursday June 13, 2013 @04:29AM (#43993177)

    This is being driven primarily by increasing levels of virtualisation, which turns everything into a largely random-write disk load, pretty much the worst case scenario for regular old hard disks.

  • are SSDs mature enough (and cheap enough) to support business-sized workloads? Or are they still best suited for laptops and mobile devices?

    I don't see maturity as a problem. If there is money to be made, drive manufacturers will throw enough engineering and computer-science talent at solving the teething troubles. What interests me is that if SSDs mount a major invasion of server rooms and data centers worldwide, it also means we will finally start to see SSD pricing drop like a rock. Cheap, high-capacity external SSD drives - I can't wait. If we are lucky this will also popularize Thunderbolt with PC motherboard makers since th

  • What a coincidence! I am getting ready to transition our main DB servers (a couple of GB of MySQL data) to SSD, but I simply don't want to trust it that much yet. So my plan is to set up RAID-1 with an SSD and a conventional drive. There seems to be this "--write-mostly" option that tells Linux to preferentially read from the SSD. Anybody know if this is worth it? If it works? What kind of random-access performance gains can I look forward to, running MySQL on SSD? I found it surprisingly hard to find any g
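
    A minimal sketch of the layout being asked about, using mdadm's --write-mostly flag (it marks the devices listed after it so the md driver avoids reading from them when possible). The device names /dev/sda1 (SSD) and /dev/sdb1 (spinning disk) are hypothetical placeholders, and the script only prints the commands instead of running them:

        # Sketch: hybrid RAID-1 with the SSD preferred for reads (device names are placeholders).
        import shlex

        ssd, hdd = "/dev/sda1", "/dev/sdb1"
        create = ["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
                  ssd,                         # read-preferred member
                  "--write-mostly", hdd]       # written to, read from only when necessary
        mkfs = ["mkfs.ext4", "/dev/md0"]

        for cmd in (create, mkfs):
            print(" ".join(shlex.quote(part) for part in cmd))
        # To actually build the array, run the printed commands as root
        # (or pass the lists to subprocess.run with check=True).

    Writes still have to hit both members, so write throughput stays bounded by the spinning disk, which matches what the replies below describe.
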
    • by jaseuk ( 217780 )

      I'm using that setup: a cheap but high-capacity OCZ drive (960GB), with a software RAID-1 mirror to a conventional drive. I'm running this on Windows, which crucially always uses the FIRST drive for reads. So reads are at SSD speeds, writes are at SAS speeds.

      It's working well enough; I've not benchmarked it. We have had one drive failure, so I suggest keeping one cold spare on hand. Delivery times on SSDs are pretty variable, and you won't want your entire DB running on a SAS drive for too long.

      Jason.

    • by drsmithy ( 35869 )

      Your writes will be limited to the speed of the conventional drive, so if your workload is mostly reads, then you will see a significant benefit.
      Though, if your workload is mostly reads, you'd probably see the same benefit for a lot less $$$ by putting more RAM in your server...

      • That's what we ended up doing with our databases - did a bunch of comparisons and ended up sticking to 15K disks and maxing out RAM instead. Even at Rackspace prices we came out ahead on price/performance.

    • With a couple of GB of data, just put it in RAM; you can get to 128GB cost-effectively, and if you're read-heavy you will end up with everything cached. If you're write-heavy, just go all SSD - it's night and day; a single SSD pair easily outperforms a whole shelf of 15k drives.

  • I work for an Australian hosting company, and we have deployed the SolidFire all-SSD SAN for our cloud-based hosting (shared, reseller, cloud/virtual server). The major benefits of an all-SSD storage solution speak for themselves: far lower I/O wait times and huge IOPS numbers - in SolidFire's case, 250,000+ distributed IOPS in our current configuration. We've recently shifted from the HP SAS-based LeftHand SAN, offering up to 15,000 IOPS, to the new SolidFire all-SSD SAN, and the team behind SolidFire are partly fr
  • At my company, we have gradually been moving away from spinning disks in favor of SSDs. My company does a lot of R&D work, so we have a lot of people doing CAD, simulation, number crunching, etc. For those users, our IT department hasn't built a machine with spinning media in over two years: the performance boost from SSD is outstanding, and the local storage needs are pretty modest. On the back end, our backup solution (daily incremental backups of everyone's machine, hourly for the network storage)
  • I recently was given the task of upgrading my development machine. We're a small company but management is happy to spend money on hardware if we need it.

    I decided I'd prefer an SSD, and yet when I looked at the big suppliers of office machines - Dell, HP, etc. - none of them even offered SSDs as an option. SSDs only came into it when you started looking at the really high-end, £2,000+ workstations, but there's no reason why this should be the case.

    In the end, I just custom built the machine as it was t

    • by h4rr4r ( 612664 )

      We just buy a normal Dell and toss the drive out when it arrives. Installing a hard drive is not difficult, and you get to keep the NBD warranty on the rest of the machine.

      • I would agree with that, but the cost Dell was charging was higher than what I could pay for a custom built option with the same (or in fact, better) specs.

        • by h4rr4r ( 612664 )

          That makes sense.
          We also do not buy one-off machines for devs or really anyone. We just upgrade one of the hundreds of desktops we buy at a time.

  • by Whatchamacallit ( 21721 ) on Thursday June 13, 2013 @08:36AM (#43994293) Homepage

    SSDs might not be used as primary storage yet; the cost of using a lot of SSDs in a SAN is still too high. However, that doesn't mean SSD technology is not being used. Many systems started using SSDs as read/write caches or high-speed buffers, etc. PCIe SSD cards are popular in high-end servers. This is one way Oracle manages to blow away the competition when benchmarks are compared: they put PCIe SSD cards into their servers and use them to run their enterprise database at lightning speed. ZFS can use SSDs as read/write caches, although you had better battery-back the write cache!

    Depending on the particular solution, a limited number of SSDs in a smaller NAS/iSCSI RAID setup can make sense for something that needs some extra oomph. But I don't yet see large-scale replacement of traditional spinning-rust drives with SSDs. In many cases, SSDs only make sense for highly active arrays where reads and writes are very heavy. Lots of storage sits idle and isn't being pounded that hard.

  • Two years on and this is still relevant: The Hot/Crazy Solid State Drive Scale [codinghorror.com].

    I love SSDs in servers, and they don't burn me because I always expect them to fail. Sure, one MLC SSD is fine for a ZFS L2ARC, because if it fails reads just slow down; but a ZFS ZIL gets a mirror of SLC drives, because a failure there is going to be catastrophic.

    If I'm using Facebook's FlashCache, two drives get mirrored by Linux md and treated as a cache device, and smartd lets me know when one of them goes TU. Another a
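
    A minimal sketch of the ZFS arrangement described above - a single MLC drive as L2ARC (cache) and a mirrored pair of SLC drives as the ZIL (log). The pool name and device paths are hypothetical placeholders, and the script only prints the commands:

        # Sketch: attach a cache device and a mirrored log to an existing pool (names are placeholders).
        pool = "tank"
        l2arc = "/dev/disk/by-id/mlc-ssd-0"         # single cache device: losing it only slows reads
        zil_mirror = ["/dev/disk/by-id/slc-ssd-0",  # mirrored log: losing it during a crash can
                      "/dev/disk/by-id/slc-ssd-1"]  # cost recent synchronous writes

        for cmd in (["zpool", "add", pool, "cache", l2arc],
                    ["zpool", "add", pool, "log", "mirror"] + zil_mirror):
            print(" ".join(cmd))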
