US Supercomputer Uses Flash Storage Drives

angry tapir writes "The San Diego Supercomputer Center has built a high-performance computer with solid-state drives, which the center says could help solve science problems faster than systems with traditional hard drives. The flash drives will provide faster data throughput, which should help the supercomputer analyze data an 'order of magnitude faster' than hard drive-based supercomputers, according to Allan Snavely, associate director at SDSC. SDSC intends to use the HPC system — called Dash — to develop new cures for diseases and to understand the development of Earth."
  • Wow.. (Score:3, Funny)

    by i_want_you_to_throw_ ( 559379 ) on Sunday September 06, 2009 @08:25AM (#29330575) Journal
    Imagine a beo...... umm.. nevermind
    • "... intends to use the HPC system -- called Dash -- to develop new diseases for cures...

      There, fixed that for ya.

  • 1) design SSDs with a longer lifespan [slizone.com]
    • Re: (Score:3, Informative)

      by Barny ( 103770 )

      You've not been following that thread closely enough; you'll note the new Patriot SSDs have a 10-year warranty, but of course a "supercomputer" wouldn't use those.

      Other PCIe-based SSDs I have seen around claim up to a 50-year lifespan.

      Damage.Inc here btw

      • Re: (Score:2, Informative)

        by gabebear ( 251933 )
        The article says it's using Intel SSDs hooked up via SATA, which come with the regular 3-year disk warranty.
        • by Barny ( 103770 )

          They say they are using Intel SSDs via SATA; their higher-end drives typically have a 2M-hour MTBF, almost double what most HDDs offer these days.

          http://www.intel.com/design/flash/nand/extreme/index.htm [intel.com]

          • MTBF isn't the only parameter to care about. I'm highly suspicious of the exceedingly high MTBF numbers given for SSDs. What are they doing, giving you the MTBF with absolutely zero data being written to the flash chips? I mean, honestly, a 228 *year* MTBF should raise some red flags about how relevant the number is.
            • by Barny ( 103770 )

              FusionIO SSDs are rated for up to 48 years of continuous use at 5TB written/erased a day.

              Not sure if Intel rates theirs that high.

                • Interesting. I guess 43 years with 5 TB/day turnover and an 80 gig drive is consistent with 1-million-erase/write-cycle flash. The main reason a claimed life of 200+ years makes me wonder is that other electronic components often have a far shorter life, often limited by things like bond-wire failures. Also, once a company starts bragging about lifespans that are not only 5 times longer than most people keep the devices but are over twice the lifespan of the potential users, it starts to become a spec o
                • by afidel ( 530433 )
                  MTBF has NOTHING to do with a device's lifetime; it has to do with the average rate at which devices of that type fail. If you have 200 of these running you should expect about 1 to fail per year. This compares with the ~2% per year failure rate I've experienced in my datacenter over the last 3 years using mostly Seagate enterprise-class drives in a large number of different chassis from a variety of vendors. If you have thousands of these running you might have a failure per day, but still about one fourth t
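
A quick arithmetic sketch (Python) checking both figures discussed above; the 80 GB capacity, 5 TB/day write rate, 2M-hour MTBF, and 200-drive fleet are the numbers quoted in the comments, not vendor specifications:

```python
HOURS_PER_YEAR = 24 * 365

# Endurance: 5 TB/day for 43 years on an 80 GB drive implies roughly how many
# full-drive erase/write cycles per cell? (figures quoted in the comments above)
capacity_gb = 80
tb_per_day = 5
years = 43
cycles = (tb_per_day * 1000 / capacity_gb) * 365 * years
print(f"implied erase/write cycles: {cycles:,.0f}")  # ~981,000, i.e. ~1 million

# MTBF describes the failure rate of a fleet, not one device's service life.
# Expected failures per year for 200 drives with a 2M-hour MTBF:
mtbf_hours = 2_000_000
fleet_size = 200
failures_per_year = fleet_size * HOURS_PER_YEAR / mtbf_hours
print(f"expected failures/year across {fleet_size} drives: {failures_per_year:.2f}")  # ~0.9
```
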
    • Re: (Score:3, Informative)

      by nbert ( 785663 )
      My favorite computer magazine [heise.de] once tested an ordinary USB flash drive and it still worked after 16 million write cycles on the same file. Since they are using Intel SATA SSDs at SDSC I'm assuming those drives are SLC, which lasts ~10x longer than (cheaper) multi-level-cell flash.

      But even if drives start to fail they'll just replace them like they do with any other supercomputer setup, so it's more a cost factor than a problem.
      • SATA SSDs have wear-leveling implemented in the drive controller. If the drive was mostly empty, then moving the file to a new block every time it was written would be trivial. I don't read German, but unless the disk was first filled to capacity, the test wasn't very useful in determining the reliability of SSDs.
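
A toy sketch (Python) of that point; this is purely illustrative and not how any real controller's firmware works. With wear-leveling and plenty of free blocks, rewriting the same logical file spreads erases across the whole device:

```python
# Toy model of wear-leveling: the controller redirects every rewrite of the same
# logical file to the least-worn free erase block, so no single block absorbs
# all the writes.
NUM_BLOCKS = 100        # hypothetical drive with 100 erase blocks, mostly free
REWRITES = 100_000      # rewrite the same one-block file 100,000 times

erase_counts = [0] * NUM_BLOCKS
for _ in range(REWRITES):
    target = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
    erase_counts[target] += 1

# With leveling, each block sees ~1,000 erases instead of one block seeing 100,000.
print("max erases on any single block:", max(erase_counts))
```
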
    • You've been out of the loop for a while, I see.

      A 250GB SSD can have over 2.5PB (petabytes) written to it before it cannot be written to anymore.
      • by maxume ( 22995 )

        10 years from now, there are going to be people with 7-year-old flash drives fretting about the fact that they wear out.

  • by gweihir ( 88907 )

    FLASH is about read access time. Throughput can be gotten far cheaper with conventional drives and RAID1.

    The rest is the usual nonsense for the press.

    • You are talking about 'cheaper' in regards to supercomputers? Their supercomputer has 68 of these, [appro.com] which are so expensive they won't even give you a price tag without calling them for a quote.
      • by afidel ( 530433 )
        Uh, no enterprise-class vendor just gives out pricing without working through a salesman or VAR, mostly because they want the lead, but also because they want to make sure the solution being quoted is the right one for the customer. If you're serious it's not hard to get quotes; we had quotes from half a dozen different vendors and almost a dozen different solutions when we recently purchased a new SAN.
    • Re: (Score:2, Informative)

      by pankkake ( 877909 )

      FLASH is about read access time. Throughput can be gotten far cheaper with conventional drives and RAID1.

      You mean RAID0 [wikipedia.org]. Note that you could do RAID0 with Flash drives and have both.

    • You mean throughput on sequential reads. What makes you assume that is the type of throughput they are measuring?

  • Cost savings? (Score:5, Insightful)

    by gabebear ( 251933 ) on Sunday September 06, 2009 @08:39AM (#29330641) Homepage Journal

    "Hard drives are still the most cost-effective way of hanging on to data," Handy said. But for scientific research and financial services, the results are driven by speed, which makes SSDs makes worth the investment.

    Why is the super computer ever being turned off? Why not just add more RAM?

    SSD is cheaper than DDR ( ~$3/GB vs ~$8/GB ), but also ~100 times slower.

    • RAM + uninterruptible power supply, and you're done. The only thing you need storage for is loading apps and data to begin with.

    • Re: (Score:2, Interesting)

      by maxume ( 22995 )

      It could be a technical issue (i.e., they are targeting simplicity). Hooking up 1 TB of SSDs involves 4 SATA cables, hooking up an additional terabyte of RAM involves finding special widgets that hold as much RAM as possible, and the parts to make them talk to the nodes.

      • True, I'm sure they have a valid technical reason, but the article completely fails to point out what that would be.

        If the "special I/O nodes" are connected using a 10Gb network, then the 4x3Gb SATA drives would let them saturate the network bandwidth.
      • by Eugene ( 6671 )

        Space, heating, and electricity will all be factors in building pure RAM drives.

    • Re:Cost savings? (Score:5, Informative)

      by MartinSchou ( 1360093 ) on Sunday September 06, 2009 @10:13AM (#29331133)

      Space requirements.

      The biggest DDR3 SO-DIMM modules I could find were 4 GB. They are 30 mm x 66.7 mm [valueram.com], and the standard allows for the following [jedec.org]:

      The DDR3 SO-DIMM is designed for a variety of maximum component widths and maximum lengths; refer to the applicable raw card for the exact component size allowed. Components used
      in DDR3 SO-DIMMs are also limited to a maximum height (as shown in dimension "A" of MO-207) of 1.35 mm. [page 19]

      You now have an absolute minimum size of 2,701.35 mm^3 (1.35 mm x 30 mm x 66.7 mm), or 675.3375 mm^3/GB. This is a very very idealized minimum by the way.

      An Intel 2½" drive is 49,266.28 mm^3 (100.4 mm x 7 mm x 70.1 mm) [intel.com] and currently maxes out at 160 GB leaving you with 307.91425 mm^3/GB. That's 46% of the space that would be needed for DDR3 RAM. Add to that that Intel's 2nd generation SSDs are only using one side of the PCB, and you can expect the storage space requirements to be halved.

      Then there's the fact that the SSDs are directly replaceable. In other words, they don't need to rebuild the computer, buy super-special boards or anything like that - you can replace a hard drive with an SSD without having to spec out a new supercomputer.

      In the end, if you wanted to replace the system with something that could provide 1 TB of RAM per node, you would need a VERY expensive system. Even with 8 GB modules, you would need to somehow fit 128 of them onto a board. I'd really love to see the mother- and daughter-boards involved in that.

      In the end it doesn't just come down to the raw price or speed of the storage device (RAM vs SSD vs HDD vs tape), but also all the other factors involved, such as space, power, heat, and the stuff you need to use it (i.e. a brand new supercomputer that can support 1 TB RAM/node vs 48 GB at the moment).

      Or to use a really bad car analogy, some company has found out that using a BMW M5 Touring Estate [bmw.co.uk] gives them faster deliveries than using a Ford Transit. Now you're suggesting that they should be delivering stuff via aeroplanes. Yes, it's much faster, but you need a brand new transportation structure built up around this, which you also need to factor into your cost assessments.
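
A quick sketch (Python) reproducing the volumetric arithmetic above; the module and drive dimensions are the ones quoted in the comment:

```python
# DDR3 SO-DIMM: a 4 GB module at the JEDEC maximum component height of 1.35 mm
dimm_mm3 = 1.35 * 30 * 66.7          # ~2,701 mm^3
dimm_mm3_per_gb = dimm_mm3 / 4       # ~675 mm^3/GB (highly idealized minimum)

# Intel 2.5-inch, 7 mm thick drive at 160 GB
ssd_mm3 = 100.4 * 7 * 70.1           # ~49,266 mm^3
ssd_mm3_per_gb = ssd_mm3 / 160       # ~308 mm^3/GB

print(f"DDR3 SO-DIMM: {dimm_mm3_per_gb:.1f} mm^3/GB")
print(f"Intel 2.5\" SSD: {ssd_mm3_per_gb:.1f} mm^3/GB")
print(f"SSD volume per GB vs DDR3: {ssd_mm3_per_gb / dimm_mm3_per_gb:.0%}")  # ~46%
```
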

      • I really doubt it's the space requirements. The cooling system is likely going to be several times the size of the computer.

        You have a couple facts wrong.
        • They only have 4 nodes with these drives
        • Each node has 16 DDR2 slots that hold 4GB sticks

        They aren't maxing out the RAM slots on each node, and they seem to be relying on these I/O nodes to increase performance. I'd like to know how and why, but the article doesn't explain anything.

        DRAM/DDR drives aren't anything new; hooking up 1TB of DDR would be expen

        • Long-running simulations can run completely awry if one of the DIMMs dies part-way in.

          Being able to record snapshots for later reuse or verification helps ensure the correctness of the simulation.

          • Long-running simulations can run completely awry if one of the DIMMs dies part-way in.

            Being able to record snapshots for later reuse or verification helps ensure the correctness of the simulation.

            Sure, but mechanical disks make more sense for storing snapshots. They have 768GB of RAM and 4TB of SSDs in their cluster.

            • Sure, but mechanical disks make more sense for storing snapshots. They have 768GB of RAM and 4TB of SSDs in their cluster.

              Perhaps. But 768GB would take a really long time to write to disk. Maybe they don't want to lose all that time? The fastest SSD setups I've seen have multi-GB/sec throughput.

              If you can read and write 2GB per second, you can use flash as a sort of "slow RAM" - although I'm not saying they're doing that in this case.
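
A rough sketch (Python) of the timing concern above; the 768 GB figure comes from the thread, while the throughput numbers are illustrative assumptions only, not measurements of the Dash system:

```python
snapshot_gb = 768   # total cluster RAM, per the comments above

# Illustrative, assumed sustained write throughputs (GB/s)
targets = {
    "single mechanical disk (~0.1 GB/s)": 0.1,
    "striped disk array (~1 GB/s)": 1.0,
    "aggregated SSD setup (~2 GB/s)": 2.0,
}

for name, gb_per_s in targets.items():
    minutes = snapshot_gb / gb_per_s / 60
    print(f"{name}: {minutes:.1f} minutes to dump a {snapshot_gb} GB snapshot")
```
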

      • by afidel ( 530433 )
        If they're doing 48GB/node, they'd better have specced that machine more than a year ago, because everyone has known since this time last year that Nehalem is the best density solution and has the best $/MIP out there, and almost all of the Nehalem boards support 72GB/node using cheap 4GB DIMMs. It really has been stupid to use anything else, except possibly Cell, since the beginning of this year.
    • Re:Cost savings? (Score:4, Insightful)

      by SpaFF ( 18764 ) on Sunday September 06, 2009 @12:40PM (#29332179) Homepage

      There are plenty of reasons why supercomputers have to be shut down... besides the fact that, even with generators and UPSes, facility outages are still a fact of life. What if there is a kernel vulnerability (insert 50 million ksplice replies here... yeah yeah yeah)? What if firmware needs to be updated to fix a problem? You can't just depend on RAM for storage. HPC jobs use input files that are tens of gigabytes and produce output files that can be multiple terabytes. The jobs can run for weeks at a time. In some cases it takes longer to transfer the data to another machine than it takes to generate/process the data. You can't just assume that the machine will stay up to protect that data.

  • by joib ( 70841 ) on Sunday September 06, 2009 @08:40AM (#29330643)

    TFA isn't particularly detailed, beyond saying SSDs are used on "4 special I/O nodes".

    One obvious thing would be to use SSDs for the Lustre MDS (metadata server) while using SATA as usual for the OSSes (object storage servers). That could potentially help with the "why does ls -l take minutes" issue familiar to Lustre users on heavily loaded systems, while not noticeably increasing the cost of the storage system as a whole.

  • Until supercomputers use SD cards.
    • With SDXC going up to 104MB/sec and 2 TB, it's only a matter of time.

      • "up to"

        Hehe... ;D

        I'll take some of that 5.0 Gbit/s USB 3.0 as well, please.

        • I've gotten quite good speeds on SD cards. The usual problem is a lackluster USB interface, but they don't all connect via USB. There's nothing wrong with SD that eliminating the USB connection won't solve.

  • I've lost track of how many times hardware dudes have jammed a bunch of the newest fastest hardware into a box to achieve "100x" the "performance" of prior systems. Without a sliver of irony, or the slightest effort to analyze how software will use all this new hardware. Or what the serviceability of the new machine will be. Or any of the hundred other things that will combine to turn their "100x" into "1.25x".

    --------

    Boot time is O(1).
  • I think it was Amdahl who said that a "supercomputer is a machine which is fast enough to turn CPU-bound problems into I/O-bound problems", which means that disk speed could become a limiting factor.

    I have trouble seeing how having SSD arrays can make a big difference though!

    All current supercomputers have enough RAM to handle the entire problem set, simply because _all_ disks, including SSDs, are far slower than RAM.

    A supercomputer, like those which are used by oil companies to do seismic processing, does n

    • by pehrs ( 690959 )

      I can only agree, but note that they are talking about a very small HPC system (5.2 teraflops) and claim that they can significantly speed up data searches. There are certainly a few scenarios where you need to quickly and frequently search through a sparse, permanent dataset that is an order of magnitude too large for your RAM and can benefit from the decreased latency of SSD storage.

      However, for general purpose HPC systems the SSD is still a hard disk, and therefore way too slow for anything involving the computation. The ex

    • The article specifically talks about document searching. Some of the document-searching people we have require >110 terabytes of memory (per job). On a cluster, that is pretty difficult to store in RAM.

      A lot of supercomputers are clusters and those clusters typically don't have huge amounts of memory ... they are high on compute.

  • Comment removed based on user account deletion
    • That's what you have SMART for. Just run smartd [sourceforge.net] or add smartctl [sourceforge.net] to your own scripts. Intel SSDs report the wear parameter in SMART attribute 233.
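
A minimal sketch (Python) of polling that wear attribute from a script; the device path is an assumption to adjust for your system, and it requires smartmontools plus sufficient privileges:

```python
import subprocess

# Read the SSD wear indicator (SMART attribute 233, which Intel SSDs report as
# Media_Wearout_Indicator) via smartctl.
DEVICE = "/dev/sda"   # assumed device path; change for your system

output = subprocess.run(
    ["smartctl", "-A", DEVICE],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    fields = line.split()
    if fields and fields[0] == "233":
        # Column 4 of smartctl -A output is the normalized VALUE, which counts
        # down from ~100 toward the failure threshold as the flash wears out.
        print(f"attribute 233 ({fields[1]}): normalized value {fields[3]}")
        break
else:
    print("attribute 233 not reported by this device")
```
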

  • SSDs and databases (Score:3, Interesting)

    by Richard_J_N ( 631241 ) on Sunday September 06, 2009 @04:38PM (#29334123)

    I've just gone through the process of setting up a pair of servers (HP DL380s) for Linux/Postgres. Our measurements show that the Intel X25-E SSDs beat regular 10k rpm SAS drives by a factor of about 12 for fdatasync() speed. This is important for a database system, as a transaction cannot COMMIT until the data has really, really hit permanent storage. [It's unsafe to use the regular disk's write cache, and personally, I don't trust a battery-backed write cache on the RAID controller much either.] So not having to wait for a mechanical seek is really useful. Read speeds are also better (10x less latency), and the sustained throughput is about 2x as good.

    So, yes, SSDs are a good idea for database loads, where the interaction is with the real world, and where once a transaction has completed, some other real-world process has happened. BUT, most supercomputer workloads are, in principle, re-startable (i.e. if you lose an hour's work due to a hardware failure, you can just re-run the simulation code, and throw away the intermediate state).

    So, for simulations, the cost of data loss is an hour of rework, not irretrievable information. Given that, we can get much better performance by storing everything in RAM, enabling all the write caches, and sticking with standard SATA, provided that, every so often, the data is flushed out to disk. If something goes wrong, just revert to the last savepoint, which could be an hour ago, rather than having to be 10ms ago.

    [BTW, HP "don't support" SSDs in their servers, but the Intel SSD X25-E disks do work just fine. Though I did, unfortunately, have to buy some of HP's cheapest SAS drives ($250 each) just to obtain the mounting kits for the SSDs.]
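
A minimal sketch (Python) of the kind of fdatasync() measurement described above; the file name, payload size, and iteration count are arbitrary choices, and os.fdatasync is only available on Unix-like systems:

```python
import os
import time

# Time how long a small write takes to really reach stable storage -- the wait
# a database incurs on every COMMIT.
PATH = "sync_test.dat"      # arbitrary test file on the device under test
ITERATIONS = 200
PAYLOAD = b"x" * 4096       # one 4 KiB page per transaction-like write

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        os.write(fd, PAYLOAD)
        os.fdatasync(fd)    # block until the data is durable on the device
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.remove(PATH)

print(f"{ITERATIONS / elapsed:.0f} synchronous writes/sec "
      f"({elapsed / ITERATIONS * 1000:.2f} ms per fdatasync)")
```
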

    • We moved from the battery-backed 5i controller on the DL380s (7x 15K drives in an MSA30) to the BL460cs and the EVA4400 (using basically the same 15K drives, but with 16 of them -- Exchange, a .8TB SQL server and a few fileservers also access this space).

      The disk speed increase was enormous -- it really blew us away. What used to take between 3 and 4 hours can be done in about 8 minutes now.

      IMHO using physical drives is much safer than using the SSD's and to scale up all we do is add additional shelves
      • Could you be more specific about what actually gave the improvement? Was it just something simple, e.g. RAID 5 -> RAID 6?

        My main point though was that for supercomputer simulations (but not email or warehouse management), it's OK to risk data, and then just re-start the simulation from an hour ago. So why not just enable the disk write-caches or put the database on a RAMdisk? Without the safety requirements (such as no write-cache), the benefits of SSDs aren't needed.

        BTW, I am very happy with the Intel SSDs

        • The management overhead using local vs consolidated storage is significant. We've been able to reduce worries and speed disk access with the SAN. That was a win for the techs and management.

          There's not a direct comparison with RAID on an individual server and RAID on the EVA4400. Yes it's still a RAID5 (safety is most important to us) but the leveling aspect of the SAN provides additional performance. If I want to increase performance I add drives to the SAN and give them to that slice I've allocated for
          • I know -- bad form to reply to myself etc etc. but it occurred to me that you may not be familiar with how this SAN applies storage.

            Your RAID 5 partition allocated to this server (think of it like a slice of the whole pie) is smaller than the total SAN storage. In a "normal" single-server storage environment you probably allocate all space among the local drives. Each hardware RAID partition usually goes on separate drives, so that if you have 7 drives and need 2 partitions one of which is RAID-5, at least
            • Thanks for your informative posts. Sadly we can't afford a SAN anyway, though it might be a nice idea in future. What I don't understand is, how can a SAN improve the time for fdatasync() - i.e. for the data to be flushed to physical disk, and then control to return to the application? This is essential for database stuff.

              As to my disinclination to trust battery-backed cache - if the power goes out, it means we have about 4 hours to get it back. If that also fails, we have data loss.

    • by afidel ( 530433 )
      There are 3rd parties that sell the HP disk sleds for a LOT less than $250 apiece! Oh, and I'd like to find out why you don't trust BBW (battery-backed write cache); tons of enterprises rely on them every day and I don't hear tons of horror stories about problems caused by the BBW.
