Petabyte Storage Array

knight13 writes "Engadget is reporting that EMC is rolling out a petabyte RAID array. From the article, "And if you're ready for that level of storage, there's now someplace to get it: EMC has launched its first petabyte array, a version of the company's flagship Symmetrix DMX-3 system that includes nine room-filling cabinets of drives." The price? A mere $4 million."
  • by IdleTime ( 561841 ) on Tuesday January 31, 2006 @12:06AM (#14604337) Journal
    How many JPEGs would that be?
  • Kinda Interesting (Score:3, Interesting)

    by synthparadox ( 770735 ) on Tuesday January 31, 2006 @12:07AM (#14604339) Homepage
    This is pretty interesting in that it's yet another item that we all wish we had just for overkill purposes.

    However, I doubt they'll sell many of these. The only places I can think of that would benefit from this are supercomputing institutes, but they often build their own redundant RAID systems and/or NAS systems.

    It's nice and all, but seriously people, who's the audience?
    • Re:Kinda Interesting (Score:4, Interesting)

      by TinyManCan ( 580322 ) on Tuesday January 31, 2006 @12:11AM (#14604365) Homepage
      You're mistaken.

      If this were slightly less high-end disk (DMXs are EMC's top of the line), it would be perfect for disk-to-disk backups. We send approximately 50 TB of data a day to tape to send offsite. I would *love* to have the last 50 days' data on disk, onsite, for instant restores.

      • But again, I return to my question: is it worth $4 million? Are you willing to pay $4 million for something like this? Usually the people interested in this have the ability and money to build their own NAS or backup systems and support them.
        • If the man has 50 terabytes of critical data that he needs to back up and ship every day, I'd say he probably has a budget that could accommodate one of these things. While multi-terabyte arrays are more common than they once were, anyone carrying around that much data still needs to spend millions just to keep their infrastructure intact.

          Now all he's got to do is get his boss to sign the check. :-P
          • Just a guess but...

            He probably works for a company that provides backups to other companies in a data center. He'll still need to move the data offsite, and the only practical way to move that amount of data is by tape. Having a $4 million array would probably mean he wouldn't have to drive to the offsite storage facility three times a day to fetch tapes to do recoveries and could spend more time reading Slashdot. The money this would cost to buy and maintain could easily cover several FTEs for ma
        • While not wanting to say too much, that cost is definitely reasonable considering the benefits.

          Imagine, just once, you had to wait 4 hours for some tapes to come back onsite. Now that is four hours times approx 40,000 people (the number of employees unable to work). That one outage just cost you 160,000 hours of downtime, where you could not serve your customers. Assuming you pay on average $25 per employee per hour, you've paid for the system in one go.

          This is not including the amount of money lost because p

          • Re:Kinda Interesting (Score:5, Informative)

            by Theatetus ( 521747 ) on Tuesday January 31, 2006 @01:31AM (#14604788) Journal
            Imagine, just once, you had to wait 4 hours for some tapes to come back onsite. Now that is four hours times approx 40,000 people (the number of employees unable to work). That one outage just cost you 160,000 hours of downtime, where you could not serve your customers. Assuming you pay on average $25 per employee per hour, you've paid for the system in one go.

            Only if you use Enron math. You have to pay $25 per employee per hour either way. The only thing that matters is what you mentioned as a side note: the revenue from customers lost during the outage. If whatever system relies on this backup is generating you $1,000,000 per hour, then an array like this would pay for itself in one four-hour outage. But that doesn't take into account opportunity cost: you could still be better off if you put that $4 million to use generating revenue; if it made back more than the outage costs you, you're still on top.
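
            A rough sketch of that break-even arithmetic (Python; the revenue and outage figures are just the illustrative numbers from this thread, not real data):

            # Back-of-the-envelope break-even estimate for a $4M array.
            # All inputs are illustrative assumptions taken from the comments above.
            array_cost = 4_000_000        # purchase price, USD
            revenue_per_hour = 1_000_000  # revenue the dependent system generates, USD/hour
            outage_hours = 4              # length of a single tape-restore outage

            lost_revenue = revenue_per_hour * outage_hours
            print(f"Revenue lost per outage: ${lost_revenue:,}")
            print(f"Outages avoided to pay for the array: {array_cost / lost_revenue:.1f}")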

            • If whatever system relies on this backup is generating you $1,000,000 per hour, then an array like this would pay for itself in one four-hour outage. But that doesn't take into account opportunity cost: you could still be better off if you put that $4 million to use generating revenue; if it made back more than the outage costs you, you're still on top.

              Well, yes and no. Let's say that the system is generating $1mm per hour in revenue. What do you think the potential backlash would be from a four hour down
      • I design disk backup systems and use low-end disks in them, with sufficient redundancy. I also devise ways so you do not need so much disk space to go back in time over very prolonged periods (deltas which to the user look like full backups). Mail me, it is always fun to design a new system.
    • However, I doubt they'll sell many of these. The only places I can think of that would benefit from this are supercomputing institutes, but they often build their own redundant RAID systems and/or NAS systems.

      I suspect the marketing strategy is to sell the smaller versions of the system - the petabyte version is just an assembly of modular components.

      It's nice and all, but seriously people, who's the audience?

      For the full meal deal? Probably nobody - but it makes a hell of an advertisement for the smaller systems in the same product line.

      • For the full meal deal? Probably nobody - but it makes a hell of an advertisement for the smaller systems in the same product line.

        I'd bet you that you are wrong on this. EMC is going to sell a lot of these systems.

        Previously you could get a 230 TB (? might be off, going from memory?) DMX3000 array. EMC has a lot of customers with several (many in some instances) of these installed. A good percentage of these customers would probably consolidate into a single array. Some customers like the advantages of

    • It'd be awesome if a major hosting provider bought some of these and let you buy space in whatever size chunks you needed.
    • The only places I can think of that would benefit from this are supercomputing institutes
      We have been in the design phase for a storage solution that would provide scalability up to an exabyte (on paper, in theory, with some dark magic and light kludges).
      The only thing that has amazed me to this point is not the fact that we can do it; it's the number of applications that require (and the others that logically should require, if they were not stuck in the '90s) a serious storage solution.

      SHAMELESS PLUG BELOW:
      Howev
      • I'm interested in this approach, but have always had concerns about the effect of constant power cycling on the drives. Have you seen a reduction in lifespan on the drives from this on-off-on-off behavior? Do the increased failure rate, and the increase in the number of drives to 'dial in the reliability', end up costing more in the long run?
    • There are several organizations that are required by law to keep records of data for several years. An example:

      When I was in the telecom field, we were required to keep a log of every call made for 6 years. Now imagine you are a tiny mom-and-pop organization -- let's just call it Sam and Bob's Communications. You buy another small mom-and-pop shop, Art, Trent, & Trevor. You've now got a combined customer base of, say, 500 million people. Half of them also have a cell phone, and 20% have two phone lin
    • However, I doubt they'll sell many of these. The only places I can think of that would benefit from this are supercomputing institutes, but they often build their own redundant RAID systems and/or NAS systems.

      That's funny, I was saying almost the same thing about Terabyte storage not many years ago. Now I wish I had a full terabyte of space at home to store music, pictures, videos, etc.

  • Holy Truman, Batman! (Score:5, Interesting)

    by MarkRose ( 820682 ) on Tuesday January 31, 2006 @12:09AM (#14604352) Homepage
    Interesting calculation: If you live 80 years, that's 435.5 KB per second -- enough for a TV-quality video of your entire life.
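
    The arithmetic behind that figure, as a quick sketch (treating a petabyte as 2^50 bytes):

    # Bytes per second available if 1 PB has to cover an 80-year life.
    PETABYTE = 2**50                      # bytes (binary petabyte)
    seconds = 80 * 365.25 * 24 * 60 * 60  # 80 years in seconds

    print(f"{PETABYTE / seconds / 1024:.1f} KiB/s")  # roughly 435.5 KiB/s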
    • Is there enough TV for all that space?
    • yes, it can document my entire life. that sure is a lot of porn.
    • Interesting calculation: If you live 80 years, that's 435.5 KB per second -- enough for a TV-quality video of your entire life.

      If you live for 80 years, that's 75 years longer than an average hard drive will last. That's 6.9 Megs of data breaking every second.

    • Does that mean I can fast forward through the commercials?
    • The phrase "TV-quality" is always amusing to me. Digital television is just starting to roll out now in Holland (or maybe I'm just now starting to notice it), and the one feature that is supposed to sell it is the great quality of the picture. But that's not really the problem with television. Everyone I know has cable TV, which is good enough for me. The problem is there's never anything on.
      So saying that you could make a TV-quality video of your life is like saying your life sucks.
      • Yes- and for the purpose of documenting my life, I don't need no stinkin' TV quality either. Given this, what I want is a digital camera that can handle sound and picture, recorded constantly to a Type III CF slot, at 240x360 pixels and 16 bit stereo sound, in MPEG-2 format. I then would want the Petabyte array at home- and Hitachi's new 6GB microdrive on the road. I'd then be able to record all the interesting (read waking) hours of my life, and have them indexed by day and hour for recall, pretty easil
        • Why anybody would think that their life is interesting enough to warrant 24/7 recording of it is beyond me.
          • It isn't- most of those gigabytes will be wasted. What this is useful for is as an external memory peripheral to your brain. Human brains are very efficient at storing data, but notoriously inefficient at data retrieval- by the time you're 30, the pathways for retrieval are beginning to degrade in reverse- you'll still be able to remember what your locker combination was in high school, but what you had for breakfast this morning may be lost forever.

            Such a "life recording system", especially if incorporat
        • have them indexed by day and hour for recall, pretty easily.

          I have a friend who has photographic memory. She can take pictures of things with her mind, and look back at them later. If she wants, she can snap an entire textbook and read it later.

          The problem is, though, that whenever she wants data she still has to read it. If she doesn't study for tests, then she has to flip through textbooks in her mind to try and find the data, which is a lot more tedious than you would think. If you had a recording of
          • If you had a recording of your life and wanted to know your boss's exact statement about your project 6 months ago, you will need to spend hours and hours and hours flipping through footage looking for it.

            That's where visual search tools come into use- I'm not saying we have the technology RIGHT NOW to find stuff in a 6 month archive of video- I'm saying that the storage is coming close (perpendicular recording Hitachi Microdrives are coming out in 2007, at which point you'll be able to carry 60GB around
          • I have a friend who has photographic memory. She can take pictures of things with her mind, and look back at them later. If she wants, she can snap an entire textbook and read it later.

            I don't think I have a photographic memory, but back when I was in school, I used to cram for exams by scanning (fast reading - not electronically) relevant books or texts. Then when I was in the exam room and read a question, I had to remember which page of the book the answer was on (or the subject was dealt with) and then

  • Heh (Score:4, Insightful)

    by Anonymous Crowhead ( 577505 ) on Tuesday January 31, 2006 @12:12AM (#14604372)
    I was just thinking about how 4 years ago you could build a terabyte array for about $5-10,000, down from many millions 8 years ago. Today, you can get a terabyte for less than $500. In a few years, a petabyte is only going to cost $5,000. If you're just buying space for future growth, it seems like a total waste of money.
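
    For anyone who wants to play with that trend, a tiny sketch projecting price per terabyte under an assumed halving period (the $500 starting point and 18-month halving are assumptions, not data):

    # Project price per TB forward, assuming it halves every `halving_months`
    # months; real drive pricing is lumpier than this.
    price_per_tb = 500    # rough price today, USD
    halving_months = 18

    for years in (2, 5, 10):
        halvings = years * 12 / halving_months
        print(f"{years:>2} years: ~${price_per_tb / 2**halvings:,.0f}/TB")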
    • true (Score:2, Interesting)

      by xusr ( 947781 )
      It's amazing how quickly storage increases and prices go down. On the other hand, it's interesting to keep in mind that as amazing as an iPod nano would be in 1985, the invention of paper was the single biggest leap in storage density we've ever seen.
      • ...the invention of paper was the single biggest leap in storage density we've ever seen.
        Oh, come on -- dividing by zero doesn't count!

        (d/dx density = new medium/old medium;
        paper/nothing = n/0)
    • "Today, you can get a terabyte for less than $500"

      Where? I've been looking for that! (no I'm not kidding or trolling)

    • I was just thinking about how 4 years ago you could build a terabyte array for about $5-10,000 down from many millions 8 years ago. Today, you can get a terabyte for less than $500. In a few years, a petabyte is only going to cost $5,000.

      Law of Accelerating Returns [wikipedia.org]
  • I love the last line -- if you just want bragging rights -- these days that'll last 3 days.
  • Been there done that (Score:3, Interesting)

    by hackstraw ( 262471 ) * on Tuesday January 31, 2006 @12:15AM (#14604388)
    http://www.archive.org/web/petabox.php [archive.org]

    By those who truly care about the human tradition, and spreading the music of the Grateful Dead [archive.org] and other freely available media.

    Is this another slashvertisement?

    • The I/O performance of a petabox is just *slightly* lower than a DMX-3.

      This box, and the software used to manage it, make it considerably more useful than a petabox.
      • The I/O performance of a petabox is just *slightly* lower than a DMX-3.

        Are you being facetious here? Any details?

        Keep in mind that these are two very different beasts here. The petabox is one rack, the EMC "box" is 9 racks. More drives always gives you better performance. I'd be happy with a petabox for my music and porn. That would serve me fine for a couple of years.

        My point was that this is being _advertised_ as something new. Being that most slashdotters are still in their mother's basements tryin
        • The petabox is one rack, the EMC "box" is 9 racks. More drives always gives you better performance
          No, the petabox is 10 racks to get a peta. You only (only?) get 100 terabytes in a rack. See the second bullet in the Overview.
    • It's only a "Petabox" in name, not capacity. (unless by "box" they mean a 20'x8'x8' shipping container)

      From the linked site:

      * High density-- 100 Terabytes per rack
      * Colocation friendly-- requires our own rack to get 100TB/rack, or 50TB in a standard rack

      So even with the special Internet Archive racks, you'd need 10 of the racks to get a Petabyte.

      Though it seems that capricorn-tech [capricorn-tech.com] has improved on the capacity since the Internet Archive page was written, advertising up to 80 TB per standard rack, s

    • The Archive is working, as I understand it, to move the Grateful Dead fan recordings off to somewhere else. Some of that content is streaming-only, and the fans keep playing the same streams over and over again. This is using up too much of the Archive's bandwidth.
  • Failure rate (Score:5, Insightful)

    by joNDoty ( 774185 ) on Tuesday January 31, 2006 @12:22AM (#14604430)
    Let's assume for a moment that the average lifetime of one hard disk in this petabyte array is 6.5 years. Since there are 2,400 hard drives, that means that once this thing has been running for a while, you will be replacing, on average, one broken hard drive per day, for the entire lifetime of the array. That's about $350 per day in replacement parts alone!
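
    Roughly how that works out, as a sketch (the 6.5-year lifetime and ~$350 per drive are the assumptions above, not manufacturer figures):

    # Expected replacement rate for a 2,400-drive array.
    drives = 2400
    lifetime_days = 6.5 * 365   # assumed average drive lifetime
    cost_per_drive = 350        # assumed replacement cost, USD

    failures_per_day = drives / lifetime_days
    print(f"{failures_per_day:.2f} drive failures/day")                      # ~1 per day
    print(f"${failures_per_day * cost_per_drive:,.0f}/day in parts alone")   # ~$354/day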
    • Re:Failure rate (Score:2, Insightful)

      by Anonymous Coward
      No, it doesn't work like that. When you buy an array like this (especially from EMC), you buy a _service plan_ to go with it.

      You pay $xxx,xxx.xx up front for a year's service. The EMC arrays call home when a drive is getting ready to die (i.e. well before there is _any_ data loss) and EMC sends a tech onsite. The drive is swapped out and you as the customer notice absolutely nothing, except a line in the security log where the tech showed up at the datacenter.

      • And having worked at a company which had one of the smaller EMC storage arrays, this pretty much happened weekly. Got to the point that the technician was stopping by our office every couple of days!

        T'was claimed that they'd had a bad batch of drives from IBM :-)

        Worst part was, as the company was winding down, we had some kind of problem with the "phone-home" logic on the storage array. So every ten minutes, you'd hear the very loud and anachronistic sound of a modem dialing out and trying to warble a c

    • Let's assume for a moment that the average lifetime of one hard disk in this petabyte array is 6.5 years. Since there are 2,400 hard drives, that means that once this thing has been running for a while, you will be replacing, on average, one broken hard drive per day, for the entire lifetime of the array. That's about $350 per day in replacement parts alone!

      Good math!

      However, for a cool $4mil, hopefully this would include some kind of drive replacement program. With those raw numbers, a drive a day would c
      • The likelihood of failure on any given day is not independent of any other day. The chance a drive will fail on the second day is pretty low (the first day is pretty high, with installation mess-ups and the like). Chances increase over time, so maintenance will follow a pattern of exponential increase. And in 6.5 years, I can have a 1 PB RAID under my desk at home.
    • Re:Failure rate (Score:5, Interesting)

      by TopSpin ( 753 ) * on Tuesday January 31, 2006 @05:55AM (#14605573) Journal
      Large data centers often have far more than 2,400 operational disks. Under these conditions, at any given moment, some fraction of all storage has faulted and repair activity is continuous. This is one reason SCSI hardware is preferred: the disks are more uniform (capacity, electrical interface, etc.) and replacements remain available over longer intervals.

      This isn't the slightest bit unusual. At any moment some fraction of the power transmission and distribution system has faulted. Some percentage of all aircraft are grounded. Various segments of all wide area communications systems are down. Repairs never cease.

      $350 equates to a few minutes of aggregate labor costs spent financing, provisioning, securing and monitoring a petabyte of storage. Other large ongoing costs include power and cooling. $350/day is lost in the noise.

      EMC's new offering will reduce many of these costs for a given amount of storage. The thing to do then is build data centers to host these machines by the dozen.

    • I worked for a search engine in the Bay Area (not Google) which employed about 200-300 database servers, each stocked with 3 hard drives (one root, one mirrored set for the database files), the mirrored disks pegged with I/O.

      We replaced 4-5 a week, on average.

    • The Internet Archive Project http://www.archive.org/ [archive.org] is running on the PetaBox http://petabox.com/ [petabox.com] rack system, which was commercialized by Capricorn Tech http://www.capricorn-tech.com/ [capricorn-tech.com] more than a year ago.

      This system uses absolutely no board/controller level redundancy; instead they use a separate file system on every disk, then mirror pairs of 1U units, and finally mirror the entire (mirrored) rack to a geographically distant location.

      I am currently testing a much denser solution, the SATABeast http://ne [nexsan.com]
  • by rolfwind ( 528248 ) on Tuesday January 31, 2006 @01:40AM (#14604836)
    The thing is built around 2,400 500GB hard drives.

    I wonder when (if) the average consumer will be able to get 1 PB hard drives?

    I don't know if Moore's law applies historically to hard drives, but if doubling of capacity occurred every 18 months, and figuring 500 GB is the largest size now and the doubling continues into the future:

    500GB - Now
    1TB - 18 months
    2 - 36
    4 - 54
    8 - 72
    16 - 90
    32 - 108
    64 - 126
    128 - 144
    256 - 162
    512 - 180
    1024 TB = 1 PB - 198 months, which is 16.5 years.
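
    The same projection as a small sketch, so the doubling period can be swapped out (it treats 500 GB as half a terabyte and 1 PB as 1,024 TB, as the table above does):

    # Time until a ~500 GB drive reaches ~1 PB, assuming capacity doubles
    # every `doubling_months` months (a rough Moore's-law-style assumption).
    import math

    start_tb = 0.5
    target_tb = 1024             # ~1 PB
    doubling_months = 18

    doublings = math.ceil(math.log2(target_tb / start_tb))   # 11 doublings
    months = doublings * doubling_months
    print(f"{doublings} doublings -> {months} months (~{months / 12:.1f} years)")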
    • According to the paper High Density Hard Disk Drive Trends in the USA [tomcoughlin.com], hard drive density has doubled every 12 months.

      500GB - Now
      1TB - 1 year
      2 - 2
      4 - 3
      8 - 4
      16 - 5
      32 - 6
      64 - 7
      128 - 8
      256 - 9
      512 - 10
      1024 TB = 1 PB - 11 years, assuming that areal density continues to double and the form factor stays the same.
      • Unfortunately, that paper only covers 1998-2000.

        A few data points from that paper...

        1998.08 - 12Gb/sq in
        1999.08 - 26.5Gb/sq in
        2000.10 - 60Gb/sq in
        2002 - projected at 75Gbits/sq in

        1 (square inch) = 6.4516 square centimeters

        The Hitachi 7K400 series is only 62Gb/sq in. That's the 400GB drive that came out last fall (Sep 2005?). GMR was rumored to top out at around 80-100Gb/sq in, IIRC.

        Perpendicular recording is supposed to give us up to 230 Gb/sq in or up to 245 Gb/sq in, depending on who you ta
    • Petabyte drives... (Score:5, Informative)

      by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday January 31, 2006 @02:44AM (#14605067) Homepage Journal
      It really depends and Moore's Law doesn't really apply to it. The jumps tend to be much larger and much more random. The problem is that capacity is limited by several factors: drive speed, disk rigidity, read/write-head speed and the distance the read head is from the disk surface.


      The faster a disk spins, the more disk surface is exposed to the magnetic field used to write to the drive, so the less storage you have. Disk rigidity is important for two reasons - it limits how close the read head can get and it limits how precisely you can know how much disk surface has been visible. The faster you can either read magnetic fields or generate them, the less disk you need to write to, thus increasing storage. The distance of the read head determines the surface area exposed to the magnetic field on writing, so determines how far apart your data must be to not overlap.


      A trivial question might be: using a standard, existing hard disk (but modifying the controller as necessary), can you increase the capacity of the drive? The answer is "probably".


      One way to do it would be to add enough RAM such that a fairly substantial portion of the disk can be held in ramdisk on the controller. Because you are then not reading and writing to the disk directly, but going through ramdisk, the speed of the drive becomes much less important. If you slow the drive down substantially, whilst writing to it at the same speed, the data won't be smeared over the disk as much, so you should be able to increase the density.


      In practice, as disk manufacturers don't design their disks with that kind of mod in mind, you are very likely to run into significant problems with defects on the surface that simply aren't visible at 7200 or 15000 RPM. Other problems, such as stability (drives depend a lot on gyroscopic effects and aren't built to go slow), may also limit how much you can cheat on the density.


      Another option would be to seriously cool the read/write head, so that you could flip the magnetic state faster. Again, you're limited. Mechanical devices don't like being freeze-dried - even when they ARE dry. However, you may be able to get some improvement that way.


      If you're just looking for ANY increase in capacity, then that's trivial and requires no engineering (but some programming). Modern computers are very fast compared to modern hard drives. If you have one physical sector per physical track and break down the structure entirely in memory, you eliminate the need for inter-sector gaps, physical sector headers, etc. You might be able to squeeze out another 10%-15% by this method, which isn't a whole lot but isn't bad for the effort it would take.


      There are very likely other mods that hard disk manufacturers could use, but which would be totally beyond anyone doing homebrew stuff. The platters probably aren't using the absolute ideal materials - let's face it, they're in business to make money and there are far more home buyers wanting cheap drives than there are perfectionists wanting perfect drives. I suspect there are other areas they could improve on, using existing technology, but won't because it's not economic.


      That's probably why you see bursts of improvement. When there's a massive enough need for the extra storage, it can be achieved. When there isn't, it's not worth the extra investment.

  • Damn, I really need to create my own business in the storage market. I am not exactly sure about what EMC provides in this $4 million package (servers, 24x7 contract, maintenance, hard disk replacement, etc.) but I KNOW how to create a 1 PB storage device for less than half the price ($2 million instead of $4 million). And I am pretty sure about my numbers...

    I am sick of the current state of the storage market. Vendors are either designing unnecessarily expensive solutions, or are having HUGE margins...

  • I'll say it (Score:3, Funny)

    by Profane MuthaFucka ( 574406 ) <busheatskok@gmail.com> on Tuesday January 31, 2006 @02:17AM (#14604983) Homepage Journal
    A petabyte ought to be enough for anybody. And I mean it this time.
  • After the experiences that I have had with EMC gear in several different companies around the world, I am surprised that anyone would still use EMC junk. Ohhh, but I guess EMC would support someone who is paying $4 million for their storage gear rather than someone that only paid $0.5 million... Since the company has started to use alternative vendors for storage we have had fewer problems, with higher performance and better support at a much cheaper price. And what is more interesting is that the EMC story I a
  • by drgonzo59 ( 747139 ) on Tuesday January 31, 2006 @06:15AM (#14605623)
    I like the section heading: Peripherals

    With a beast like this that fills up a whole room, anything else becomes a peripheral....

  • Just don't name it the Peta-file. :)

    Rumor has it that Sony tried to use such a name for a tape system several years ago, until the North American team heard it. Not sure if that's true, even though it came from a friend who worked in Sony marketing in Canada at the time.

    MadCow
  • That it says you can get 480TB for $250K, so two of those would give you 0.96PB for $500K rather than spending 8 times that to get the other 0.04PB?
  • This should be enough to store hashes of all possible 8-character passwords over 92 keys. More or less, I mean.
    Man, thank god that Windows has a 256-char password length

    (it's all fiction, I made the numbers up. but I'm pretty sure about the size of the hashes db..)
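
    A quick back-of-the-envelope check of that guess (the 16-byte hash size is an assumption; the 92 characters and length 8 come from the comment above):

    # Space needed for a hash of every 8-character password over a
    # 92-character alphabet, at an assumed 16 bytes per hash.
    alphabet = 92
    length = 8
    hash_bytes = 16           # assumed; shrink this if you truncate the hashes

    combinations = alphabet ** length
    total_pb = combinations * hash_bytes / 2**50
    print(f"{combinations:,} passwords, ~{total_pb:,.0f} PB of hashes")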
  • 4000 bucks per terabyte sounds a little pricey. Whatever happened to economy of scale? On the other hand, I get about $3,900 for the price after 10 generations of splitting the price by 2. So figure in 10-15 years you'll have that petabyte, but by then you'll be drooling over the sextabyte or whatever it's called. (insert puns here)
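
    That halving math, as a tiny sketch (one halving every 12-18 months is the assumption behind the 10-15 year figure):

    # Price of a petabyte after 10 halvings, starting from the $4M array.
    start_price = 4_000_000
    halvings = 10
    print(f"${start_price / 2**halvings:,.0f}")   # ~$3,906, close to the $3,900 above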
