Sun Unveils Thumper Data Storage 285

zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire x4500, previously known under the 'Thumper' codename. It's a compact dual-Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, when the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And that will double within the year, once 1TB drives go on sale. More information is also available at Jonathan Schwartz's blog."
This discussion has been archived. No new comments can be posted.
  • :O (Score:4, Funny)

    by joe 155 ( 937621 ) on Tuesday July 11, 2006 @04:22PM (#15700941) Journal
    24TB... that's almost enough to hold all my pr0n!
  • I want one! (Score:3, Interesting)

    by andrewman327 ( 635952 ) on Tuesday July 11, 2006 @04:23PM (#15700952) Homepage Journal
    This is perfect for the space constraints applied to many server rooms nowadays. I wonder how they managed to control the heat output. My laptop only has one HDD and it gets pretty warm. I am very impressed that (according to Sun) it costs $2 per gig! As always, I hope it works as promised.
    • Re:I want one! (Score:5, Informative)

      by cyanics ( 168644 ) on Tuesday July 11, 2006 @04:25PM (#15700972) Homepage Journal
      and they are especially showing off the low power usage in that kind of space...

      48 HDDs, 2 CPUs, and still less than 1200 watts.

      Oh man. A data farm in a single rack.
    • Maybe they didn't do much at all to control the heat? It wouldn't be the first time a vendor left heat issues to the end user to resolve.

      I doubt this is the case though. Sun tends to make pretty good hardware. At least that's my limited experience.
    • Re:I want one! (Score:3, Informative)

      by Jeff DeMaagd ( 2015 )
      It's not that big of a problem. A 7200 RPM drive might take 15 W max, so 48 drives brings the total up to 720 W. Not that bad in the server world, especially given the capacity.
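      (A quick sanity check of the parent's estimate, as a minimal Python sketch. The 15 W per-drive figure is the commenter's assumption rather than a published spec, and the 1200 W whole-box number is the figure quoted earlier in the thread.)

      # Back-of-the-envelope power math for the x4500's drive bay.
      DRIVES = 48
      WATTS_PER_DRIVE = 15  # commenter's assumed worst case for a 7200 RPM drive

      drive_power = DRIVES * WATTS_PER_DRIVE
      print(f"{DRIVES} drives x {WATTS_PER_DRIVE} W = {drive_power} W")  # 720 W

      # The whole-box figure quoted in the thread is under 1200 W, which
      # leaves roughly 480 W of headroom for CPUs, fans, and the board.
      WHOLE_BOX_W = 1200
      print(f"Headroom for everything else: {WHOLE_BOX_W - drive_power} W")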
    • Re:I want one! (Score:3, Insightful)

      From TFA (the last one): "We're still figuring out what to call the product, 'open source storage' or 'a data server,' but by running a general purpose OS on a general purpose server platform, packed to the gills with storage capacity, you can actually run databases, video pumps or business intelligence apps directly on the device itself, and get absolutely stunning performance. Without custom hardware (ZFS puts into software what was historically done with specialized hardware). All for around $2.50/gigabyte."
    • 2 != 2.5; that 50 cents a gig makes a big price difference... when you are talking about 2.50 * 24 * 1024, that 50 cents adds 12 grand to the final price...
  • Interesting (Score:3, Funny)

    by Dark Paladin ( 116525 ) <jhummel.johnhummel@net> on Tuesday July 11, 2006 @04:24PM (#15700959) Homepage
    I've been talking to the wife about getting a NAS for the house - but now a 1 to 2 terabyte system seems so...puny.

    Hey, honey - remember how I said I wanted to store *all* the movies on the server? Get a load of this ;).
  • Did you see how tightly packed the drives were? Is heat a concern or is there a tornado cooling system in place?

    http://religiousfreaks.com/ [religiousfreaks.com]
    • Re:Holy SHIT! (Score:2, Insightful)

      by IflyRC ( 956454 )
      That's the Bambi Cooling Add-On system.
    • Re:Holy SHIT! (Score:2, Insightful)

      by Anonymous Coward
      Could you please put the link to your stupid website in your sig, so those of us who are uninterested don't need to read it a dozen times in every story? KTHX...
    • Re:Holy SHIT! (Score:5, Informative)

      by imsabbel ( 611519 ) on Tuesday July 11, 2006 @05:03PM (#15701282)
      Why does everybody here get so worked up about "The HEAT!!111"?
      It's 48 HDDs in a 4U case, and 48 HDDs is about 600 W under full load.
      If you compare this to the fact that there are dual-socket, dual-core servers out there that push 300 W through a 1U case, that's nothing.

      Also, a 4U case allows the use of nice fat 12 cm fans in the front, while the horizontal backplane allows for free airflow (in contrast to the vertical ones used before).
      • Re:Holy SHIT! (Score:3, Interesting)

        by UberLame ( 249268 )
        It might have allowed for 12 cm fans, but if you had looked, you would have seen that they are using 10 much smaller fans. Ick.

        Meanwhile, the x4600 (8 dual-core Opteron system) does apparently use two 12 cm fans.

        With all those disks, I suppose it might not make much difference, but I would rather have seen them use 12 cm fans on the x4500 as well.
        • Re:Holy SHIT! (Score:3, Interesting)

          by buysse ( 5473 ) *
          Sun typically worries more about redundancy than noise. The 10 small fans are hot-swappable and run at ridiculous speeds (and yes, sound like an A320 revving up for takeoff), but I bet the thermal budget allows four of them to be dead at any given time.
  • "starting as low as $2 per GB"


    Doesn't sound like much... but that's $48,000 for the top 24TB model.

    Perhaps it's time to start using "per TB" costs for these things. Surely no one sells sub-terabyte storage servers anymore.

    • > Surely no one sells sub-terabyte storage servers anymore.

      Most "storage" servers sold today have less than a terabyte of capacity.
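      (For what the per-GB pricing actually works out to, here's a minimal Python sketch of the arithmetic. The $2/GB figure is the price quoted in the thread; both the decimal and binary readings of "24 TB" are shown, since the comments mix them.)

      # Convert the quoted $2/GB price into totals and per-TB costs.
      PRICE_PER_GB = 2.00
      RAW_TB = 24

      for label, gigabytes in [("decimal (24,000 GB)", RAW_TB * 1000),
                               ("binary (24 * 1024 GiB)", RAW_TB * 1024)]:
          total = PRICE_PER_GB * gigabytes
          print(f"{label}: ${total:,.0f} total, ${total / RAW_TB:,.0f}/TB")
      # decimal (24,000 GB): $48,000 total, $2,000/TB
      # binary (24 * 1024 GiB): $49,152 total, $2,048/TB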
  • Okay... (Score:5, Funny)

    by Vo0k ( 760020 ) on Tuesday July 11, 2006 @04:29PM (#15701009) Journal
    ...but how good is it at repelling the antlions?
  • No doubt that fully loaded it generates almost as much heat as the sun, too!

    Really snazzy tech, but that's a lot of moving parts in a little space... and probably too hot to touch. Could you imagine the cooling required for a densely-packed data center of these things?

    Or am I way off base here?
  • Dune.. (Score:4, Funny)

    by WizADSL ( 839896 ) on Tuesday July 11, 2006 @04:30PM (#15701018)
    Thumper? I hope the sand worms stay away...
  • cooling (Score:3, Interesting)

    by Zheng Yi Quan ( 984645 ) on Tuesday July 11, 2006 @04:31PM (#15701027)
    Heat output from all those drives is a concern, but if you look at the photo on the ponytailed hippie's blog, you can see that the box has 20 fans in the front and probably more in the back. Makes you wonder what the thrust-to-weight ratio is. This box is going to make a screaming database server. 2GB/sec throughput to the internal disk beats anything out there, -and- the customer doesn't need to invest in SAN hardware to do it.
    • Great, so it's throwing all that heat out to raise the ambient temperature in the rack and force you to invest in more air conditioning power and more specialized airflow in the rack to keep this thing from damaging your other systems.

      I'd be interested to see how your actual overall power consumption within a rack and within a data center is affected by this thing.
    • Database performance is generally more a matter of IO/s than GB/s. Thumper may still win an equal-cost comparison against enterprisey SAN equipment because it gets more spindles.
  • I would love to be able to spend $33K on that. I'd be lucky to get something in the $3-$4K range approved though. Do you have anything in that price range that I might actually get past my boss?
  • Am I wrong in thinking that AoE (ATA over Ethernet) is a little more flexible / interesting than this?
  • Wow (Score:2, Funny)

    by bepolite ( 972314 )
    If my math is right... that's 50,331,648MB / 295,734,134 (US Population) = 174.27683 kilobytes for every man woman and child in the US. In one box!
    • 296,344,308,438,456,234 * 349,000,000 = Who the hell cares about that statistic??? Seriously, let's compare the bit count per 1U to the number of chicken eggs laid per year in the US.
    • I remember when I got my 1541-compatible MSD SuperDrive for my Commodore 64. Those 5 1/4" floppies held an amazing 170KB of data. That was like the equivalent of 20 5-minute tapes!

    • 174.27683 kilobytes for every man woman and child in the US. In one box!

      Holy shit, this can easily be one 1024x768 jpeg image per person! If we automatically throw out the dudes and everyone under 18* or over 35**, there would be enough space for a small gallery!

      --
      *, ** - adjust to your preference; further filtering depends on available data
    • Re:Wow (Score:3, Funny)

      by pimpimpim ( 811140 )
      Still less than the 640 k that should be enough for everybody, though! But with four of these you might be ok :)
  • by Jerk City Troll ( 661616 ) on Tuesday July 11, 2006 @04:54PM (#15701224) Homepage

    It would be nice if the system had a setting where you could transparently specify a redundancy factor at the expense of capacity. For example, I could set a ratio of 1:3 where each bit is stored on three separate disks. This ratio could increase up to the number of disks in the system. And of course, little red lights appear on failed disks, at which point you simply swap them out and everything operates as if nothing happened (duh). Sure, we have a degree of this already, but managing redundant arrays is still a very manual process, and when we start talking about tens or soon hundreds of terabytes, increased automation becomes a necessity.

    • by Anonymous Coward on Tuesday July 11, 2006 @05:02PM (#15701273)
      Check out ZFS-- http://www.opensolaris.org/os/community/zfs [opensolaris.org]

      It makes managing this sort of storage box a snap, and allows you to dial up or down the level of redundancy by using either mirroring (2-way, 3-way, or more) or RAIDZ. And soon, RAIDZ2.

      Additionally, Solaris running on the machine has fault management support for the drives, can work with the SMART data to predict drive failures, and exposes the drives to inspection via IPMI and other management interfaces. Fault LEDs light when drives experience failures, making them a snap to find and replace.
    • by Wesley Felter ( 138342 ) <wesley@felter.org> on Tuesday July 11, 2006 @05:04PM (#15701296) Homepage
      ZFS can provide anywhere between 200% and 10% redundancy depending on what mode and stripe size you use. It should also automatically repair when failed disks are replaced.
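      (To make that "200% down to 10%" range concrete, here's a minimal Python sketch of the capacity arithmetic behind the modes mentioned above. This is generic RAID math under the stated widths, not ZFS's actual allocator.)

      # Rough usable-capacity math for mirroring and RAID-Z configurations.
      def usable_fraction(mode: str, width: int) -> float:
          """Fraction of raw capacity left after redundancy.

          mode  -- 'mirror' (width = number of copies),
                   'raidz'  (width = disks per stripe, 1 parity disk),
                   'raidz2' (width = disks per stripe, 2 parity disks)
          """
          if mode == "mirror":
              return 1.0 / width
          if mode == "raidz":
              return (width - 1) / width
          if mode == "raidz2":
              return (width - 2) / width
          raise ValueError(mode)

      RAW_TB = 24
      for mode, width in [("mirror", 2), ("mirror", 3),
                          ("raidz", 6), ("raidz2", 12)]:
          frac = usable_fraction(mode, width)
          print(f"{mode:6s} width {width:2d}: {frac:.0%} usable "
                f"= {RAW_TB * frac:.1f} TB of {RAW_TB} TB raw")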
    • And of course, little red lights appear on failed disks, at which point you simply swap it out and everything operates as if nothing happened...

      I was thinking about that. With that case design, you have to pull the entire server and pop off the cover to yank one drive. I couldn't tell from the pictures how the enclosures worked, either; handles didn't seem evident. The idea is interesting, though. It almost looks like they could even squeeze it down to 2U with some creative cooling.

      Since they have 12-ro
    • In the current development kernel, RAID-Z2 gives you double-parity redundancy. This should be in one of the forthcoming updates to Solaris 10 (it didn't make Update 2).

      Tp.
  • I saw at least 2 different companies offering almost identical ideas at least 2 years ago. Sure, the total storage wasn't as high because the disks weren't as big yet, but a big 4U chassis with a ton of disks installed vertically isn't anything new.

    • Re:Pfft... (Score:3, Interesting)

      by spun ( 1352 )
      You can also buy commodity 3U server chassis that hold 16 drives. We built a number of these as ROCKS cluster head nodes for Los Alamos National Labs. Two 3ware SATA RAID cards running 8-drive RAID 5 arrays, bonded together in software as a RAID 0 array. Decent performance, relatively inexpensively. Which is, after all, what the I in RAID is supposed to stand for. If you do this, get the SATA backplane that uses 4 Infiniband cables instead of 16 SATA cables and the cards that support that. I've done it both wa
        • So, you can answer the question: how hot does this all get? Can the disks really stay cool enough during full use? And what about the swappability of the disks? Front-loaded ones are easy to reach, but to remove these vertical ones it seems you'll have to go through a lot of hassle to get them out (and hope they're not hot :) )
  • I have an SSA 1000 in storage with a matching SparcStation 10 that has a fibre channel host bus adapter. The SSA had 30 SCSI disks in groups of 10 and a fibre channel interface. Once upon a time it made for a lot of fun doing database performance modeling with 30 1GB SCSI drives. The size was roughly the same.
  • Considering you can get 750GB drives now, shouldn't this thing be currently capable of 36 TB raw capacity?
  • to imagine a Beowulf cluster of these?

    Sorry, couldn't resist; I'm usually about a day late for that particular well-worn meme.
  • by linuxbaby ( 124641 ) * on Tuesday July 11, 2006 @05:11PM (#15701353)
    We were waiting anxiously for this item to be announced, because we have about 100TB of storage (now) and add about 8TB per month. Perfect customer for these.

    But, unfortunately, they're not quite as cheap as I had thought. (Friend on the inside thought Sun was going to price them at $1.25 per GB, not $2 per GB)

    Instead, we've been using these. Very good cooling:
    http://www.rackmountpro.com/productpage.php?prodid=2348 [rackmountpro.com]

    32 SATA-II 750GB drives = 24TB, same as the Sun X4500, but only $16,000 for the entire system (chassis, mobo, RAM, drives) instead of $70,000 for the Sun Thumper. A huge difference, especially if you're ordering many of them.
    • I agree, this Sun box is way overpriced. In May, I received a similar box for our lab from Atipa [atipa.com]. Dual Opteron, 24 SATA-II 500GB drives. I'm testing it in a RAID 60 configuration right now and pulling over 350 MB/s at the application level. I'm using a pair of Areca [areca.us] RAID 6 controllers (with real open source kernel support, thanks Eric Chen!) and striping them together with mdadm. It's amazingly fast. And with 500GB platters, I'm relieved to have N+2 redundancy.
    • If you buy 10 at a time, it comes down to around $47k each (http://store.sun.com/CMTemplate/CEServlet?process=SunStore&cmdViewProduct_CP&catid=151017). Also, if you're paying list price on Sun kit, you're doing something wrong.
    • Sure, but if you're ordering many (i.e. in 10's), you're paying $47,099.50 apiece. Still more expensive, though. But as I understand it you also get the entire rack as well (no clue how cheap that is, though).

      Also, the one you're linking to is a 7U unit, whereas Sun's is a 4U unit. IOW you can mount, I think, 6 units from Rackmount Pro or 10 units from Sun, for 144 TB/rack vs 240 TB/rack. (That's with a 42U rack, which I believe is standard.)

      I won't get into anything wrt serviceability, management etc., as I've absolu
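      (The rack-density figures above check out; a minimal Python sketch of the arithmetic, assuming a standard 42U rack and 24 TB per chassis either way:)

      # TB per 42U rack: 7U generic chassis vs. 4U Sun x4500.
      RACK_U = 42
      for name, unit_height, tb_per_unit in [("7U generic chassis", 7, 24),
                                             ("4U Sun x4500", 4, 24)]:
          units = RACK_U // unit_height  # whole chassis that fit in the rack
          print(f"{name}: {units} per rack = {units * tb_per_unit} TB/rack")
      # 7U generic chassis: 6 per rack = 144 TB/rack
      # 4U Sun x4500: 10 per rack = 240 TB/rack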
    • by illumin8 ( 148082 ) on Tuesday July 11, 2006 @07:43PM (#15702341) Journal
      Instead, we've been using these. Very good cooling:

      Unfortunately, with a generic motherboard and an off-the-shelf SATA RAID controller, good luck fixing the thing when a drive fails. What's that? The RAID controller is reporting a bad drive, but you have no idea which drive it is because there's no way to light it up without shutting down the server and going into the RAID controller BIOS and telling it to flash the drive light?

      Tough luck. There is a reason why Sun is a little more expensive: RAS. RAS is Sun's main hardware principle. It stands for Reliability, Availability, and Serviceability. Sun hardware is truly built with these concepts in mind. Concepts like: A failed component should trigger a visible alert (warning light), as well as a human readable syslog message that calls out the exact part that failed. You will never see these things in a self-built beige box without some serious hardware hacking on your own, and at that point, you might as well hire a team of EEs to reinvent the wheel.
  • You got your Blog linked on the front page of Slashdot. Now get your butt upstairs, Mom needs help with the dishes!
  • I smell a lawsuit from Disney around the corner...
  • ZFS (Score:5, Insightful)

    by XNormal ( 8617 ) on Tuesday July 11, 2006 @05:15PM (#15701385) Homepage
    This fits nicely with Sun's new ZFS [opensolaris.org] file system.

    ZFS blurs the traditional boundaries between volume management, RAID and file systems. All disks are added into one big pool that can be carved out into either the native ZFS filesystem format or virtual volumes that can be formatted as other filesystem formats. It has many other interesting features like instantaneous snapshots and copy-on-write clones.
  • by monopole ( 44023 ) on Tuesday July 11, 2006 @05:23PM (#15701464)
    Or the complete text content of the Library of Congress, coupled with 6 Academic Research Libraries, with the capacity to dump the equivalent of 2 pickup trucks' worth of books every second. In a 4U rack. For the price of several cars. Now that's my type of bookshelf system!

  • Crazy (Score:3, Funny)

    by Reality Master 101 ( 179095 ) <RealityMaster101@gmail. c o m> on Tuesday July 11, 2006 @05:29PM (#15701517) Homepage Journal
    <old_man_mode>Yeh dern kids today are gawdamn spoiled. Back in mah day, we didn't have these FANcy tahrabyte arrays! My TRS-80 had 128K -- that's right, KAY-uh -- on a floppy! And the operating system took about 40K of that, leavin' me about 85K left! And I was happy to have it! I had tuh use a paper hole-puncher and cut a write-protect tab so I could flip the floppy over tuh get more space! Damn kids these days... -mumble- -grumble-</old_man_mode>
  • by xenophrak ( 457095 ) on Tuesday July 11, 2006 @05:30PM (#15701522)
    I'm glad that they are at least offering a server in this class with 3.5" disks. The 2.5" 10K RPM SAS disks that are on the x4100 and x4200 are just junk pure and simple.
  • by HockeyPuck ( 141947 ) on Tuesday July 11, 2006 @06:21PM (#15701862)
    If you liked the concept of the e450, you'll like this box.

    If you are interested in storage consolidation and increasing utilization while reducing storage islands, this isn't for you.

    With 48 disks, you'll want protection... all implemented in software RAID. So you do RAID 5 and probably create RAID groups of 12 disks? 8 disks? As the number of disks in a RAID group goes down, the fraction of disk you waste on parity goes up, as do the CPU cycles spent calculating parity.

    As the industry moves to FC boot and iSCSI boot to alleviate the need to stock disk drives from 15 different vendors, this is an interesting idea for those who don't want to have a RAID array. But in most shops, huge internal storage is sooooo '90s.
    How do you replicate this beast? Veritas Volume Replicator. Serverless backup? Nope.
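    (The parity tradeoff described above, sketched in Python for 48 disks split into equal RAID 5 groups. One parity disk per group is the RAID 5 rule, so smaller groups burn a larger fraction of raw capacity; the group sizes are just illustrative choices.)

    # Parity overhead vs. RAID 5 group size for a 48-disk box.
    TOTAL_DISKS = 48
    for group_size in (4, 6, 8, 12, 16):
        groups = TOTAL_DISKS // group_size
        parity_disks = groups  # one parity disk per RAID 5 group
        overhead = parity_disks / TOTAL_DISKS
        print(f"group size {group_size:2d}: {groups:2d} groups, "
              f"{parity_disks:2d} parity disks, {overhead:.1%} of raw capacity")
    # group size  4: 12 groups, 12 parity disks, 25.0% of raw capacity
    # group size 16:  3 groups,  3 parity disks,  6.2% of raw capacity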
  • What 4U "standard" (Score:3, Informative)

    by thenerdgod ( 122843 ) on Wednesday July 12, 2006 @07:40AM (#15704257) Homepage
    Yes, when the standard for a 4U server is four to eight hard disks

    Bullpucky. Maybe on your planet. A PC 4U NAS box in my world holds 24 SATA HDDs. Oh, you mean a standard 4U server... which usually means a quad-CPU box with 4GB of RAM and a couple of fugly FC controllers. See, your problem is that Thumper is for storage, where the 4U form factor is for drives, and the standard is more like 12 to 24.

    </flame>
