3 Terabytes, 80 Watts

legoburner writes "The Enquirer is reporting that Capricorn has released a mini-ITX-based, 1U-sized storage computer featuring four 750-GB hard drives and a 1-GHz controller system, with a typical power usage of an astounding 80 W per machine. A full 40U rack uses only 3.2 kW, which works out to less than 30 kW for an entire petabyte!"
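
As a quick sanity check of the arithmetic in the summary (a minimal back-of-the-envelope sketch in Python; the 80 W, 3 TB per node, and 40-node rack figures come straight from the submission):

    # Back-of-the-envelope check of the power figures in the summary.
    node_watts = 80          # typical draw per 1U node, per the summary
    node_tb = 3              # four 750 GB drives per node
    nodes_per_rack = 40      # a full 40U rack

    rack_kw = node_watts * nodes_per_rack / 1000
    watts_per_pb = node_watts * (1000 / node_tb)          # treating 1 PB as 1000 TB

    print(f"rack draw: {rack_kw:.1f} kW")                 # 3.2 kW
    print(f"per petabyte: {watts_per_pb / 1000:.1f} kW")  # ~26.7 kW, i.e. under 30 kW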
  • Ouch (Score:2, Funny)

    by lucky13pjn ( 979922 )
    And they would need all that storage to record their utility bills.
    • Re:Ouch (Score:4, Informative)

      by $RANDOMLUSER ( 804576 ) on Tuesday August 29, 2006 @12:35PM (#16000565)
      3200 watts for 120 terabytes - that's like two hand-held hair dryers!
    • Where do you live? (Score:2, Interesting)

      by ackthpt ( 218170 ) *

      And they would need all that storage to record their utility bills.

      Where do you live that 80 watts is a big drain on financial resources?

      My CPU consumes 39 watts and I consider that loverly, compared to the old CPU which sucked 70+ watts.

      • by Alaria Phrozen ( 975601 ) on Tuesday August 29, 2006 @12:57PM (#16000732)
        You think that's bad? My CPU fan consumes 70+ watts.

        What's worse, the rig I built to cool my hard drives is essentially a system of Peltier devices http://en.wikipedia.org/wiki/Peltier-Seebeck_effect [wikipedia.org] in series!

        The very premise of the grandparent is ridiculous. Now if it were one-point-twenty-one-jigga-watts, I'd be worried, but 80 watts? I've got more on in light bulbs right now!
        • by DarthStrydre ( 685032 ) on Tuesday August 29, 2006 @01:54PM (#16001104)
          I hope you mean in parallel on your drives. Peltiers in series are not befitting that application unless you live in an unusually hot house, or have drives requiring cryo conditions.

          Placing Peltier patties in series decreases the amount of heat they can move while increasing the temperature differential, and even that only if the stack is properly designed. It is very easy to put two Peltiers in series and get worse performance than from a single device. In your case, with a non-static system (i.e. the hard drives are actively PRODUCING heat that you wish to remove), heat handling seems more important than a massive temperature differential. In thermodynamics there is no free lunch... your secondary Peltier is not only moving the heat away from the drive, but also has to struggle with the heat it produces itself (however much electrical power it consumes ends up as heat) as well as with the first-stage cooler.

          To optimize a multistage thermoelectric cooler, a rule of thumb is that each stage should receive 1/2 to 1/3 as much current as the previous one. This roughly translates to an equivalent voltage ratio, though as the temperature and temperature delta change, the silicon has different resistances, and the Seebeck effect also changes the apparent resistance.

          In a PC, if you really want to do multistage peltier patties, and assuming they are 12v devices, you would notice an increase in performance (i.e. less heat coming off the hot side, and a colder cold side) if you were to connect the hard drive peltier to the +5V rail, and the heat sink peltier to the +12 rail. This is a very crude system, but definitely better than running both on +12.

          I still maintain that the coolers in parallel are preferable for nearly any computer usage. You have a metric library of congress of BTUs (slashdot measurement) to move quickly. Stacked units do this poorly.

          I have some data at home to determine near-optimal steady-state stacked configurations. Google is a help too, though sorting through the deep research and crackpot FAQs is rather tedious in this realm.
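
          To make that staging rule of thumb concrete, here is a purely illustrative sketch in Python (the 12 V top stage and the 1/2 to 1/3 ratio are just the numbers from the comment above, not data from any real cooler):

            # Illustrative only: "each stage gets 1/2 to 1/3 the current of the previous
            # one", expressed here as rough per-stage drive voltages for 12 V modules.
            def stage_voltages(top_voltage=12.0, stages=2, ratio=0.5):
                """Hot-side stage first; each colder stage gets ratio x the previous drive level."""
                return [round(top_voltage * ratio**i, 2) for i in range(stages)]

            print(stage_voltages(stages=2, ratio=0.5))   # [12.0, 6.0] -- close to the crude +12 V / +5 V wiring
            print(stage_voltages(stages=3, ratio=1/3))   # [12.0, 4.0, 1.33] -- the more aggressive 1/3 ratio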
    • You've never seen a utility bill before. Don't worry, you'll see one very soon after you move out of mum's basement.

      The article says a rack of 40 of these little babies consumes in the neighbourhood of 3.2 kW. That's roughly equivalent to two nice microwave ovens. Yes yes, I know you don't run microwave ovens 24/7. But if you didn't close your refrigerator door all the way and it ran all day it would cost about the same as this unit. It would hardly put you in the poor house, especially if you had the m
  • by legoburner ( 702695 ) on Tuesday August 29, 2006 @12:32PM (#16000547) Homepage Journal
    At last, a chance for a rejected ask slashdot of mine... What is the structure of your file storage area / file server? How do you filter and back things up for your home file server?
    • Re: (Score:3, Informative)

      by jjeffries ( 17675 )
      backuppc [sourceforge.net] rocks!
      • Re: (Score:2, Informative)

        by JayAEU ( 33022 )
        Indeed it does, but not on a system like that. BackupPC relies heavily on MD5-checksums and does on the fly (de-)compression of archived files, so a little more horsepower is necessary for smooth operation.

        But other than that, there's nothing like BackupPC for a painless and effortless network-based backup system.
    • by Ant P. ( 974313 )
      I just put all my media files in /home/ and back it up with scp. Call me lazy, but it works.
    • I have an old 386 running fedora and samba on a 120GB drive with no RAID whatsoever. The machine won't fit another drive and an upgrade will involve so much hassle I've been putting it off over and over. Any reasonable upgrade would have to involve a terabyte machine because I don't want to go through the hassle of upgrading too soon after.

      CD-based backups would be laughable considering that the disk is almost filled with downloaded TV shows and movies. Ditto for DVDs, not to mention the impracticality. USB HDDs: backups are meant to be _more_ secure. Internet backup: not enough bandwidth. I never thought I'd say this, but I miss tapes.

      It's going to give out. I know it, you know it, we all know it. Bloody shows aren't even that good... *grumble*....
      • Lies! (Score:3, Informative)

        by Inoshiro ( 71693 )
        "I have an old 386 running fedora and samba on a 120GB drive with no RAID whatsoever. The machine won't fit another drive and an upgrade will involve so much hassle I've been putting it off over and over. Any reasonable upgrade would have to involve a terabyte machine because I don't want to go through the hassle of upgrading too soon after."

        Yea, well, 1986 called, they want their CPU back.

        Your system isn't a 386, though; old PATA IDE controllers on those things couldn't address more than 4 or 8gb (the firs
    • by tf23 ( 27474 )
      rsync to big external f/w ide hd's on a separate box. depending on the data, daily backup, sometimes weekly, sometimes monthly. add in cvs repos and tarballs of those and it's pretty easy to cron up. the lesser-used f/w drives you can turn off between backup runs. you can pick up some fairly large ide drives and el cheapo enclosures to slap them in.

      if you really want to be careful, you'd have a few to rotate and then you could drop one at the safe-deposit-box.
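
      A minimal sketch of that kind of cron-driven rsync job in Python (wrapping the rsync binary; the source and destination paths are made-up placeholders, and a real setup would add mount checks and rotation):

        #!/usr/bin/env python3
        # Minimal sketch of an rsync-to-external-drive backup as described above.
        # SRC and DEST are placeholders; run from cron daily/weekly/monthly as needed.
        import datetime
        import subprocess
        import sys

        SRC = "/srv/data/"                  # what to back up (placeholder)
        DEST = "/mnt/fw_backup/current/"    # mounted external FireWire/IDE drive (placeholder)

        def backup():
            stamp = datetime.date.today().isoformat()
            result = subprocess.run(["rsync", "-a", "--delete", SRC, DEST],
                                    capture_output=True, text=True)
            if result.returncode != 0:
                print(f"{stamp}: rsync failed:\n{result.stderr}", file=sys.stderr)
                sys.exit(result.returncode)
            print(f"{stamp}: backup OK")

        if __name__ == "__main__":
            backup()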

    • Three tiers:

      1. Working copy on my laptop, gets checked into
      2. a subversion repository on a machine with a RAID-1 set.
      3. At the end of every week, a machine in a different country makes a dump of the repository and rsyncs it across. It then keeps five weeks of these in separate files, in case the main server is compromised (or corrupted) and I don't notice. (A rough sketch of this step follows below.)
      I'm just coming up to the end of a PhD, and I don't want to lose my thesis and have to start again.
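
      A rough sketch of that weekly third tier in Python (hostnames, paths, and the five-dump retention are assumptions based on the description above; for brevity the dump is streamed over ssh rather than rsync'd as a separate step):

        #!/usr/bin/env python3
        # Rough sketch of tier 3: a weekly svnadmin dump pulled from the repository
        # host, keeping the last five weekly dumps. Host and paths are placeholders.
        import datetime
        import pathlib
        import subprocess

        REPO_HOST = "svnbox"                       # the RAID-1 machine (placeholder)
        REPO_PATH = "/var/svn/thesis"              # repository path (placeholder)
        LOCAL_DIR = pathlib.Path("/backups/svn")
        KEEP = 5                                   # five weeks of independent dumps

        def weekly_dump():
            LOCAL_DIR.mkdir(parents=True, exist_ok=True)
            stamp = datetime.date.today().isoformat()
            dump_file = LOCAL_DIR / f"thesis-{stamp}.svndump.gz"
            with open(dump_file, "wb") as out:
                subprocess.run(["ssh", REPO_HOST, f"svnadmin dump -q {REPO_PATH} | gzip"],
                               stdout=out, check=True)
            for old in sorted(LOCAL_DIR.glob("thesis-*.svndump.gz"))[:-KEEP]:
                old.unlink()               # drop everything but the newest KEEP dumps

        if __name__ == "__main__":
            weekly_dump()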
    • 300GB RAID1 - data, databases, subversion
      600GB RAID10 - backups (rsync snapshots or Second Copy via Samba)

      Offsite backups are a set of 300GB external USB/Firewire drives, rotated periodically. Those drives pretty much only hold a snapshot at a current point in time. They also concentrate on non-replaceable data rather than system configuration.

      System config is a combination of storing any edited config files in Subversion along with weekly snapshots via Dirvish / RSnapshot / RSync. The SubVersion al
  • by SpasticMutant ( 748828 ) on Tuesday August 29, 2006 @12:33PM (#16000555) Homepage
    As long as it's not silver, black or grey, I'm fine with adding another petabyte to my current configuration. If only my file system could handle it...
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Tuesday August 29, 2006 @12:33PM (#16000556)
    Knowing my luck, I'd probably tip the rack over and lose all the bank data to a massive head crash.

    If information is power, then this thing is a perpetual motion machine.
  • by BlahMatt ( 931052 ) on Tuesday August 29, 2006 @12:35PM (#16000569)
    "you could have every reality TV program at your fingertips for a little less than the cost of an average house."
    Thank you I think I'll buy the house...

    A) Because why would you want every reality TV program at your fingertips?
    B) Because we already do (See http://bittorrent.com/ [bittorrent.com])
    C) Because... just because.
  • by kcbrown ( 7426 ) <slashdot@sysexperts.com> on Tuesday August 29, 2006 @12:39PM (#16000595)

    Not when I keep getting a new internet every few minutes. A whole internet every few minutes! Can you imagine how many libraries of congress that is? I don't know about you, but I'd have a lot of trouble stuffing an entire library of congress into one of those tubes! And since the library of congress is obviously a lot bigger than this storage computer, there's no way you could stuff it into it!

    Until they come out with one of these that's bigger than the library of congress, I'm not buying!

    - Senator Ted Stevens, computer guru extraordinaire

    • Re: (Score:2, Insightful)

      Ted Stevens jokes will not get old for a long, long while. It is a glorious time to be alive.
  • by StressGuy ( 472374 ) on Tuesday August 29, 2006 @12:40PM (#16000604)
    My head was spinning with the amazing possibilities such an immense data-storage-per-kW solution could be applied to... why, it was even re-kindling my interest in the mini-ITX board. Then... I read the article

    "The next step up is the TB120 PetaBox, basically a rack of 40 GB3000s and an ethernet switch or two."

    WOW! so far so good...then, things turn ugly

      "If you need more space than that, I would say it is time to lay off the naughty pictures for a bit and seek serious help.

    In any case, Capricorn is saying you can get into one to the TB120s for about $1.50 a GB, and a little math says a full rack would cost under $200K. If you think that is a lot, imagine the Tivo you could make out of one, you could have every reality TV program at your fingertips for a little less than the cost of an average house."

    Yup, for a mere $200K, you too can have every reality TV program and/or naughty picture at your fingertips... and here I was thinking about mundane things like virtual libraries, genome sequencing, protein folding, etc.

    I'm going to be sad now...

    • It's coming to the point where every home should come equipped with a file server. We have MP3 collections, DVRs, home video to be edited, backups, etc.. Most technically savvy folks realize at some point that it's much cheaper and manageable to build out a file server than to try to replicate across multiple machines. Worst is when a box needs to be updated...

      A typical computer power user may have the following:
      1) 100G-200G worth of DV video from home movies
      2) 10G-20G of MP3s
      3) 5G-10G of digital camera pic
    • by cdrudge ( 68377 )

      Yup, for a mere $200K, you too can have every reality TV program and/or naughty picture at your fingertips... and here I was thinking about mundane things like virtual libraries, genome sequencing, protein folding, etc.

      I'm going to be sad now...

      The problem with saying they can be used for genome sequencing, protein folding, etc. is that the typical Enquirer reader isn't going to have a clue how much space those activities take. I consider myself an above average geek and I don't know what an educated guess

    • Re: (Score:2, Offtopic)

      "up, for a mere $200K, you too can have every reality TV program and/or naughty picture at your fingertips"

      It would only be worth the $200K if that included the rights to all reality shows - so I could pull the plug on all of them...
  • by Doc Ruby ( 173196 ) on Tuesday August 29, 2006 @12:41PM (#16000607) Homepage Journal
    What's the best SW RAID to run against a cluster of these 3TB 1U appliances, one that transparently offers swappable cluster units to apps designed to write to local filesystems?
    • by Firehed ( 942385 )
      Well, even JBOD would be stupid, let alone the RAID0 that some enthusiasts would try (which would probably saturate a quality gigabit link anyways). I think a hardware RAID5 approach for each cluster would be a very good idea, though each of those acting as a node of sorts in a massive RAID50 array would be good. I wouldn't want to lose an entire node due to a single drive, so I figure it would be best to find the bad node, replace its bad drive and rebuild that array, then pop that node back into place a
      • HW RAID is incompatible with the HW we're discussing, at least in power/size/cost specs.

        Repopulating the drives takes as long in SW as in HW, limited by the HD write speed. Saturating a GB network is also not a function of the RAID, whether HW or SW.

        You're talking about the bandwidth constraints of moving local filesystems to network storage, which is another matter. Once the network and storage HW can accommodate the app bandwidth (and latency) requirements, SW RAID on a cluster of these cheap, cool, tiny a
      • Re:RAID (Score:5, Informative)

        by WuphonsReach ( 684551 ) on Tuesday August 29, 2006 @02:53PM (#16001492)
        Rebuild rate for a RAID1, 2-drive, 750GB SATA set is around 75MB/s. (Raw read/write rates for 750GB drives are around 75MB/s as well.) So figure 3 hours to rebuild a RAID1 array.

        Not sure what rebuild rates would be on a RAID5, probably about half of that? So 6 hours to rebuild the array?

        (That's using 750GB SATA drives with Software RAID on a PCIe motherboard.)
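
        The arithmetic behind those estimates, spelled out (a quick check; the 75MB/s figure is the one quoted above):

          # Quick check of the rebuild-time estimates above.
          drive_gb = 750
          rate_mb_s = 75                                # sustained rate quoted for 750GB SATA

          raid1_hours = drive_gb * 1000 / rate_mb_s / 3600
          print(f"RAID1 rebuild: ~{raid1_hours:.1f} h")              # ~2.8 h, i.e. about 3 hours
          print(f"RAID5 at half the rate: ~{raid1_hours * 2:.1f} h") # ~5.6 h, about 6 hours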

    • If you want to make this hardware into a single filesystem and you want scalable performance, Lustre is pretty much the only choice.

      (Someone suggested ZFS, but that would require a single file server that would potentially become a performance bottleneck.)
  • Bah (Score:4, Funny)

    by bunions ( 970377 ) on Tuesday August 29, 2006 @12:41PM (#16000611)
    I'm not interested in no mini-itx box unless it's wedged into an adorable, panda-like case. http://www.norhtec.com/products/panda/index.html [norhtec.com]
  • Pricing (Score:3, Informative)

    by jonesy16 ( 595988 ) on Tuesday August 29, 2006 @12:43PM (#16000632)
    Since it's not mentioned on their webpage or in the article, I searched for a listing of the price points and found the following.

    "The PetaBox nodes and racks are available now. Base pricing for the nodes (512K RAM, 10/100 interface, and no LCD) ranges from $1,595 (GB1000) to $3,395 (GB3000)." http://products.datamation.com/dms/sc/1156440622.h tml [datamation.com]

    The GB1000 is the 1TB node and the GB3000 is the 3TB node. I think they might mean 512MB of RAM base, but who knows. Sounds like it's a Fedora Linux-based product, which makes me wonder what services it provides; they don't list any. I would assume basic NFS/SMB/AFS services, but there's no mention of the backup/replication services, mirroring between twin nodes, etc. that competitive products offer.
    • by rthille ( 8526 )
      Interesting. I just ordered the parts for my new server:
      $704.89 including tax & shipping.
      1.5 GHz C7 w/1000-base-t Mini-ITX 2 IDE & 2 SATA ports (would have liked 4, but whatever, I can get a PCI card)
      4-drive 1U 15" deep case [could be dual-racked I suppose]
      1GB RAM
      DVD writer [dual layer]

      The drives going in it are coming from my old server and others I've got lying around, which is why I wanted PATA & SATA.

      Even buying a SATA PCI adapter & 4 750GB drives I think I'd come out far cheaper.
      And I
      • Re: (Score:2, Informative)

        by jonesy16 ( 595988 )
        SATA doesn't have slave and master so that won't be a bandwidth issue. The drives are also quite an expense and you are reusing yours. The 750 GB drives go for $330 a piece. There's also the quality of the power supplies and cases (which can inflate the cost). Lastly, you're paying for the integration and the configuration of the OS. Even if they're using a free implementation of open source software such as Fedora, it's not a trivial matter (for some) to set up the remote administration and any other
        • by rthille ( 8526 )
          They aren't using SATA, they are using PATA. You can see the cables in this picture:
          http://capricorn-tech.com/images/mobo176.jpg [capricorn-tech.com]
          They're barely visible at the top.
          Certainly they are a good deal to a company buying a rack full, but for someone like me who's doing it as a hobby, the $3k isn't worth it given I'm only going to have one (ok, maybe 2 :-) and will be setting up the system to my liking anyway...
          Still, I wonder if the Sun box that has the 48 drives (vertically) in 3U or 4U that was on slashdot not l
        • Rough parts list for a sizeable server that we're slowly building:

          $4200 (12) 750GB drives
          $0159 Thermaltake Armor VA8000BNS Black Chassis: 1.0mm SECC
          $0190 Thermaltake toughpower W0117RU ATX12V/ EPS12V 750W
          $0035 DVD-RW (BLACK)
          $0050 misc parts (fans, cables)
          $0182 MB-BA22658 AMD Athlon64 X2 4200+ AM2 (WINDSOR)
          $0200 XXXXXXXXXX Asus M2N32-SLI Deluxe motherboard (ATX)
          $0210 XXXXXXXXXX 1GBx2 ECC memory modules PC2 4200 DDR2 533
          $0009 XXXXXXXXXX Assemble & Test
          $0334 (2) INTEL PRO/1000 PT DUAL PORT EXPI9
  • by Bromskloss ( 750445 ) <auxiliary,address,for,privacy&gmail,com> on Tuesday August 29, 2006 @12:44PM (#16000638)
    less than 30 kW for an entire Petabyte
    What?! Power per storage capacity? The interesting figures are how much energy it takes to really do something, such as read or write, not just remember what was previously stored. I'm sure they can do the latter without power at all!
  • by WidescreenFreak ( 830043 ) on Tuesday August 29, 2006 @12:44PM (#16000639) Homepage Journal
    And, no, I'm not talking about the starship Enterprise, so can it with the "Star Trek" comments.

    Obviously, this is the kind of product that companies and perhaps even data centers will possibly take a very long and desiring look at. No doubt that's exactly what Capricorn is hoping for. 3.2 kW is nothing compared to the power that's eaten up by a rack that's loaded with arrays and SCSI drives.

    My concern is with reliability. For the most part, the general attitude is that SCSI, while much more expensive than IDE or SATA, is also more reliable with a larger MTBF. Whether that's really true or not is up for debate, but that's the general opinion out there. Of course, there's also the general attitude that more spindles means more throughput and more reliability if in a proper RAID configuration. From what I've seen with other solutions, we can probably assume with a wide margin of safety that 120TB for this Capricorn system is RAID 0. If a 1U system only contains four drives and they're all independent RAID configurations, then say goodbye to 30 TB just to add a modicum of redundancy with RAID 5, whereas if there were more spindles, the amount of lost space would be greatly decreased even though there would be the increased chance of a failed drive.

    Looking at this system, my gut feel is that a more-spindle configuration might be a wiser move, unless the money saved in electricity goes to a better-than-average backup system. Maybe it's my bias towards SCSI/fibre channel, but I don't know that I can yet trust a low-spindle, IDE configuration to do the same thing in an enterprise environment.

    Just out of curiosity, has anyone out there in Slashdotland had good luck with enterprise IDE solutions? Who knows. Perhaps some success stories might change my pro-SCSI/fibre view.
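
    The parity-overhead point is easy to quantify (a small illustrative calculation; 40 nodes of four 750GB drives, as in the article, and the wider set sizes are hypothetical):

      # Parity overhead of per-node RAID 5 at different (hypothetical) spindle counts.
      drive_tb = 0.75
      total_drives = 160                       # 40 nodes x 4 drives, as shipped

      for spindles_per_set in (4, 8, 16):
          sets = total_drives // spindles_per_set
          lost_tb = sets * drive_tb            # one parity drive per RAID 5 set
          usable_tb = total_drives * drive_tb - lost_tb
          print(f"{spindles_per_set:2d}-drive sets: lose {lost_tb:5.1f} TB, keep {usable_tb:5.1f} TB")
      # 4-drive sets lose 30 TB of the 120 TB; wider sets lose proportionally less.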
    • Re: (Score:2, Informative)

      by jo42 ( 227475 )
      Low-end EMC SAN boxes use SATA: http://www.emc.com/products/platforms.jsp [emc.com]
      • But as you said, they're low-end. I can't imagine anyone actually going to their head of IT and putting "low end" and "mission critical data" in the same sentence.
        • Depends on what you call "mission critical". If the company relies so much on the data (telecoms and banks) that it would actually go bankrupt if the system were down for a day, then I would go for the expensive and proven solution. A golden rule is: when you want the best, you've got to pay the bill.

          But for a smaller company, cost savings are significant if you dare to take a chance.
          The biggest difference between SCSI and PATA configuration is throughput performance. A PATA RAID 5 is very likely to save your da
        • Nobody's positioning PATA/SATA FC drive units as "Low end" anymore. I don't have the detailed statistics, but these were just coming out in 2003, and now dominate the marketplace in terms of volume of drives sold into large enterprises.

          The PATA ones make me a little nervous, but SATA (especially the ones with command tag queueing and such) is generally just fine.

          Capricorn is the system build spinoff from the Internet Archive; a friend of mine who works there was doing some of the QA work on these units. H
    • by OverlordQ ( 264228 ) on Tuesday August 29, 2006 @01:02PM (#16000763) Journal
      I don't think it's RAID anything at all.

      The PetaBox TB120 says 120TB of space on 40 nodes. That's 3TB a node, and given 4 drives per node, that's 750GB drives.

      So basically the RAID selection is left as an exercise to the reader; they're just marketing raw disk space with a very low power consumption.
      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )
        Capricorn is the system build spinoff from the Internet Archive, and this is the commercialisation of the system they built to store... well, pretty much everything.

        They don't use RAID at all. They use RAIC (which is an acronym I just made up for a Redundant Array of Inexpensive Computers). Each individual node is a file server. Each file is distributed over a number of file servers. When a machine fails, they just swap in a new machine. It then grabs a load of files that aren't mirrored as much as th
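
        As a toy illustration of that scheme (not the Archive's or Capricorn's actual software, just a sketch of "find under-replicated files and hand them to the fresh node", with made-up file and node names):

          # Toy sketch of the "RAIC" idea: each file should live on N nodes; when a
          # node is replaced, the newcomer pulls whatever is now under-replicated.
          TARGET_COPIES = 3

          # file name -> set of nodes currently holding a copy (all names invented)
          catalog = {
              "crawl-2006-08.warc": {"node01", "node07"},          # lost one copy
              "movie-archive.tar":  {"node02", "node05", "node09"},
              "usenet-dump.gz":     {"node03"},                    # badly under-replicated
          }

          def files_for_new_node(catalog, new_node, target=TARGET_COPIES):
              """Return files the replacement node should fetch, neediest first."""
              needy = [(len(nodes), name) for name, nodes in catalog.items()
                       if len(nodes) < target and new_node not in nodes]
              return [name for _, name in sorted(needy)]

          for name in files_for_new_node(catalog, "node10"):
              catalog[name].add("node10")          # pretend the copy happened
              print("replicating", name, "-> node10")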

    • Obviously, this is the kind of product that companies and perhaps even data centers will possibly take a very long and desiring look at.

      Uh...I'm not so sure. It can't be expanded (and only 4 slots is pretty cheesy), 750GB drives are commanding an insane price premium (300GB drives are under $100, and 750GB drives are about twice as expensive per GB at $400), and fewer drives = slower...why one would "look long and desiring" at this is beyond me when you can get 2+3U solutions with equal density, but far

      • I wouldn't consider the 750GB drives to have an insane price premium. At least, not compared to historical prices. They're down under the $0.50/GB mark (but not as cheap as the ~$0.28/GB price point of lower-capacity drives). If they were still up around $500, I'd have more of a complaint.

        Are they worth the price premium for reduced heat / power requirements? Maybe... especially if it lets you pack double the density into an existing enclosure without spilling over to another enclosure. (Enclosures p
    • Re: (Score:3, Interesting)

      by Tmack ( 593755 )
      For the most part, the general attitude is that SCSI, while much more expensive than IDE or SATA, is also more reliable with a larger MTBF

      While this used to be true, modern drives are the same between IDE/SATA/SCSI except for the control board the drive is strapped to. The reason SCSI is still preferred over IDE/SATA in most cases stems from this old belief; most devices for enterprise-level storage are still built mainly around it, and SCSI still offers more devices per controller (14 per cable, rather than 2 o

      • modern drives are the same between IDE/SATA/SCSI except for the control board the drive is strapped to.

        I thought the difference between SCSI/Fibre and everything else was in the quality control of the platters.

        Random IDE drives are pulled from each batch for in-depth testing, while every SCSI drive receives in-depth testing.

        That's why SCSI drives can claim a higher MTBF, since all the drives are measured up to a certain standard. This is also why SCSI drives are more expensive, since each drive has to go th

    • Re: (Score:2, Interesting)

      by Genady ( 27988 )
      Just out of curiosity, has anyone out there in Slashdotland had good luck with enterprise IDE solutions? Who knows. Perhaps some success stories might change my pro-SCSI/fibre view.

      Yeah, kinda. We've got a tray of PATA in our EMC Clariion. Don't ask it to perform with multi-threaded I/O, and it's certainly slower than the FC stuff, but it works okay for test and backups. Can't say we've seen a higher failure rate on the disks than we have with the FC trays. I hear that the SATA stuff is much better about ha
    • Re: (Score:3, Informative)

      by flaming-opus ( 8186 )
      No, it's not even close to enterprise-ready! A basic dual-power-supply server with a hardware RAID card and a RAID5 of SATA drives isn't really enterprise-ready. Enterprise means no single point of failure: redundant RAID controllers, power supplies, storage networks, mirrored caches, remote administration and performance monitoring, remote snapshots or archiving. Enterprise is expensive, but for good reason.

      As for your question, enterprise IDE can only be realistically used for back-up or archiving purposes
    • Just out of curiosity, has anyone out there in Slashdotland had good luck with enterprise IDE solutions? Who knows. Perhaps some success stories might change my pro-SCSI/fibre view.

      As always, if you treat the disks properly, they're going to be almost as reliable as SCSI/FC. Just don't expect the same performance (you'll need roughly 2x the number of spindles... maybe more). But you get a lot of capacity for 1/3 to 1/4 the cost of SCSI.

      Make sure the drives have (a) quality power feeds and (b) proper
  • by Phishcast ( 673016 ) on Tuesday August 29, 2006 @12:46PM (#16000649)
    Technically this is a convenient way to stuff a lot of managed storage into a small space with low power consumption. Cool, but it's really nothing more than a bunch of servers in a single rack with big hard drives. If I've got a petabyte of storage to utilize I want to manage it as one large pool (or several large pools), not 40 servers, on each of which I need to run an OS and services which make a relatively small portion of that storage available. Where's my FC or iSCSI target ports?
    • Cool, but it's really nothing more than a bunch of servers in a single rack with big hard drives.

      Actually, it is or can be one big disk, or "just" a JBOD. Yes, each box is just a regular server running Linux, but the setup can go like this: internal RAID5 or so within each box, then concatenate the boxes with RAID 0 over each of the RAID5s.

      It's up to you. For at least one of the previous slashdot stories about this take a look here: http://linux.slashdot.org/article.pl?sid=05/06/22/0418253&from=rss [slashdot.org] B
    • by redhog ( 15207 )
      Network block devices / iSCSI / IDE-over-ethernet or whatever your favourite network storage for today is, and a front-end server to raid it all. You have endless possibilities. With that solution, you have a good hotswappability too - just hotswap a whole 1U machine!
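
      A hedged sketch of what such a front-end could look like, driving nbd-client and mdadm from Python (hostnames, the port, and the RAID level are assumptions; a real setup needs error handling, persistence across reboots, and an nbd-server export on each node):

        #!/usr/bin/env python3
        # Sketch of the front-end idea above: import each 1U node as a network block
        # device, then build one software RAID across them. Everything here is a
        # placeholder, not a turnkey script.
        import subprocess

        NODES = ["storage01", "storage02", "storage03", "storage04"]   # placeholder hostnames
        NBD_PORT = 10809

        devices = []
        for i, host in enumerate(NODES):
            dev = f"/dev/nbd{i}"
            # Attach the node's exported block device (assumes nbd-server on each node).
            subprocess.run(["nbd-client", host, str(NBD_PORT), dev], check=True)
            devices.append(dev)

        # One software RAID 5 across the imported devices; swapping a whole node means
        # failing/removing its nbd device and re-adding the replacement.
        subprocess.run(["mdadm", "--create", "/dev/md0", "--level=5",
                        f"--raid-devices={len(devices)}", *devices], check=True)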
  • by slykens ( 85844 ) on Tuesday August 29, 2006 @12:50PM (#16000670)
    At $1.50 per GB in a large install, it's about two and a half times what one could build on one's own. 5U 24-drive rackmount cases go for about $2k each; add in $2k for mobo/controllers and 24 750 GB drives at $330 each, and we're at $11,920 per 18 TB. You can fit eight of these in a rack, so that's 144 TB per rack at a cost of $95,360. Add in for the rack itself, a good switch, and some miscellaneous expenses and call it an even $100k. That's a cost of about $0.66/GB.

    It's not RAID, but it is a ton of storage space. Even if you back out one drive for parity and one for spare in each enclosure, the cost per GB only goes up to about $0.72.
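
    The per-GB math above, spelled out (same figures as in the comment):

      # Reproduce the cost figures from the comment above.
      enclosure = 2000              # 5U, 24-bay rackmount case
      mobo_ctrl = 2000              # motherboard + controllers
      drives = 24 * 330             # 24 x 750 GB at $330 each

      per_chassis = enclosure + mobo_ctrl + drives      # $11,920 for 18 TB
      per_rack = 8 * per_chassis                        # $95,360 for 144 TB
      raw_gb = 8 * 24 * 750
      usable_gb = 8 * 22 * 750                          # minus one parity + one spare per box

      print(f"${per_rack / raw_gb:.2f}/GB raw")         # ~$0.66/GB
      print(f"${per_rack / usable_gb:.2f}/GB usable")   # ~$0.72/GB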
    • by CatOne ( 655161 )
      What's management going to be like on that homebrew system? Sure, BYO can be cheapest, but allocating/partitioning/whatever is going to take up substantially more of "someone's time", and even if they pay that person $40K/year, the cost to the company is at least $80K/year all things considered.
      • by slykens ( 85844 )
        You assume that managing forty individual servers will be less costly than managing eight larger servers.

        Really, person-hours wise the cheapest way to do it is likely one big farking system with fibre channel or the like.
  • I've done such a thing: several 1U cases with 4 drives each and a mini-itx mb. Works great, but back then there wasn't good block-sharing software (like iSCSI, gnbd or AoE), so I had to throw together an app interface to move files and assure redundancy and integrity with no RAID.

    Now I'm moving away from this to a more SAN-like system (AoE for now, maybe iSCSI next year). I could reuse the hardware, but unfortunately the mini-itx mb had only PATA and 100BaseT. If there's a little mb with 4 SATA and GbEt
    • by rthille ( 8526 )
      There's a 4-SATA mini-itx board, but it's from commell (I think), not VIA, and it's a P4 board.
  • Maximum rez (8Kx6Kx60FPSx2eyes) video for 75 years is 54 petabytes [google.com], compressed to maybe 3-5PB. That's about 120 kW at this box's 80W per 3TB.

    The average American home consumes about 5KW, for about 2 people, unless it's storing their life experience data in 67 racks, 240KW, 48x their electric bill. They might only need half the power if they add storage as they add experience at 54TB:y. Maybe if we start now, we could get the power demands closer to human biological power consumption of about 0.12W by the time a new person i
    • Yeah, but have you seen all the freakin' error reports on those humans? The compression is super lossy, and degrades - sometimes catastrophically - over time. Not to mention that there's just no good way to back up a human and get even a small portion of the data to replicate.

      If I didn't know better, I'd say these "human" storage devices were designed and built by the content industry for the specific purpose of being endless consumers of media.
  • by dfghjk ( 711126 ) on Tuesday August 29, 2006 @01:04PM (#16000784)
    If the goal is low power, I'd prefer to use more than 4 drives in the system. Half the power budget is the motherboard, so an 8-drive chassis would result in a 25% reduction in power for larger installations. Clearly the focus here isn't on performance after all.

    If all I wanted was 4 drives why would I care? Why would I want a 1U rack? Why wouldn't I just stick them in my PC?
  • by Doc Ruby ( 173196 ) on Tuesday August 29, 2006 @01:22PM (#16000915) Homepage Journal
    They're using ITX motherboards to keep price/power down. If they used notebook HDs instead of the 3.5" 750GB ones, they'd get about 10% the storage density per host, 50% the price performance per GB, but much better power efficiency per GB. Is there a way to stuff 40 80GB notebook drives into an ITX host, for even better power efficiency at only double the price?
  • a typical power usage of an astounding 80 W per machine.

    This is astounding why? My PC-based fileserver typically idles at 60W and has two hard disks. With one disk powersaved, this drops to 45W, so if I took it up to 4 disks, I'd estimate it would only use 90W. OK, so its CPU doesn't quite run at 1GHz, but I bet it has more RAM to cache stuff with than these devices do.

    And it runs Linux. ;)
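
    That estimate follows directly from the two measurements given (a trivial check; the 60 W and 45 W numbers are the ones quoted above):

      # Back out per-disk and base power from the two measurements above.
      idle_two_disks = 60        # W, both disks spinning
      idle_one_spun_down = 45    # W, one disk power-saved

      per_disk = idle_two_disks - idle_one_spun_down   # ~15 W per spinning disk
      base = idle_two_disks - 2 * per_disk             # ~30 W for the rest of the box
      print(base + 4 * per_disk)                       # ~90 W with four disks, as estimated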
  • I've been looking for a Micro / Mini ITX solution for a while with a low-wattage processor. I would like to find a *small* case that can fit enough disks to store 1 TB with RAID 1 (mirroring). There are lots of commercial solutions out there with cool cases, but I want to build my own with Linux. Then I want to build a few more and sync them using rsync. You can't do that with any of the commercial solutions I have seen. The case has always been the issue. I can find the software and the motherboard.

    JOhn
    • Same here. I am thinking of just buying a standard smallish tower case with a load of drive bays, and installing hard drive caddies in them. It seems like the easiest solution, but I also want a silent (fanless?) power supply.
  • by dlapine ( 131282 ) <<lapine> <at> <illinois.edu>> on Tuesday August 29, 2006 @02:02PM (#16001148) Homepage
    Here are the specs for the 3.0TB model. http://www.capricorn-tech.com/gb3000.html [capricorn-tech.com]

    Here's the Motherboard Info:
    Motherboard/Processor:
    * 1GHz VIA C3 CPU
    * VIA CLE266 Northbridge
    * VIA VT8237 Southbridge
    * DDR266 RAM - Up to 1GB
    * 2 USB 2.0 ports
    * 1 Serial port
    * 1 Parallel port
    * 1 VGA port
    * PS2 mouse & keyboard ports

    Anybody have performance numbers for these units? A 1GHz CPU can be hard-pressed to run an OS, serve disk and support a GigE connection at full throughput. I'd be wary of looking at these for a data center without knowing how fast they can serve out the disk over a single GigE connection. In fact, I see a distinct lack of information about how this unit functions as a "storage node". Are you buying a 1U, 3.0TB node on which you need to install an OS and fileserver? Doesn't look like it would have the horsepower to run an iSCSI driver in addition to software RAID drivers and still produce any real transfer speeds.

    While a rack of these sounds nice in cost/wattage terms, it appears that you would have just purchased a cluster of storage nodes. A cluster of storage nodes with no way to present the available 120TB as any kind of coherent storage space. You might be able to run Lustre, PVFS or GFS on them, if that's even possible, but that's a level of complexity the price and performance don't warrant.

    If you figure in the cost of a Storage Engineer and the lack of performance, this looks less appealing at the full rack level. Doesn't mean some PHBs won't buy into the whole "Cheap Cluster Disk!" theme though. I pity the sysadmins who get 120TB of raw disk and 40 more nodes to admin.

  • Whoopdi-freaking-do. Other than the capacity, this describes the file server I've been running at home for the last 2 years. The title of this article might as well be "Capricorn makes obvious product using COTS components". Unless they're doing something novel in terms of management or presentation of large logical volumes that span nodes (which it doesn't look like they are) this is hardly newsworthy...
  • A comparison (Score:3, Informative)

    by Guspaz ( 556486 ) on Tuesday August 29, 2006 @06:17PM (#16003197)
    Capricorn's unit (750GB drives): 3TB per 1U
    Sun Fire X4500 (500GB drives): 24TB per 4U

    Capricorn TB per 42U rack: 126TB
    Sun Fire X4500 TB per 42U rack: 240TB

    Capricorn watts per rack (80W/unit): 3,360W
    Sun Fire X4500 watts per rack (1500W/unit): 15,000W

    Capricorn watts per PB: 26,667W
    Sun Fire X4500 watts per PB: 62,500W

    Capricorn cost per rack: ~$200,000
    Sun Fire X4500 cost per rack: $470,995

    Capricorn cost per PB: ~$1,560,000
    Sun Fire X4500 cost per PB: ~$1,960,000

    So yes, Capricorn's solution provides lower power usage, but also lower density (and less processing power and redundancy, I'd imagine). So it's a tradeoff: lower the power bills, but raise the rent bill and the risk.

    It should be noted that for Sun's server, I'm using the 1500W rating of each of the redundant power supplies, the typical usage would actually be much less (just like how a PC with a 500w PSU might only use 300W under load). This also ignores processor power, as each Sun unit is a quad opteron. It also ignores RAID, as the Capricorn could do no more than 3 drive RAID5, while each Sun box could have a 48 drive RAIDZ or RAIDZ2, wasting a lot less for parity. And things might change if Sun put 750GB drives in their unit instead of 500GB drives. It's all about tradeoffs.
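
    The per-rack and per-PB figures above can be reproduced directly (a quick check using only the numbers quoted in the comparison):

      # Reproduce the per-rack and per-PB figures quoted above.
      def per_rack_and_pb(tb_per_unit, u_per_unit, watts_per_unit, rack_u=42):
          units = rack_u // u_per_unit
          rack_tb = units * tb_per_unit
          rack_w = units * watts_per_unit
          return rack_tb, rack_w, rack_w * 1000 / rack_tb      # last value is W per PB

      for name, specs in [("Capricorn GB3000", (3, 1, 80)),
                          ("Sun Fire X4500",   (24, 4, 1500))]:
          tb, w, w_per_pb = per_rack_and_pb(*specs)
          print(f"{name}: {tb} TB/rack, {w} W/rack, {w_per_pb:,.0f} W/PB")
      # Capricorn: 126 TB, 3,360 W, ~26,667 W/PB; Sun: 240 TB (10 units), 15,000 W, 62,500 W/PB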
