Data Storage

Does ZFS Obsolete Expensive NAS/SANs?

hoggoth writes "As a common everyman who needs big, fast, reliable storage without a big budget, I have been following a number of emerging technologies and I think they have finally become usable in combination. Specifically, it appears to me that I can put together the little brother of a $50,000 NAS/SAN solution for under $3,000. Storage experts: please tell me why this is or isn't feasible." Read on for the details of this cheap storage solution.


Get a CoolerMaster Stacker enclosure like this one (just the hardware, not the software) that can hold up to 12 SATA drives. Install OpenSolaris and create ZFS pools with RAID-Z for redundancy. Export some pools with Samba for use as a NAS. Export some pools with iSCSI for use as a SAN. Run it over Gigabit Ethernet. Fast, secure, reliable, easy to administer, and cheap. Usable from Windows, Mac, and Linux. As a bonus, ZFS lets me create daily or hourly snapshots at almost no cost in disk space or time.
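
For concreteness, here is a rough sketch of the commands involved. Device and pool names are placeholders; sharesmb assumes the in-kernel CIFS service in recent OpenSolaris builds (with plain Samba you would just point smb.conf at the ZFS mountpoint), and shareiscsi likewise needs a build that includes the iSCSI target.

    # Pool: one RAID-Z vdev across six SATA disks (example device names)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # NAS side: a filesystem shared over SMB/CIFS
    zfs create tank/share
    zfs set sharesmb=on tank/share

    # SAN side: a 500GB zvol exported as an iSCSI target
    zfs create -V 500g tank/vol1
    zfs set shareiscsi=on tank/vol1

    # Near-free snapshots, e.g. hourly
    zfs snapshot tank/share@$(date +%Y%m%d-%H%M)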

Total cost: $2,000 for 1.4 terabytes, or $4,200 for 7.7 terabytes (just the cost of the enclosure and the drives). That's an order of magnitude less expensive than other solutions.

Add redundant power supplies, NIC cards, SATA cards, etc as your needs require.
  • ZFS (Score:5, Informative)

    by Anonymous Coward on Wednesday May 30, 2007 @08:05AM (#19319859)
    It should also be noted that FreeBSD has added ZFS support to -CURRENT (v7). It's built on top of GEOM too, so if you know what that is you can leverage it underneath ZFS.
    • Re:ZFS (Score:4, Interesting)

      by ggendel ( 1061214 ) on Wednesday May 30, 2007 @08:08AM (#19319871)
      I think you're a bit high. I put together a setup of five 500GB SATA II disks with RAID-Z in a 5-disk enclosure for under $1000. I run it off my Sunfire v20z. That's 2TB for under 1k USD!
    • Ever heard of Starfish? It's a new distributed clustered file system:

      Starfish Distributed Filesystem [digitalbazaar.com]

      From the website:

      Starfish is a highly-available, fully decentralized, clustered storage file system. It provides a distributed POSIX-compliant storage device that can be mounted like any other drive under Linux or Mac OS X. The resulting fault-tolerant storage network can store files and directories, like a normal file system - but unlike a normal file system, it can handle multiple catastrophic disk a

  • Specifics please. (Score:5, Insightful)

    by PowerEdge ( 648673 ) on Wednesday May 30, 2007 @08:06AM (#19319863)
    Not enough specifics here. I am going to say do your thing. If it works, you're a hero and saved 47k. If it doesn't, obfuscate and negotiate the 50k of storage down to 47k. Win for all.

    Unless you would like to give more specifics. Cause I am going to say in 99% of cases where you want fast, reliable, and cheap storage you only get to pick two.
    • by Ngarrang ( 1023425 ) on Wednesday May 30, 2007 @09:02AM (#19320343) Journal

      Unless you would like to give more specifics. Cause I am going to say in 99% of cases where you want fast, reliable, and cheap storage you only get to pick two.

      I disagree completely. Computer hardware is a commodity. The big box makers are afraid of this very kind of configuration which would blow them out of business if more people caught on to it. No, they use FUD to convince PHBs that because of the low cost, it cannot possibly be as good. Hot-swap and hot-spare are commodity technologies. But, please, feel free to continue the FUD, because it helps the bottom line.

      • by toleraen ( 831634 ) on Wednesday May 30, 2007 @09:57AM (#19320913)
        So "Mission Critical" is just a myth too, right?
        • Re:Specifics please. (Score:5, Informative)

          by Ngarrang ( 1023425 ) on Wednesday May 30, 2007 @10:28AM (#19321311) Journal
          Toleraen wrote, "So "Mission Critical" is just a myth too, right?"

          No system can compensate for bad management by people, but I digress.

          All data is critical. But, to say that your data is less safe with a system that cost $4700 than a system that cost $50,000 is fallacious without some heavy proof behind it. For now, I am going to ignore that a functional backup is part of "mission critical" and just address the online storage portion of the argument.

          Let's start with a server white box. Something with redundant power supplies, ethernet, etc. Put a mirrored boot drive in it. Install Linux. So far, the cost is fairly low. Add an external disk array, at least 15 slots, the ones with hot-swap, hot-spare, RAID 5, redundant power supplies and fill it with inexpensive (but large) SATA drives. Promise sells one, as do others. Attach to server, voila, a cheaper solution than EMC for serving up large amounts of disk space.

          What if a drive fails? The system recreates the data (it is RAID 5, after all) onto a hot spare. You remove the bad drive, insert a new one, and run the admin tool. The users won't even notice their MP3s and Elf Bowling game were ever in danger.

          For the people who believe strongly in really expensive storage solutions, please explain why. I would like to know if you also hold the same theory for your desktop PCs, because surely, a more expensive PC has to be better. Right?
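
          Not the parent's exact recipe, but for reference, the software-RAID half of that white box is only a few commands with Linux mdadm (device names are illustrative):

              # RAID 5 across six disks plus one hot spare; md rebuilds onto the
              # spare automatically when a member dies
              mdadm --create /dev/md0 --level=5 --raid-devices=6 \
                    --spare-devices=1 /dev/sd[b-h]
              mkfs.xfs /dev/md0
              mount /dev/md0 /export

              # Watch rebuild progress after a failure
              cat /proc/mdstat
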
          • Re:Specifics please. (Score:4, Informative)

            by lymond01 ( 314120 ) on Wednesday May 30, 2007 @11:00AM (#19321833)
            If you're buying from EMC or another large storage company, you do pay a premium. Generally, it's for simple configuration of the NAS or SAN using their proprietary software. You're also paying for warranty and support, something you don't get through NewEgg (you get it, but it's limited). If you're either a large company not wanting to pay a yearly salary for 3 or 4 admins to run your storage system, or a smaller company that doesn't have the technical know-how to do it yourself reliably (not everyone reads "Ask Slashdot" regularly), then the premium stores are a good way to go (if you have the money).

            It's the same reason we buy Dell. We could buy white boxes or parts from Newegg for all of our systems, but talk about a hassle when it comes to them needing hardware maintenance or just assembly. With the support Dell offers, we get a complete box that's been tested, we just need to reformat and install our own stuff. Something breaks, we make a 10 minute phone call and get a replacement the next day, with or without assisted installation. But we pay probably 30% more per box for that.
          • by wurp ( 51446 ) on Wednesday May 30, 2007 @11:28AM (#19322263) Homepage
            And what happens when the RAID controller fails and corrupts all of your drives?

            Because I've seen that happen more than once.

            I'm not saying the more expensive solution is better. I'm just saying that in my personal experience I've seen *more* data destroyed from RAID controller failure than from hard drive failure. I would love to find out the solution to that one.

            I do not claim to be a hardware expert or system administrator, so there may be a well known solution (don't buy 'brand X' RAID controllers). I just don't happen to know it.
            • Re: (Score:3, Interesting)

              by hpavc ( 129350 )
              Agreed, the controller doesn't just 'stop'; it sort of goes on a rampage for a bit, making you wish you had gone through another layer of abstraction or redundancy.
            • I have two things to say about that particular case: 1) ZFS maintains checksums of all data, so that will at least be noticed fast and 2) just use two controller cards and mirror them -- if one fails, you'll have valid data in the other half of the mirror.

              The more important question may be whether there are other, unknown sources of errors. Not many people would probably even think of a hard drive controller card failure when building a storage solution, and that's probably one of the major advantages of
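
              A minimal sketch of that two-controller mirroring idea, assuming hypothetical Solaris device names where c1 and c2 are separate controller cards:

                  # Each mirror pair spans both controllers, so the pool survives
                  # the loss of either card; end-to-end checksums flag silent corruption
                  zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
                  zpool status -v tank    # per-device read/write/checksum error counters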

            • Re: (Score:3, Informative)

              by kraut ( 2788 )
              Software RAID. Been running it for years for exactly that reason.

              Maybe not suitable for high performance situations, but I've not found it slow.
            • by this great guy ( 922511 ) on Thursday May 31, 2007 @01:48AM (#19333125)

              And what happens when the RAID controller fails and corrupts all of your drives?
              That's the whole point of ZFS: you don't need (and don't want) to use ZFS with a hardware RAID controller. Sun designed ZFS so that you just feed it with *disks*. ZFS takes care of the rest: volume management, corruption detection, software RAID, etc.
              Look at the high-end Sun Fire X4500 server ("Thumper") they released a few months ago: 48 drives in 4U with *no* hardware RAID controller. Sun designed this server as the perfect machine to run ZFS.
          • Re:Specifics please. (Score:5, Interesting)

            by swordgeek ( 112599 ) on Wednesday May 30, 2007 @11:46AM (#19322569) Journal
            It's all a question of scale, and your scale is a bit skewed.

            The premium paid for higher-end storage is decidedly nonlinear. For marginally more reliable or faster storage, you pay about a factor of ten. One example I'm familiar with is Hitachi. We had a 64TB HDS array a few years ago that was worth roughly $2M. We could have purchased an equivalent amount of commodity storage for probably $200k at the time, but didn't. Why would we spend the extra money? Speed, configurability, expandability, and reliability.

            First of all, speed. That thing was loaded with 73GB 15k FCAL drives. RAID was in sets of four disks, with no two disks in a set sharing the same controller, backplane, or cache segment. Speaking of cache, the rule was 1GB/TB, so we had 64GB of fast, low-latency, fully mirrored cache on the thing. It was insanely fast, and (most importantly) didn't slow down under point load. One tool automatically ran on the array itself, looking for hotspots and reallocating data on the fly.

            Configurability: We could mirror data synchronously or asynchronously to our DR site, by filesystem, file, block, LUN, or byte. We could dynamically (re)allocate storage to multiple systems, and moving databases between machines was a breeze. Disk could be allocated from different pools (i.e. different performing drives could be installed), depending on requirements. Quality-of-Service restrictions could be put in place as well, although we never used them.

            Expandability: The beast had 32 pairs of FC connections, could support 96GB of internal mirrored cache, and I can't remember how much actual disk. The key wasn't the amount of disk we could put on it, so much as how well the bandwidth scaled--and it scaled well.

            Finally, the real key - Reliability. All connections were dual-pathed, with storage presented to a pair of smart FC switches which were zoned to present storage to various systems. We could lose three of the four power cables to the main unit (auxiliary disk cabinets only had two power connections each), and still run. We could lose any entire rack, and still run. We could lose any switch in our environment, and still run. We could lose two disks from the same RAID set and still run. When we lost a disk, the system would automatically suck up some cache to use for remirroring the data to multiple disks as fast as possible, and then after protecting it, would remirror back to a single logical device. In the event that we lost the entire device, we could run from our DR site synchronous mirror with less than a ten second failover.

            This sort of thing is massive overkill for most people and companies, but when someone is doing realtime commodities trading, (or banking, or stock exchanges, etc.) the protection and support are worth the extra money. You just can't build that sort of thing on your own for any less money, at the end of the day.
            • Finally, the real key - Reliability. All connections were dual-pathed, with storage presented to a pair of smart FC switches which were zoned to present storage to various systems. We could lose three of the four power cables ...

              True story:

              Two years ago next month, a clumsy plumber got a propane torch too close to a sprinkler head, with the expected consequences: LOTS of water took the path of least resistance, where it finally filtered its way into the basement data center, coming out right on top of our S
          • Re: (Score:3, Interesting)

            by Vancorps ( 746090 )

            Wow, talk about not understanding what a SAN actually is. A $4700 unit is no comparison to what you get for 50k from an HP or IBM SAN solution. The biggest factor: I started with 30TB, now I need 100TB! Oh noes, I need to buy more units and create more volumes. With a SAN you fill up your fabric or iSCSI switch with more storage racks, allowing you to mix and match SATA, SCSI, FCA or anything else you like. Then replicate the whole deal off-site at the hardware level.

            Now mirrored OS drives? Are you mad? You

      • by zerocool^ ( 112121 ) on Wednesday May 30, 2007 @11:07AM (#19321931) Homepage Journal

        Solly Chollie.

        We have one of those "$50,000 SANs". Actually, with the expansion, ours cost closer to $110,000, but whatever. For one, it's got more drives (I think we're up to 30). For two, they're 10k RPM SCSI drives. For three, it's fibrechannel, and all the servers that it's connected to get full speed access to the drives. For four, we have a service level agreement with Dell/EMC (It's an EMC array, rebranded under Dell's name) that says if we have a dead drive, we get a dell service hobbit on site within 4 hours with a new drive. We also get high level support from them for things like firmware upgrades (we had to go through that recently to install the expansion).

        The grandparent is exactly correct:
        fast, reliable, and cheap storage - you only get to pick two.

        We picked fast and reliable. You picked reliable and cheap. There's nothing wrong with either decision, but please don't tell me that a $3000 solution would have done everything that we needed.

        You can't have all three.

        ~Wx
        • Re: (Score:3, Interesting)

          by mcrbids ( 148650 )
          We have one of those "$50,000 SANs". Actually, with the expansion, ours cost closer to $110,000, but whatever. For one, it's got more drives (I think we're up to 30). For two, they're 10k RPM SCSI drives. For three, it's fibrechannel, and all the servers that it's connected to get full speed access to the drives. For four, we have a service level agreement with Dell/EMC (It's an EMC array, rebranded under Dell's name) that says if we have a dead drive, we get a dell service hobbit on site within 4 hours wit
          • Re: (Score:3, Informative)

            by prgrmr ( 568806 )
            I've steered clear of the SAN/NAS solution for one simple reason: it's a single point of failure.

            If you honestly think a SAN, particularly an EMC SAN, is a single point of failure, then you don't understand SANs. Take a look at the high-end Clariion and Symmetrix models for redundancy and scalability. Then take a look at who some of EMC's bigger customers are.

            No, I don't work for EMC, but my employer has several mid-level EMC storage systems. They work. Reliably, quietly, and yes, they scale nicely too.
    • I'd like the following scenarios explained.

      RAID 0 = a bunch of hard drives strung together to look like one big drive. In the implementation I'm referring to, the data is striped to a block size and written across each disk simultaneously (or nearly). This is the fastest disk subsystem available but the most susceptible to failure. If one disk fails, you're toast.

      Does ZFS do anything in this situation? I have 'one big drive' presented to the operating system, the striping is abstracted at the hardware layer
    • by Score Whore ( 32328 ) on Wednesday May 30, 2007 @10:13AM (#19321087)
      No kidding. Without specific details there is no way to answer whether this is a good solution to his particular situation. However, even in the absence of details I can say this:

      1) That case has twelve spindles. You aren't going to get the same performance from a dozen drives as you get from a hundred.

      2) That system includes a small Celeron D processor with 512 MB RAM. You aren't going to get the same performance as you'll get from multiple dedicated RAID controllers with twenty+ gig of dedicated disk cache.

      3) Your single gigabit ethernet interface won't even come close to the performance of the three or four (or ten) 2 gigabit fibre channel adapters involved in most SAN arrays.

      4) Software iscsi initiators and targets aren't a replacement for dedicated adapters.

      5) The Hitachi array at work never goes down. Ever. Not for expansions. Not for drive replacements. Not for relayouts. The reliability of a PC and the opportunity to do online maintenance won't approach that of a real array.

      Don't get me wrong. That case makes me all tingly inside -- for personal use. But as a SAN replacement, fuck no. It's not the same thing. The original question just shows ignorance of what SANs are and the roles they fill in the data center.

      As a workgroup storage solution for a handful of end users on their desktops, that solution may well be a good fit. As a storage solution for ten (or two hundred) business-critical server systems, no way.
      • We have a large (geographically replicated) Hitachi disk array (as well as many NetApp boxes), mostly it works very well indeed.

        However 2-3 years ago we stumbled (very painfully!) across a firmware bug which took the primary Hitachi array down:

        As we (i.e. the Hitachi service reps) were upgrading the mirrored cache, an error hit the active half, and it turned out that the firmware would always check the mirror (a very good idea, right?) before falling back on re-reading the disk(s). However, the firmware err
    • Re: (Score:3, Informative)

      by TopSpin ( 753 ) *

      Not enough specifics here. I am going to say do your thing. If it works, you're a hero and saved 47k.

      Not really. The assertion that a 12 spindle NAS box with iSCSI costs 50k is the issue. That level of NAS/iSCSI hardware does not cost 50k. It may have, years ago, from Netapp or someone, but not today. Today such a box will cost around 10k with equivalent storage.

      A Netapp S500 with 12 disks and NAS/iSCSI features is a good example. Roughly 10k and you get Netapp's SMB/CIFS implementation (considered excellent), NFS, iSCSI, snapshots, etc. Slightly lower price points can be had through Adaptec Snap Se

  • by BigBuckHunter ( 722855 ) on Wednesday May 30, 2007 @08:08AM (#19319881)
    For quite a while now, it has been less expensive to build a DIY file server than to purchase NAS equipment. I personally build gateway/NAS products using Via C7/8 boards as they are low power, have hardware encryption, and are easy to work with under Linux. Accessory companies even make backplane drive cages for this purpose that fit nicely into commodity PCs.

    BBH
    • What amazes me is all the talk of iSCSI, but almost no mention of AoE (ATA over Ethernet).

      What you have is a box that exports block devices out over layer 2. Another device loads it as a block device and can now treat it in whatever fashion it could deal with any other block device, so for example I have 2 "shelves" of Serial ATA drives going. I have a third box that I could either load linux on, using md to create raid sets, or what I've actually done is used the hardware on each of the two shelves, cr
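
      For anyone who hasn't seen AoE, the export side really is that small. A rough sketch using vblade from the aoetools package (device and interface names are placeholders):

          # On the storage box: export /dev/sdb as AoE shelf 0, slot 1 on eth0
          vblade 0 1 eth0 /dev/sdb

          # On a client: load the aoe driver and use it like any local block device
          modprobe aoe
          mkfs.ext3 /dev/etherd/e0.1
          mount /dev/etherd/e0.1 /mnt/shelf0
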
  • by alen ( 225700 ) on Wednesday May 30, 2007 @08:09AM (#19319885)
    The place where I work looked at one of these things from another company. We did the math and it's too slow even over gigabit for database and Exchange servers. OK for regular file storage, but not for heavy I/O needs.
    • by morgan_greywolf ( 835522 ) * on Wednesday May 30, 2007 @08:38AM (#19320121) Homepage Journal
      Precisely. The question in the title is a little bit like asking "Will large PC clusters obsolete mainframes?" or "Will Web applications obsolete traditional GUI applications?" The answer is, as always, "It depends on what you use it for." For high-performance databases or a high-traffic Exchange server, these things may not work well.

      I've seen plenty of iSCSI solutions coupled with NAS servers in this price range that get pretty good throughput and are already integrated and ready to go, but the bottom line is that if you want high-performance, high-availability storage for I/O-intensive applications, you need a fiber SAN/NAS solution.

    • by Jim Hall ( 2985 ) on Wednesday May 30, 2007 @08:49AM (#19320205) Homepage

      I agree. At my work, we have a SAN ... low-end frames (SATA) to mid-range (FC+SATA) to high-end frames (FC). We put a front-end on the low-end and mid-range storage using a NAS, so you can still access the storage over the storage fabric or via IP delivery. Having a SAN was a good idea for us, as it allowed us to centralize our storage provisioning.

      I'm familiar with ZFS and the many cool features laid out in this Ask Slashdot. The simple answer is: ZFS isn't a good fit to replace expensive SAN/NASs. However, ZFS on a good server with good storage might be a way to replace an inexpensive SAN/NAS. Depending on your definition of "inexpensive." And if you don't mind the server being your single point of failure.

    • Re: (Score:3, Interesting)

      by flakier ( 177415 )
      I wonder if the Coraid ATA-over-Ethernet gear would be good enough? It ditches TCP/IP in favor of raw Ethernet frames, so it has much lower overhead than iSCSI, and the only major loss is no routing. http://www.coraid.com/ [coraid.com]

      BTW, I read recently that where 4Gb FC really excels is in large block sequential transfers and that small random access transfers are actually better over gigabit iSCSI. Check it out: http://searchstorage.techtarget.com/columnItem/0,294698,sid5_gci1161824,00.html [techtarget.com]

      Plus you really have to think about o
  • by Spazntwich ( 208070 ) on Wednesday May 30, 2007 @08:10AM (#19319891)
    Does the overuse of TLAs obfuscate the meaning of SDS?
  • Current issues (Score:5, Informative)

    by packetmon ( 977047 ) on Wednesday May 30, 2007 @08:12AM (#19319903) Homepage
    I've snipped out the worst issues, as per the wiki entry:

    • A file "fsync" will commit to disk all pending modifications on the filesystem. That is, an "fsync" on a file will flush out all deferred (cached) operations to the filesystem (not the pool) in which the file is located. This can make some fsync() slow when running alongside a workload which writes a lot of data to filesystem cache.
    • ZFS encourages creation of many filesystems inside the pool (for example, for quota control), but importing a pool with thousands of filesystems is a slow operation (can take minutes).
    • ZFS filesystem on-the-fly compression/decompression is single-threaded. So, only one CPU per zpool is used.
    • ZFS eats a lot of CPU when doing small writes (for example, a single byte). There are two root causes, currently being solved: a) Translating from znode to dnode is slower than necessary because ZFS doesn't use translation information it already has, and b) Current partial-block update code is very inefficient.
    • ZFS Copy-on-Write operation can degrade on-disk file layout (file fragmentation) when files are modified, decreasing performance.
    • ZFS blocksize is configurable per filesystem, currently 128KB by default. If your workload reads/writes data in fixed sizes (blocks), for example a database, you should (manually) configure the ZFS blocksize to equal the application blocksize, for better performance and to conserve cache memory and disk bandwidth (see the sketch after this list).
    • ZFS only offlines a faulty hard disk if it can't be opened. Read/write errors or slow/timed-out operations are not currently used in the faulty/spare logic.
    • When listing ZFS space usage, the "used" column only shows non-shared usage. So if some of your data is shared (for example, between snapshots), you don't know how much is there. You don't know, for example, which snapshot deletion would give you more free space.
    • Current ZFS compression/decompression code is very fast, but the compression ratio is not comparable to gzip or similar algorithms.
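
    Regarding the blocksize point above, the tuning itself is a single property. A sketch, where the filesystem name and the 8KB figure (a typical database page size) are just examples:

        # Set the record size before loading data; it only affects newly written files
        zfs create tank/db
        zfs set recordsize=8K tank/db
        zfs get recordsize tank/db
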
    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Wednesday May 30, 2007 @08:56AM (#19320273) Journal
      Some of these issues looked familiar, so I thought I'd do a basic comparison:

      Reiser4 had the same problems with fsync -- basically, fsync called sync. This was because their sync is actually a damned good idea -- wait till you have to (memory pressure, sync call, whatever), then shove the entire tree that you're about to write as far left as it can go before writing. This meant awesome small-file performance -- as long as you have enough RAM, it's like working off a ramdisk, and when you flush, it packs them just about as tightly as you can with a filesystem. It also meant far less fragmentation -- allocate-on-flush, like XFS, but given a gig or two of RAM, a flush wasn't often.

      The downside: Packing files that tightly is going to fragment more in the long run. This is why it's common practice for defragmenters to insert "air holes". Also, the complexity of the sync process is probably why fsync sucked so much. (I wouldn't mind so much if it was smarter -- maybe sync a single file, but add any small files to make sure you fill up a block -- but syncing EVERYTHING was a mistake, or just plain lazy.) Worse, it causes reliability problems -- unless you sync (or fsync), you have no idea if your data will be written now, or two hours from now, or never (given enough RAM).

      (ZFS probably isn't as bad, given it's probably much easier to slice your storage up into smaller filesystems, one per task. But it's a valid gotcha -- without knowing that, I'd have just thrown most things into the same huge filesystem.)

      There's another problem with reliability: Basically, every fast journalling filesystem nowadays is going to do out-of-order write operations. Entirely too many hacks depend on ordered writes (ext3 default, I think) for reliability, because they use a simple scheme for file updating: Write to a new temporary file, then rename it on top of the old file. The problem is, with out-of-order writes, it could do the rename before writing the data, giving you a corrupt temporary file in place of the "real" one, and no way to go back, even if the rename is atomic. The only way to get around this with traditional UNIX semantics is to stick to ordered writes, or do an fsync before each rename, killing performance.

      I think the POSIX filesystem API is too simplistic and low-level to do this properly. On ordered filesystems, tempfile-then-rename does the Right Thing -- either everything gets written to disk properly, or not enough to hurt anything. Renames are generally atomic on journalled filesystems, so either you have the new file there after a crash, or you simply delete the tempfile. And there's no need to sync, especially if you're doing hundreds or thousands of these at once, as part of some larger operation. Often, it's not like this is crucial data that you need to be flushed out to disk RIGHT NOW, you just need to make sure that when it does get flushed, it's in the right order. You can do a sync call after the last of them is done.

      Problem is, there are tons of other write operations for which it makes a lot of sense to reorder things. In fact, some disks do that on a hardware level, intentionally -- it's called "native command queuing". Using "ordered mode" is just another hack, and its drawback is slowing down absolutely every operation just so the critical ones will work. But so many are critical, when you think about it -- doesn't vim use the same trick?

      What's needed is a transaction API -- yet another good idea that was planned for someday, maybe, in Reiser4. After all, single filesystem-metadata-level operations are generally guaranteed atomic, so I would guess most filesystems are able to handle complex transactions -- we just need a way for the program to specify it.

      The fragmentation issue I see as a simple tradeoff: Packing stuff tightly saves you space and gives you performance, but increases fragmentation. Running a defragger (or "repacker") every once in awhile would have been nice. Problem is, they never got one written. Common UNIX (and Mac) philosoph
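
      For reference, the tempfile-then-rename pattern under discussion looks roughly like this (generate_new_config is a made-up stand-in, and the bare sync is exactly the blunt instrument described above; a per-file fsync needs the C API):

          # Build the new version next to the old one, on the same filesystem
          tmp=$(mktemp config.XXXXXX)
          generate_new_config > "$tmp"
          # Force the data out before the rename becomes visible; otherwise an
          # out-of-order flush can leave a corrupt "new" file after a crash
          sync
          mv "$tmp" config    # rename() is atomic within a filesystem
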
  • Real SANs do more (Score:5, Informative)

    by PIPBoy3000 ( 619296 ) on Wednesday May 30, 2007 @08:13AM (#19319909)
    For starters, our SAN uses extremely fast connectivity. It sounds like you're moving your disk I/O over the network, which is a fairly significant bottleneck (even Gb). We also have the flexibility of multiple tiers - 1st tier being expensive, fast disks, and 2nd tier being cheaper IDE drives. I imagine you can fake that a variety of ways, but it's built in. Finally, there's the enclosure itself, with redundant power and such.

    Still, I bet you could do what you want on the cheap. Being in health care, response time and availability really are life-and-death, but many other industries don't need to spend the extra. Best of luck.
  • by Tester ( 591 ) <olivier.crete@oc ... .ca minus author> on Wednesday May 30, 2007 @08:14AM (#19319913) Homepage
    A good $20k RAID array does much more. First, it doesn't use cheap SATA drives, but Fibre Channel or even SAS drives which are tested to a higher level of quality (each disk costs $500 or more). And those cheap SATA drives also react much more poorly to non-sequential access (like when you have multiple users). They are unusable for serious file serving. You can never compare RAID arrays that use SATA/IDE to ones that use enterprise drives like FC/SCSI/etc, because the drives are quite different.

    Then you have the other features like dual redundant everything: controllers, power supplies, etc. Then you have thermal capabilities of rack-mount solutions that often are different from SATA, etc, etc.
    • by ZorinLynx ( 31751 ) on Wednesday May 30, 2007 @08:25AM (#19320017) Homepage
      These overpriced drives aren't all that much different from SATA drives. They're a bit faster, but a HELL of a lot more expensive, and not worth paying more than double per gig.

      We have a Sun X4500 which uses 48 500GB SATA drives and ZFS to produce about 20TB of redundant storage. The performance we have seen from this machine is amazing. We're talking hundreds of gigabytes per second and no noticeable stalling on concurrent accesses.

      Google has found that SATA drives don't fail noticeably more often than SAS/SCSI drives, but even if they did, having several hot spares means it doesn't matter that much.

      SATA is a great disk standard. You get a lot more bang for your buck overall.
      • Re: (Score:3, Insightful)

        by Lumpy ( 12016 )
        Show me how you build a Raid 50 of 32 sata or ide drives.

        also show me a SINGLE sata or ide drive that can touch the data io rates of a u320 scsi drive with 15K spindle speeds.

        Low-end consumer drives can't do the high-end stuff. Don't even try to convince anyone of this. Guess what, those uses are not anywhere near strange for big companies. With a giant SQL DB you want... no, you NEED the fastest drives you can get your hands on, and that is SCSI or Fibre Channel.
        • Re: (Score:3, Insightful)

          by RedHat Rocky ( 94208 )
          Comparing drive to drive, I agree with you; 15k wins.

          However, the price point on 15k drives is such that a comparison of a single drive vs. multiple drives is reasonable. The basis is $$/GB, not drive A vs. drive B.

          Ask Google how they get their throughput on their terabyte datasets. Hint: it's not due to 15k drives.
        • Re: (Score:3, Interesting)

          by drsmithy ( 35869 )

          Show me how you build a Raid 50 of 32 sata or ide drives.

          Get yourself a nice big rackmount case and some 8 or 16 port SATA controllers.

          also show me a SINGLE sata or ide drive that can touch the data io rates of a u320 scsi drive with 15K spindle speeds.

          Of course, the number of single drives you can buy for the cost of that single 15k drive will likely make a reasonable showing...

          Low-end consumer drives can't do the high-end stuff. Don't even try to convince anyone of this. Guess what, those uses are not
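
          To the "show me how you build a RAID 50 of 32 SATA drives" challenge, a rough sketch with Linux md (device names are illustrative, and this says nothing about matching the latency of a 15k U320 drive):

              # Four RAID 5 sets of eight SATA drives each...
              mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[b-i]
              mdadm --create /dev/md2 --level=5 --raid-devices=8 /dev/sd[j-q]
              mdadm --create /dev/md3 --level=5 --raid-devices=8 /dev/sd[r-y]
              mdadm --create /dev/md4 --level=5 --raid-devices=8 /dev/sdz /dev/sda[a-g]
              # ...striped together into one device: RAID 5+0
              mdadm --create /dev/md5 --level=0 --raid-devices=4 /dev/md[1-4]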

      • Re: (Score:3, Informative)

        by NSIM ( 953498 )

        We're talking hundreds of gigabytes per second and no noticeable stalling on concurrent accesses.


        In which case you're talking complete rubbish. "Hundreds of gigabytes per second"? Just one GB/sec would need 4x 2Gbit FC links all exceeding their peak theoretical throughput :-) Hundreds of MB/sec I can believe (just about, assuming the right access patterns)

        • Re: (Score:3, Interesting)

          by flaming-opus ( 8186 )
          I'm sure the original writer meant hundreds of megs per second, which is a reasonably good rate for 24 SATA drives. Most 3/4U RAID enclosures can get this level of performance, particularly for reads. I can get about 330MB/s out of a 14-drive Apple Xraid, and almost 400MB/s out of a 24-drive Engenio, but that's with a lot of careful tuning.

          Hundreds of gigs per second is limited to the very largest GPFS or Lustre configurations. The fastest filesystem throughput I'm aware of, anywhere in the world, is about
    • Re: (Score:3, Insightful)

      by Firethorn ( 177587 )
      Sure, a good $20k NAS RAID does more. Question is, is it really needed?

      I mean, you could deploy 10 times as many of these as you could your array, giving you 10 times the storage.

      Depending on what you do, it has to hit the network sometime. For example - we have a big expensive SAN solution. What's it used for? As a massive shared drive.

      For a fraction of the price we could have put a similarly sized one of these in every organization. Sure, it wouldn't be able to serve quite as much data, but most of our stu
    • by tgatliff ( 311583 ) on Wednesday May 30, 2007 @08:32AM (#19320073)
      It is not my intention to offend, but I always love it when I hear the dreaded marketing phrase of hardware "tested to a higher level of quality".

      I work in the world of hardware manufacturing, and I can tell you that this "magical" extra testing process simply does not exist. Hardware failures are always expensive, and we do anything we can to prevent them. To do this, we build burn-in procedures based on what most call the 90% rule, but you really cannot guarantee more reliability than that. Better device design is what determines reliability beyond that point. Any person who says differently either does not completely understand individual test harness processes or does not understand how burn-in procedures work.

      In short, more money is not necessarily better. Higher-volume designs typically are, though...
      • Absolutely true (Score:3, Interesting)

        by Flying pig ( 925874 )
        You failed to mention that all stress testing increases the probability of failure. To a considerable extent, you can only design in robustness and quality and hope. It used to depress me quite a lot (when I was involved in such things) that many American companies (and others, I hasten to add) just fundamentally did not get ISO 9000 and statistical process control, which is quite different from post-production testing. As a German engineer once remarked to me, (some years ago) "Rolls-Royce cars do not have
  • No (Score:5, Informative)

    by iamredjazz ( 672892 ) on Wednesday May 30, 2007 @08:19AM (#19319955)
    Speaking from personal experience - this file system is far from ready. It can kernel panic and reboot after minor IO errors; we were hosed by it, and probably won't ever revisit it. This phenomenon can be repeated with a USB device, you might want to try it before you hype it. Try a Google search on it and see what you think... there is no fsck or repair, once it's hosed, it's hosed, the recovery is to go to tape. http://www.google.com/search?hl=en&q=zfs+io+error+kernel+panic&btnG=Google+Search [google.com]
    • Re: (Score:3, Informative)

      Yes, it is easy to panic a system with ZFS, but such situations are also easily avoided. Furthermore, they will not lead to data corruption. If you are aware of the causes, it is no more than a minor inconvenience.

      For example, ZFS will panic when you lose enough data or devices that the pool is no longer functional. If you take care and use a replicated pool though, this is unlikely to ever happen. Even if it does, all it requires is that you reattach the devices and reboot. If the disks truly are dead
  • Reliable? (Score:5, Informative)

    by Jjeff1 ( 636051 ) on Wednesday May 30, 2007 @08:21AM (#19319979)
    Businesses buy SANs to consolidate storage, placing all their eggs in one basket. They need redundant everything, which this doesn't have. Additionally, SATA drives are not as reliable long term as SCSI. Compare the data sheets for Seagate drives; they don't even mention MTBF on the SATA sheet [seagate.com].
    Businesses also want service and support. They want the system to phone home when a drive starts getting errors, so a tech shows up at their door with a new drive before they even notice there are problems. They want to have highly trained tech support available 24/7 and parts available within 4 hours for as long as they own the SAN.
    Finally, the performance of this solution almost certainly pales as compared to a real SAN. These are all things that a home grown solution doesn't offer. Saving 47K on a SAN is great, unless it breaks 3 years from now and your company is out of business 3 days waiting for a replacement motherboard off Ebay.
    That being said, everything has a cost associated with it. If management is ok with saving actual money in the short term by giving up long term reliability and performance, then go for it. But by all means, get a rep from EMC or HP in so the decision makers completely understand what they're buying.
    • Re: (Score:3, Insightful)

      by Ruie ( 30480 )
      A few points:
      • Reliability: if your solution costs $3K instead of $47K, just buy two or three. Nothing beats system redundancy, provided the components are decent.
      • Saving money - in 3 years just buy a new system. You will need more storage anyway.
      • SAS versus SATA: the only company that I know of that makes 300GB 15K RPM drives is Seagate. They cost $1000 apiece. Compare to $129-$169 per Western Digital 400GB enterprise drive. For multi-terabyte arrays it makes a lot of sense to get SATA disks and put the mone
    • Re: (Score:3, Insightful)

      by LWATCDR ( 28044 )
      Again it depends.
      At $3000 for everything, it would be logical to stock a spare motherboard, power supply, NIC, controller cards, and a few hard drives plus a few hot spares.
      With SMART and other sensor packages it wouldn't be hard to build in monitoring.
      It could be done, but it would be a project. So yes, it could be done, but for a business, buying a SAN will be more logical if for no other reason than time. It would take time to set up a system. It could be cheaper and have just as little downtime
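
      For the monitoring piece, smartmontools covers a lot of it. A sketch; the test schedule and the mail address are made up:

          # One-off health check on a drive
          smartctl -H -a /dev/sda

          # /etc/smartd.conf: watch every disk, run a short self-test daily at 02:00,
          # and mail someone when attributes or self-tests go bad
          DEVICESCAN -a -o on -S on -s (S/../.././02) -m admin@example.com
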
    • Re:Reliable? (Score:5, Informative)

      by Znork ( 31774 ) on Wednesday May 30, 2007 @10:14AM (#19321095)
      "Additionally, SATA drives are not as reliable long term as SCSI."

      The CMU study by Bianca Schroeder and Garth A. Gibson would suggest otherwise. In fact, there was no significant difference in replacement rates between SCSI, FC, or SATA drives.

      "They want the system to phone home when a drive starts getting errors"

      Of course, the other recent study by Google showed that predictive health checks may be of limited value as an indicator of impending catastrophic disk failure.

      Basically, empirical research has shown that the SAN storage vendors are screwing their customers every day of the week.

      "Saving 47K on a SAN is great, unless it breaks 3 years from now"

      Of course, saving 47K on a SAN means you can easily triple your redundancy, still save money, and when it breaks, you have two more copies.

      At the same time, the guy spending the extra 47k on an 'Enterprise Class Ultra Reliable SAN' will get the same breakage 3 years from now, he won't have been able to afford all those redundant copies, and as he examines his contract with the SAN vendor, he notes that they actually don't promise anything.

      "But by all means, get a rep from EMC or HP in so the decision makers completely understand what they're buying."

      Premium grade bovine manure with (fake) gold flakes?

      Really, handing the decision makers several scientific papers and a few google search strings would leave them much better equipped to make a rational decision.

      Having several years' experience with the kind of systems you're talking about, I can just say that I've experienced several situations where, if we didn't have system-level redundancy, we would have suffered not only system downtime but actual data loss on expensive 'enterprise grade' SANs. That experience, as well as the research, has left me somewhat sceptical towards the claims of SAN vendors.
  • No but... (Score:4, Informative)

    by Junta ( 36770 ) on Wednesday May 30, 2007 @08:22AM (#19319989)
    ZFS does not obsolete NAS/SAN. However, for many, many instances, DIY fileservers have been more appropriate than SAN or NAS setups since long before ZFS came along, and ZFS has done little to change that situation (though adminning ZFS is more straightforward and in some ways more efficient than the traditional, disparate strategies for achieving the same thing).

    I haven't gotten the point of standalone NAS boxes. They were never fundamentally different from a traditional server, but they came with a premium price attached. I may not have seen the high-end stuff, however.

    SAN is an entirely different situation altogether. You could have ZFS implemented on top of a SAN-backed block device (though I don't know if ZFS has any provisions to make this desirable). SAN is about solid performance to a number of nodes with extreme availability in mind. Most of the time in a SAN, every hard drive is a member of a RAID, with each drive having two paths to power and to two RAID controllers in the chassis, each RAID controller having two uplinks to either two hosts or two FC switches, and each host having two uplinks either to the two different controllers or to two FC switches. Obviously, this gets pricey, for good reasons which may or may not be applicable to your purposes (frequently not), but the point of most SAN setups is no single point of failure. For simple operation of multiple nodes on a common block device, HA is used to decide which single node owns/mounts any FS at a given time. Other times, a SAN filesystem like GPFS is used to mount the block device concurrently among many nodes, for active-active behavior.

    For the common case of 'decently' available storage, a robust server with RAID arrays has for a long time been more appropriate for the majority of uses.
  • ZFS is great, but... (Score:4, Informative)

    by Etherized ( 1038092 ) on Wednesday May 30, 2007 @08:42AM (#19320147)
    It's no NetApp - yet. One thing to realize is that the iSCSI target isn't even in Solaris proper yet - you have to run Solaris Express or OpenSolaris for the functionality. That may be fine for some people, but it's a deal-breaker for most companies - are you really going to place all those TB of data on a system that's basically unsupported? I'm sure Sun would lend you a hand for enough money, but running what is essentially a pre-release version of Solaris is a non-starter where real business is concerned. Even when the iSCSI target makes it into Solaris 10 - which should be in the next release - are you really comfortable running critical services off of essentially the first release of the technology?

    Furthermore, while ZFS is amazingly simple to manage in comparison to any other UNIX filesystem/volume manager, it still requires you to know how to properly administer a Solaris box in order to use it. Even GUI-centric sysadmins are generally able to muddle through the interface on a Filer, but ZFS comes with a full-fledged OS that requires proper maintenance. Your Windows admins may be fine with a NetApp - especially with all that marvelous support you get from them - but ask them to maintain a Solaris box and you're asking for trouble. Not to mention, since it's a real, general-purpose server OS, you'll have to maintain patches just like you do on the rest of your servers - and the supported method for patching Solaris is *still* to drop to single-user mode and reboot afterwards (yes, I know that's not necessarily *required*).

    Also, "zfs send" is no real replacement for snapmirrors. And while ZFS snapshots are functionally equivalent to NetApp snapshots, there is no method for automatic creation and management of them - it's up to the admin to create any snapshotting scheme they want to implement.

    Don't get me wrong - I love ZFS and I use it wherever it makes sense to do so. It may even be acceptable as a "poor man's Filer" right now, assuming you don't need iSCSI or any of the more advanced features of a NetApp. In fact, it's a really great solution for home or small office fileservers, where you just need a bunch of network storage on the cheap - assuming, of course, that you already have a Solaris sysadmin at your home or small office. Just don't fool yourself, a Filer it ain't - at least not yet.
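
    On the snapshot-scheme point, the missing scheduler is basically a cron job plus a few lines of shell. A rough sketch, with a made-up filesystem name and retention, and a GNU-ish userland assumed for the tail/xargs options:

        #!/bin/sh
        # rotate-snapshots.sh -- run hourly from cron, e.g.:
        #   0 * * * * /usr/local/bin/rotate-snapshots.sh
        FS=tank/share
        zfs snapshot ${FS}@hourly-$(date +%Y%m%d-%H%M)
        # prune everything but the newest 48 hourly snapshots
        zfs list -H -t snapshot -o name | grep "^${FS}@hourly-" | sort -r | \
            tail -n +49 | xargs -n 1 zfs destroy
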
  • by Wdomburg ( 141264 ) on Wednesday May 30, 2007 @08:46AM (#19320187)
    This doesn't strike me as having much to do with ZFS at all. You've been able to do a home grown NAS / SAN box for years on the cheap using commodity equipment. Take ZFS out of the picture and you just need to use a hardware raid controller or a block level RAID (like dmraid on Linux or geom on FreeBSD). There are even canned solutions for this, like OpenFiler [openfiler.com].

    That being said, this sort of solution may or may not be appropriate, depending on site needs. Sometimes support is worth it.

    You're also grossly overestimating the cost of an entry-level iSCSI SAN solution. Even going with EMC, hardly the cheapest of vendors, you can pick up a 6TB solution for about $15k, not $50k. Go with a second tier vendor and you can cut that number in half.
  • by pyite ( 140350 ) on Wednesday May 30, 2007 @08:57AM (#19320285)
    I guess this setup could replace some people's need for a turnkey NAS solution. But your thinking it could replace SAN solutions shows you haven't looked into SAN too much. To start, there's a reason Fibre Channel is way more popular than iSCSI. The financial services company I work for has about 3 petabytes of SAN storage, and not a drop of it is iSCSI.

    Storage Area Networks are special built for a purpose. They typically have multiple fabrics for redundancy, special purpose hardware (we use Cisco Andiamo, i.e., the 9500 series), and a special purpose layer 2 protocol (Fibre Channel). iSCSI adds the overhead of TCP/IP. TCP does a really nice job of making sure you don't drop packets, i.e. layer 3 chunks of data, but at the expense of possibly dropping frames, i.e. layer 2 data. The nature of TCP just does this, as it basically ramps up data sending until it breaks, then slows down, rinse and repeat. This also has the effect of increasing latency. Sometimes this is okay; people use FCIP (Fibre Channel over IP), for example. But sometimes it's not. Fibre Channel does not drop frames.

    In addition, Fibre Channel supports cool things like SRDF [wikipedia.org] which can provide atomic writes in two physically separate arrays. (We have arrays 100 km away from each other that get written basically simultaneously and the host doesn't think its write is good until both arrays have written it.) So, like I said, this might be good for some uses, but not for any sort of significant SAN deployment.

  • by Lethyos ( 408045 ) on Wednesday May 30, 2007 @08:58AM (#19320297) Journal

    Google have a great solution [storagemojo.com] that focuses on the "cheap" part without compromising the latter two much. If you have not read up on the Google Filesystem, definitely take the time to. At the very least, it seems to call into question the need to shell out tens of thousands for high-end storage solutions that promise reliability in proportion to the dollar.

  • No (Score:3, Insightful)

    by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday May 30, 2007 @09:07AM (#19320389)

    Potentially it will obsolete low-end NAS/SAN hardware (eg: Dell/EMC AX150i, StoreVault S500) in the next couple of years, for companies who are prepared to expend the additional people-time in rolling their own and managing it (a not insignificant cost - easily amounting to thousands of dollars or more a year). There's a lot of value in being able to take an array out of a box, plug it in, go to a web interface and click a few buttons, then forget it exists.

    However, your DIY project isn't going to come close to the performance, reliability and scalability of even an off the shelf mid-range SAN/NAS device using FC drives, multiple redundant controllers and power supplies - even if the front end is iSCSI.

    Not to mention the manageability and support aspects. When you're in a position to drop $50k on a storage solution, you're in a position to be losing major money when something breaks, which is where that 24x7x2hr support contract comes into play, and hunting around on forums or running down to the corner store for some hardware components just isn't an option.

    ZFS also still has some reliability aspects to work out - eg: hot spares. Plus there isn't a non-OpenSolaris release that offers iSCSI target support yet AFAIK.

    I've looked into this sort of thing myself, for both home and work - and while it's quite sufficient for my needs at home, IMHO it needs 1 - 2 years to mature before it's going to be a serious alternative in the low-end NAS space.

  • by Britz ( 170620 ) on Wednesday May 30, 2007 @09:10AM (#19320399)
    Linux has had more performance testing on x86 than OpenSolaris (so you are not as likely to run into a bad bottleneck). On Linux you can create a RAID-1, -4, -5 or -6 under Multiple Device Driver Support in the kernel. You can then use mkraid to include all the drives you want. This code is not new at all. It was stable in 2.4, maybe even in 2.2.

    After that you just create a filesystem on top of the raid. If you don't like ext3 or don't trust it, there is always xfs. I had some rough times with reiserfs, xfs, and ext3 and for all the experience I had I would go xfs for long running server environments (and now get flamed for this little bit, use ext3 all you want).

    The advantage is that you use very well tested code.

    The problem comes with hotswapping. I don't know if the drivers are up to that yet. But I also highly doubt that OpenSolaris SATA drivers for some low price chip in a low price storage box can deal with hotswapping. So Linux might be faster on that one.

    That is a setup I would compare to a plug'n play SAN solution. And it totally depends on the environment. If the Linux box goes down for some reason for a couple hours/days, how much will that cost you? If it is more than twice the SAN-solution, you might just buy the SAN and if it fails just pull the disks and put them in the new one. I dunno if that would work on Linux.
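
    On the hot-swap question above: the md layer itself handles the software side; whether a given cheap SATA controller and driver survive the physical swap is the real unknown. A sketch with mdadm (device names are illustrative):

        # Mark a dying member failed and pull it from the array, online
        mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
        # ...physically swap the disk (needs a hotplug-capable controller)...
        mdadm /dev/md0 --add /dev/sdc
        cat /proc/mdstat    # watch the rebuild
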
  • ZFS ftw! (Score:3, Interesting)

    by GuyverDH ( 232921 ) on Wednesday May 30, 2007 @09:42AM (#19320693)
    I've been using ZFS for quite some time, and have yet to have any form of failure or data corruption. I've used it with simple JBOD, SAN-attached, even USB-attached drives - it just plain works. With Solaris 10 U3, and the latest revs of OpenSolaris, you have access to RAIDZ2, which gives you double parity and even more protection. Snapshotting can be scripted to run as often as you like; I keep 2 months' worth of snapshots taken every 5 minutes, day in and day out.

    You can disassemble the system, scramble the drives, reattach them and bring the system back up, and all will be well. ZFS will just re-assemble the pool and continue. You can replace drives on the fly. Let's say you build your 12-disk enclosure with 750GB drives. Now, 2 years later, you want to replace them with 2TB drives. Simply use the zfs replace command to replace each drive, wait for it to re-silver and rebuild the data on the replaced drive, then move on to the next drive. As you replace them, the pool will grow automatically. This would grow your (assuming 12x750GB in a RAIDZ2) 7.5TB pool to a 20TB pool, without downtime.

    With OpenSolaris on x86, you can even boot off of ZFS now, so use ZFS to mirror your boot disk with a like drive and you should be good for quite some time... Using something like Sun's Thumper system, you can get a 12TB system for less than 30k (for those who have something akin to a budget). ZFS: it's fast, safe, secure - and I enjoy working with it (as I don't have to do much with it).
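
    The drive-replacement walk-through above, as commands (the actual command is zpool replace; device names are placeholders, and newer ZFS releases gate the automatic growth behind the pool's autoexpand property):

        # Swap one 750GB disk for a 2TB disk and wait for the resilver
        zpool replace tank c1t0d0 c2t0d0
        zpool status tank        # shows resilver progress
        # ...repeat for each remaining disk; once the last resilver finishes,
        # the extra capacity shows up in the pool
        zpool list tank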
