RAID Controller Shoot-Out 88

mikemuch writes "ExtremeTech has a comparison with benchmarks of three RAID controllers from Adaptec, LSI Logic, and Promise, and along the way gives you a little refresher course on RAID in general and why you want to use it: faster throughput, longer uptime, and improved data security. Motherboard RAID controllers do well when there's 'very little or no load on the CPU, I/O bus, and memory bandwidth. But with heavy traffic and processor loads, the limitations of the shared bus and the benefits of intelligent RAID's integrated IOP and memory cache have a more significant impact.'"
This discussion has been archived. No new comments can be posted.

  • by thebdj ( 768618 ) on Thursday June 15, 2006 @09:42AM (#15539364) Journal
    On-board RAID is good enough for most everyone.
    • The vast majority of onboard RAID "controllers" I've encountered so far have been little more than a software driver. And a Windows-only one at that.
      • by labratuk ( 204918 ) on Thursday June 15, 2006 @01:56PM (#15541754)
        ...and a proprietary striping format, so when you have controller problems you have to use the same vendor's software & hardware to recover your data.
        • Which means that on Friday at 4 PM when your RAID controller smokes, you either have an on-site spare or you start panicking. Maybe you'll be lucky enough to find someone who has one in stock and can ship for Saturday delivery.

          If you can get on-site service contracts in your area, this is a very good selling point for them. If you're > 100 miles outside of a "major" metro area, good luck.

          I've never had an SMP server slow down with software RAID-1 mirrors that I could notice.
    • Yeah, for all but RAID5. That you need a real controller for. Onboard will suck up way too much CPU.
      • A bit of a sweeping generalisation, don't you think? Just because a chip is stuck to the board doesn't mean it's going to slurp up a lot of CPU. It's got to be better than software, if anything.
        • Perhaps, but I'm just speaking from experience. I used to have an on-board RAID5 setup, and I was less than satisfied with my read and write speeds, not to mention the CPU usage. I bought an Areca RAID card, and poof, it works flawlessly at great speeds. However, I do agree that for anything other than RAID5/6, controller cards are crazy overkill.
      • Yeah, for all but RAID5. That you need a real controller for. Onboard will suck up way too much CPU.

        I don't know why the meme that software (or pseudo-hardware) RAID5 "sucks up" CPU cycles continues to propagate.

        A 300 MHz Pentium II has a RAID5 checksumming speed (in Linux) of about 800 MB/s. So at the more realistic 50-75 MB/s your average PC's RAID5 array can write before the bus and physical drives hit their limits, the processing overhead of checksumming on any remotely modern CPU is insignificant.
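
        As a rough illustration of why that overhead is small, here is a minimal Python sketch of RAID5 parity generation, which is just a byte-wise XOR across the data chunks of a stripe. The chunk size and disk count are illustrative assumptions, and this is not the Linux kernel's actual benchmark.

          import time
          import numpy as np

          CHUNK = 64 * 1024   # 64 KiB per data disk (assumed chunk size)
          DATA_DISKS = 3      # a 4-disk RAID5 stripe: 3 data chunks + 1 parity chunk

          # Random bytes standing in for the three data chunks of one stripe
          chunks = [np.random.randint(0, 256, CHUNK, dtype=np.uint8) for _ in range(DATA_DISKS)]

          def parity(data_chunks):
              """XOR all data chunks together to produce the parity chunk."""
              p = np.zeros_like(data_chunks[0])
              for c in data_chunks:
                  p ^= c
              return p

          # Rough throughput estimate: how fast can this CPU generate parity?
          iterations = 2000
          start = time.perf_counter()
          for _ in range(iterations):
              parity(chunks)
          elapsed = time.perf_counter() - start

          mb = iterations * DATA_DISKS * CHUNK / 1e6
          print(f"XORed ~{mb:.0f} MB of data in {elapsed:.2f}s (~{mb / elapsed:.0f} MB/s)")

        On anything vaguely modern that number comes out far above what the bus and drives can sustain, which is the poster's point.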

        • I don't know why the meme that software (or pseudo-hardware) RAID5 "sucks up" CPU cycles continues to propagate.

          It's because anandtech and tomshardware "experts" (like the grandparent) believe everything they read in a forum post. This is also the reason everyone believes Via chipsets suck even though they haven't in 5 years.
    • I have an ITE8210 onboard PATA RAID controller. It is crap. It's crap under Windows, and it's even crappier under Linux, under which it is not supported at all. (There's a source-code driver from ITE; it doesn't work.)
    • As long as it has 640k memory on board.
  • by Anonymous Coward
    Print version here [extremetech.com]

    Annoying "next page" articles...
  • Anyone know of a hardware RAID card with an external SATA port that supports the port multiplier feature? I have only been able to find software RAID solutions.
  • Rebuilding? (Score:3, Interesting)

    by merdaccia ( 695940 ) on Thursday June 15, 2006 @10:12AM (#15539620)

    There's a stigma associated with host-based controllers that trying to rebuild an array with them is tantamount to masochism. I think it comes from the fact that an intelligent controller can rebuild an array through BIOS-only intervention.

    Anyone care to shed some light on how rebuilding arrays compares when using intelligent vs host-based controllers?

    • Re:Rebuilding? (Score:3, Informative)

      by pla ( 258480 )
      Anyone care to shed some light on how rebuilding arrays compares when using intelligent vs host-based controllers?

      Sure.

      All the BIOS RAID interfaces I've seen (mostly MegaRaid and Adaptec) suck hard. About as friendly as a cobra, and slightly more dangerous if you do the wrong thing.

      Software RAID interfaces can do better - But few actually do.


      However, I wouldn't suggest choosing one or the other based on the friendliness of rebuilding - Whichever you choose, when you eventually need to replace a drive
    • Re:Rebuilding? (Score:3, Informative)

      by (startx) ( 37027 )
      I've got 2 250GB WD SATA drives RAID-1'd with an LSI MegaRAID SATA150-2 controller. It's an add-in controller, but I've got no idea how "intelligent" it is. I just had to rebuild the array (unplugged one drive for several days without knowing it) and it took 70 HOURS. From the BIOS interface for the card. Masochism indeed.
      • 70 hours! Holy crap. Even my ATA RAID5 array wouldn't take that long (one failed 80GB drive in a 4x80GB RAID5 array -- rebuild took ~20 hours).

        That's just more fodder toward buying a used filer (NetApp) running RAID4 and WAFL...

        (Warning: Shameless work plug and great stock tip above.)
  • by LWATCDR ( 28044 ) on Thursday June 15, 2006 @10:19AM (#15539679) Homepage Journal
    Linux, BSD, or Solaris?
    How about calling it the Windows RAID controller shoot out?
    ExtremeTech should just change its name to Mainstream Tech and get it over with.
    • Yup (Score:3, Insightful)

      by metamatic ( 202216 )
      A review of SATA RAID controllers that have open source Linux drivers would be very useful to me.

      The ExtremeTech article was a complete waste of time.
      • Heck, I am even easier to please. Just include ANY OS that isn't Windows, for a start. Linux, BSD, and/or Solaris drivers are a plus. Open-source drivers are a big plus. Support in the kernel is a huge plus.
        Do they support hot-swap enclosures?
        How long to rebuild after a drive replacement?
        Yeah, pretty useless.
      • Re:Yup (Score:3, Informative)

        by bkeeler ( 29897 )
        This article [tweakers.net] is pretty good, though a few months old now. I bought an Areca ARC-1120 based on this review, and have been very happy with it. 100% GPLed driver. Wish they were a little easier to find in online stores though.
          • As far as I know, Newegg has carried Areca cards for at least a year; I didn't know Newegg was hard to find.
  • *nix RAID Support (Score:5, Informative)

    by pilot1 ( 610480 ) on Thursday June 15, 2006 @10:28AM (#15539757)
    According to the OpenBSD i386 supported hardware website [openbsd.org], out of the cards reviewed, only Adaptec and LSI cards are compatible with OpenBSD.
    However, Adaptec has refused to provide documentation so that the OpenBSD project may improve the drivers.
    "Note: In the past year Adaptec has lied to us repeatedly about forthcoming documentation so that RAID support for these (rather buggy) raid controllers could be stabilized, improved, and managed. As a result, we do not recommend the Adaptec cards for use."
    Other *nix variants might support the Adaptec and Promise cards a little more, but the hardware fully supported by OpenBSD is generally well-supported across all *nix variants.
    Out of the cards reviewed, only the one from LSI is worth buying. Adaptec may have a little support, but it's not a good idea to purchase any RAID cards from them until they start providing better documentation.
    • 3ware was not in the article, but I do know from experience that they support Linux. I'm running a 9500S-4LP in my server right now under Gentoo.
    • by Anonymous Coward
      I'll second the advice on Adaptec. I have the ATA Raid 5 cards. The software they provide for Windows is ok. The BIOS software is ok. The Linux software? 2.4 kernel software was a nightmare for anything other than Red Hat, SCO, and possibly one more distro. Everything else was a fucking nightmare of patching the kernel and rebuilding, then hoping it worked. Debian, it didn't. As for 2.6, forget it. No support for 2.6 yet, don't think there's any planned unless you are buying a current SATA card. T
  • by Anonymous Coward
    LSI has a much newer (and presumably faster) SATA/300 card now. If you were buying a new RAID card, that's what you would buy. Why did they use the old generation?

    Where's Areca? They're the performance leader in this market, and their pricing is now in line with competition.

    Where's 3Ware?

    What about other host-based RAID solutions? Broadcom? Marvell?

    Don't even get me started about what they tested. This is just not a serious RAID review. I strongly urge folks who are interested in this subject to Google for
    • Exactly.

      The article may be an advertisement disguised as an article. Possibly they don't want to benchmark 3Ware because it would win. Judging from the article, possibly this is an Adaptec ad.

  • by Theovon ( 109752 ) on Thursday June 15, 2006 @10:38AM (#15539849)
    I've used some RAID controllers myself, and I have friends with a lot of experience with them. A key factor in what makes a good RAID controller is not throughput. It's long-term reliability. How long can you hammer your RAID array before you get unrecoverable corruption? A RAID array is supposed to prevent that, but if you have some weird bug in your RAID controller, or it's susceptible to EM interference from surrounding components, you will get data corruption. And I don't mean for reads; I mean that the data gets corrupted on the way from memory to the disk (at least that's our theory), where no RAID controller can protect you.

    Of ATA controllers, our experience shows that 3ware controllers are the least unreliable. That is, they generally suck, because they have demonstrated performance problems and other weird failures that 3ware couldn't help us resolve, but they suffer from the least data corruption.

    For whatever reason, the on-board controllers are the worst. They seem nice and perform well enough, but they have the highest rate of data corruption.

    It may or may not surprise you that software RAID is relatively reliable. With a RAID1, you'd think you're twice as likely to corrupt data on writes, because you have to send the same data twice to two different drives. Sure, having them both bad is unlikely, but at a later time, how do you know which copy of a given sector is correct? But we think that removing an unreliable hardware RAID controller from the data path and just having the relatively simple ATA controller in the way reduces chances of a problem. Just a guess.

    If you want truly reliable hardware RAID, you need to spend your life savings on an industrial-strength SCSI RAID controller.

    The moral of the story is that there's really no such thing as 100% reliable data storage. If you want speed and don't care about reliability, RAID0 is for you. Other RAID levels add redundancy, which is nice in theory, but add hardware complexity that offsets some of the advantage. For my critical data, I store to CD and DVD ROM. And I make multiple copies of those, because those aren't all that reliable either.
    • 3Ware has been a mixed bag for us. The 7xxx/8xxx series are running in a number of our production servers (the little kind - the kind slated to become VMware images in a year or three). They weren't exactly top performers, but they have been very reliable and easy to manage, good with notification of problems, and a breeze to rebuild. We had a bad run-in one time with one of the original 9000 series models dropping zeros to the disk and causing us some bad database corruption (and it was a pain in the ass t
      • by ErikTheRed ( 162431 ) on Thursday June 15, 2006 @01:51PM (#15541697) Homepage
        I finally broke down and bought an Areca card for one of my home-office servers (I had read some nice reviews and wanted to test one myself before recommending it). Seems reliable (at least from my single, lonely sample point) - it handled a drive failure perfectly (that is, it caught ugly S.M.A.R.T. statistics and notified us before the drive actually failed completely) - and it's very fast. Their Linux driver is BLOB-Free [openbsd.org], well-commented and 100% GPL. Prices are reasonable, but it'd be nice if they were available through mainstream distribution (Ingram, TechData, etc) - not yet, apparently.
        • I, too, have an Areca. I've had it about 8 months now, and it's been a good performer so far.

          I recently installed Ubuntu Dapper Drake onto it with no problems. The supplied kernel had the driver already, so no need to fiddle about patching kernel source and building the driver.
    • You forgot to mention that software RAID is portable across different hardware, and in most cases the way it lays out the data on the disks is well-documented. If a hardware RAID controller dies you have to find an identical one or you're screwed.
    • Personally, I've grown very fond of Software RAID. Especially mdadm.

      I've dealt with all of them (hardware RAID, hardware RAID that is really software RAID, and Linux software RAID), and for a smaller company, software RAID wins out. I don't have to worry about driver issues, I don't have to worry about keeping two extra RAID controllers on-hand in case the first one fries, and I can move the disks to another machine with different hardware and still get the RAID back up and running.

      Software RAID seems very flexible.
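
      The portability point is worth spelling out. Below is a hedged sketch (device names and the member partition are hypothetical, and it assumes mdadm is installed and run as root) of how Linux md stores its metadata on the member disks themselves, so a completely different machine can rediscover and reassemble the array without the original controller.

        import subprocess

        def examine(device: str) -> str:
            """Dump the md superblock stored on a member device, if any."""
            result = subprocess.run(["mdadm", "--examine", device],
                                    capture_output=True, text=True)
            return result.stdout

        def assemble_all() -> int:
            """Scan all devices and assemble any arrays whose members are present."""
            return subprocess.run(["mdadm", "--assemble", "--scan"]).returncode

        if __name__ == "__main__":
            print(examine("/dev/sdb1"))   # hypothetical member partition
            assemble_all()                # arrays come back up as /dev/md* on the new box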
    • I call 100% Shenanigans. Ok, you said "most ATA RAID..." so I call 99% Shenanigans.

      I built a sizeable (by 2003 standards) 4-drive RAID5 array using a cheap P3 motherboard and an Adaptec 2400A RAID controller on Feb. 16, 2003. It's still running swimmingly, serving up my media and user directories, and has never had any glitches -- not even a failed disk. Data corruption? None. If you have data corruption on a RAID5 controller, you need to seriously reconsider your parts list. (Not saying the ATA-2400A
  • by Necroman ( 61604 ) on Thursday June 15, 2006 @10:40AM (#15539865)
    This isn't a shootout, it's an advertisement for the cards. They are at different price levels, and ExtremeTech is just showing the difference in performance you get for spending more money. Wow!

    If they had tested multiple series of the Adaptec and LSI Logic cards, plus some 3ware cards, I would be more impressed, but this just seems like an all-out advertisement to me.
  • I've got a Coraid array that can saturate the host PCI bus running on ATA-over-Ethernet technology, which is faster & simpler than SCSI-over-IP. Performance comparable to my giant expensive Fibre Channel SAN at a tiny fraction of the cost.

    These guys are behind the technology: http://www.coraid.com/ [coraid.com]

    If you don't like Open Source, you won't like it yet. Wait a few years and there will be a version you'll like; the economics of it are compelling. But right now you need to be able to write your own init
    • I will admit that I have not researched ATA over Ethernet all that much, but my current system's storage cost was $288 for the 3ware 9500S-4LP and four Maxtor 300GB SATA II drives at $105 each (shipped, OEM), which comes in under $710. So, if you figure I'm running RAID 5, the actual cost is ($710/900) about $0.79 per GB of redundant data, which I feel will be hard to beat (price-wise) by other storage methods.

      From http://linuxdevices.com/news/NS3189760067.html [linuxdevices.com]

      Coraid lists 5MB/sec sustained throughput

      You can get thi
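
      For what it's worth, the cost-per-gigabyte figure above checks out; here is the arithmetic spelled out as a small Python sketch (the prices and capacities are the poster's 2006 numbers, not current ones).

        # RAID 5 over N drives gives (N - 1) drives' worth of usable space.
        controller = 288.00           # 3ware 9500S-4LP
        drives = 4 * 105.00           # four 300 GB SATA drives
        total = controller + drives   # $708, i.e. "under $710"

        usable_gb = (4 - 1) * 300     # 900 GB usable in a 4-drive RAID 5
        print(f"${total:.2f} / {usable_gb} GB = ${total / usable_gb:.2f} per usable GB")
        # -> roughly $0.79 per GB, matching the comment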

    • You've got to be kidding. A gigabit Ethernet-connected drive will, at the theoretical limit, deliver 125 MB/s or so of data. Here's a quick listing of PCI bus speeds:
      • PCI 33 MHz, 32-bit bus = 133 MB/s
      • PCI 66 MHz, 32-bit bus = 266 MB/s
      • PCI 66 MHz, 64-bit bus = 533 MB/s
      • PCI-X 133 MHz = 1067 MB/s
      • PCI Express x8 = 2500 MB/s
      • PCI Express x16 = 5000 MB/s


        • There are some nice stats located here [ieiworld.com]

          Now, I don't know about you, but a single Gb/s Ethernet port is going to have a hard time filling up any of those buses except the oldest one.
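
          As a back-of-the-envelope check on that claim (theoretical peak numbers only, taken from the list above), here is the comparison in a few lines of Python:

            # One saturated GigE link versus the theoretical peak of each bus.
            GIGE_MB_S = 1_000_000_000 / 8 / 1_000_000   # ~125 MB/s, ignoring protocol overhead

            buses = {
                "PCI 33 MHz, 32-bit": 133,
                "PCI 66 MHz, 32-bit": 266,
                "PCI 66 MHz, 64-bit": 533,
                "PCI-X 133 MHz": 1067,
                "PCI Express x8": 2500,
                "PCI Express x16": 5000,
            }

            for name, mb_s in buses.items():
                print(f"{name:20s}: one GigE port uses {GIGE_MB_S / mb_s:5.1%} of the bus")
            # Only plain 33 MHz PCI comes anywhere close to being saturated.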
      • No, I'm not kidding. Keep in mind I go by empirical numbers, not theoretical ones. I'm not sure why you think it's reasonable to compare "a well designed SCSI system", which you admit requires multiple cards and channels, with a single-port AoE system.

        I recently replaced a ~2 TB duplexed U320 SCSI RAID array with a ~8 TB AoE array. Same hosts - same OS - no changes except from SCSI RAID to AoE RAID. My bus is PCI-133 and I have two gigabit Ethernet ports on the motherboard, four more on two Intel PCI cards (not all of tho
  • Do any of those RAID cards support deleting an unwanted logical drive without deleting all of the logical drives? That's been a royal pain for me with both the Adaptec RAID controllers and LSI's MegaRAID controllers. What use is it if you can't remove old smaller disks and add new larger disks without rebuilding the entire system?

    Also, have they gotten any better about continuing to run during single-disk failures? With the MegaRAID cards I've used, it's about 50/50 whether the RAID subsystem crashes during a
    • I'm confused what you mean when you say:

      Do any of those RAID cards support deleting an unwanted logical drive without deleting all of the logical drives?

      The partitioning should be done by whatever operating system you are running. A true hardware RAID will convince the OS that you have a single hard drive (depending on how you configure the controller, I guess). As far as upgrading to larger drives, I would imagine that with my 3ware card, I could add a single drive and rebuild the array (via the

      • In the RAID adapter's setup program you create logical drives, which are presented to the OS as if they were physical drives. These logical drives are generally either one physical drive, two drives configured as a RAID 1, or three or more drives configured as a RAID 5.

        Many naive users throw all drives into a single massive RAID 5 logical drive and then partition that drive at the OS level. I call that naive because when you set up the array that way you get very poor performance: I/O from all running programs has
        • Wouldn't having the OS on a RAID 5 partition (assuming you are using RAID 5 for the other logical drives) be a better solution? The overall speed should be faster due to the striping effect compared to RAID 1, and it would still be redundant. I guess I do not see how running three logical drives (OS, swap, storage) on the controller would be faster than one logical drive on the controller and three file systems (partitions at the OS level). Maybe it's just not clicking.
          • It depends on whether you have enough HDs. Because of the drive access patterns of most OSes, it's better for performance to RAID 1 the OS drives. It's also better to have your swap file on a separate physical HD or RAID0 set (it depends on how much you intend to swap - we set ours up on the RAID 1 system drive, with a very small fixed size, as we shouldn't be paging in the first place).

            RAID5 doesn't gain any performance until you hit 4 or more drives. RAID5 on three drives is usually a performance penalty. Gen
          • Let me create an example. I'll use Linux terminology but it would be the same for any OS.

            Let's say I have eight 20-gig hard disks. I want to build a web server with a MySQL backend. The mysqld process will be heavily disk-bound on files that reside in /var/lib/mysql. The web pages themselves will reside in /var/www.

            Option 1:

            8-way RAID 5, 140 gigs usable. /dev/sda.
            /dev/sda1 = /boot
            /dev/sda2 = swap
            /dev/sda3 = /
            /dev/sda5 = /var/www
            /dev/sda6 = /var/lib/mysql

            Option 2:

            8-way RAID 5, 140 gigs usable. /dev/sda.
            /dev
        • Generally, you want to set up a RAID 1 logical drive for the OS and either RAID 1, 1+0, or 5 logical drives for the various I/O-bound applications running on the system. RAID 1 or 1+0 is typically faster than RAID 5, while RAID 5 offers more usable disk space.

          Multiple logical drives over the same physical disks - especially at different RAID levels - usually hurt performance because of the different access patterns to the drives and the way OSes try to optimise the order of (physical) disk operations.

          This i

          • This isn't possible with hardware RAID, or at least none that I've used. An entire disk is used as part of an array.
            • This isn't possible with hardware RAID, or at least none that I've used. An entire disk is used as part of an array.

              Most Adaptec controllers I've used can do it.

              • I'm guessing that would be IDE controllers? I haven't seen a SCSI RAID controller that works on anything less than the physical drive as the smallest granular unit. While I haven't played with the newest IDE RAID controllers, the ones from a couple of years ago were all based on whole disks.

                I should also note that I concluded that all the low-end IDE controllers are a waste of money compared to software RAID available with any decent OS. Once you get into the realm of hardware IDE RAID controllers that star
                • I'm guessing that would be IDE controllers? I haven't seen a SCSI RAID controller that works on anything less than the physical drive as the smallest granular unit. While I haven't played with the newest IDE RAID controllers, the ones from a couple of years ago were all based on whole disks.

                  Actually I can recall seeing this option on way more SCSI than IDE/SATA RAID controllers. Most IDE/SATA controllers I've used (e.g. 3ware) only let you specify drives to create an array from and don't then allow you to choose the size of that array.

                  • Actually I can recall seeing this option on way more SCSI than IDE/SATA RAID controllers. Most IDE/SATA controllers I've used (eg: 3ware) only let you specify drives to create and array from and don't then allow you to choose the size of that array. SCSI RAID controllers tend to be more advanced (for obvious reasons).

                    My experience is a little dated: LSI/Mylex eXtremeRAID and 250 series, AMI MegaRAID (several, all older), and the Dell PERC controller (circa 2002 or so). On the IDE front it's all IDE, no SATA: Pro

      • I'm confused what you mean when you say:

        He means: if you define multiple RAID arrays on the card ("logical drives"), can you delete them individually?

  • by ErikTheRed ( 162431 ) on Thursday June 15, 2006 @12:52PM (#15541092) Homepage
    Ok, I realize it's a bad metaphor because you actually can throw a motherboard quite a distance. But here's another example of where things can go horribly wrong: How do they handle error conditions? On my desktop system, I'm running RAID-0 (with WD Raptor drives) for speed. Yes, I know what I'm doing (famous last words). No, I don't store any important data on my desktop (it's on a RAID-5 array on a server). Originally, I was using the Silicon Image 3114R on-board RAID controller included on my Asus A8N-SLI "Premium" motherboard. Eventually one of the drives died. The SI3114R responded by freezing and becoming unresponsive whenever a disk error occurred - under DOS, Linux, or WinXP; the problem is not OS-specific. The rest of the system works fine, but once it hits an error the SI3114R just stops working and returns nothing but errors to the OS. Now, since Asus doesn't update the SI3114R BIOS in their mobo BIOS updates (and I'm too lazy to hack my own), I don't know whether it's bad silicon, a bad BIOS, or a bad design (my guess would be the last). Accessing the drive's S.M.A.R.T. data indicated that the warning numbers were screamingly bad and probably had been for some time.

    So apparently the SI3114R doesn't monitor S.M.A.R.T. data, and its error-handling capabilities fall somewhere between "shitty" and "non-existent". No big deal for me; I was only inconvenienced by having to re-install operating systems and applications.

    The moral of this long-winded story is that you generally get what you pay for. This isn't the first bad experience I've had with on-board RAID controllers. If your data is important, then spend the appropriate money (think in terms of data replacement cost), do the appropriate research, and invest in a RAID setup that's right for your situation. If your protected data consists of anything more important than your Oblivion saved games, your mobo's RAID controller (or the $39 Fry's special) is probably the wrong choice.

    And if anyone cares to know, I'm now using the NVRAID on the mobo (we'll eventually see if it handles failures more gracefully), and I use an Areca ARC-1110 [areca.us] on my server. I can attest that the Areca card does handle failures extremely well, albeit noisily.
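
    Since the controller itself apparently never looks at S.M.A.R.T. data, here is a hedged sketch of the sort of host-side check that would have given an earlier warning. It assumes smartmontools is installed and that the controller exposes the member drives to smartctl (some RAID drivers hide them); the device list is hypothetical.

      import subprocess

      DRIVES = ["/dev/sda", "/dev/sdb"]   # hypothetical RAID member devices

      def smart_healthy(device: str) -> bool:
          """Return True if smartctl's overall health assessment says PASSED."""
          out = subprocess.run(["smartctl", "-H", device],
                               capture_output=True, text=True).stdout
          return "PASSED" in out

      for drive in DRIVES:
          if not smart_healthy(drive):
              print(f"WARNING: {drive} is failing its S.M.A.R.T. health check")

    Run from cron, even something this crude provides the kind of early warning the SI3114R never did.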
    • by Jeffrey Baker ( 6191 ) on Thursday June 15, 2006 @01:06PM (#15541192)
      I second the Areca recommendation. Their cards are very capable of detecting a failed disk, taking it offline, mailing the operator, and sounding the buzzer, all without skipping a beat as far as the host operating system can tell. And their RAID engine is bleeding fast, too. I just wish the kernel folks would try harder to get their driver into the mainline. Areca is the rare example of a manufacturer who undertook the cost to write their own Linux driver and release it under the GPL, and the kernel maintainers have spent more than a year whining and bitching about how the code doesn't fit in their 80-column terminals.
  • I was expecting some hardware to actually be shot [engadget.com]
  • RAID in PCs has been a hot topic, and I suspect a lot of marketing has gone into products that the vast majority of people have no real use for (having a RAID setup amounts to bragging rights).

    If you really do *need* a RAID setup, it seems stupid to ignore the SCSI angle on things, because SCSI RAID controllers are much more mature in features/performance/reliability and obviously aimed at a market which is less tolerant of cheap'n'nasty.

    I know that SCSI is a lot more cost and, compared to SATA, not all that muc
