
Dependable SCSI RAID Controllers for Linux?

Cliff posted more than 12 years ago | from the mission-critical-hardware dept.


PalmKiller asks: "I have been using DPT (now Adaptec) SmartRaid controllers for years with great results in Linux, but with the advent of the Adaptec assimilation they became more unreliable in current kernels (in the 2.2 or 2.4 tree), at least for the DPT SmartRaid V and Adaptec-branded equivalents; i.e., the kernel now panics when you try to use it for anything but an idle uptime box. The crashes have something to do with 5-minute-long command queues creating havoc in the kernel. I have a few SmartRaid IV controllers that plug away without issue, but they use a different driver. I suspect the programmers and people who knew how to program the Adaptec/DPT controllers got lost in the buyout, or perhaps driver quality control took a dive. I would greatly appreciate other Slashdot readers' opinions on a good replacement that is available in the US."

"I have been considering the ICP Vortex RZ and RS series and the AMI MegaRAID as possibles, along with the Mylex line of controllers. I would like some opinions, praise, and even nightmare stories on any of these. I do not want to invest $350-$1500 per controller on another nightmare like the Adaptec/DPT line. It should be obvious, but cost is not primary; reliability and, to a lesser degree, performance are the key issues. In addition, I run my controllers in RAID 5 with a hot spare, so suggestions should be for controllers that can do that RAID mode and that can be administered from a running Linux system so I can do hot swapping. I would also like controllers whose manufacturer keeps current patches available for the stock kernel tree, or is in the kernel tree (for both 2.2 and 2.4; I use 2.2 mostly, due to issues with 2.4), as I never use a canned kernel after the install is done. If you run Windows or some other OS that Adaptec truthfully supports, look for a few *good* DPT or Adaptec controllers on eBay when the swap-out is all over."
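As a quick aside for anyone sizing such a setup, the capacity overhead of RAID 5 with a hot spare is easy to work out. The drive counts below are hypothetical, purely for illustration:

```python
def raid5_usable(n_disks, disk_gb, hot_spares=1):
    """Usable capacity of a RAID 5 set with hot spares: one disk's
    worth of space goes to parity, and spares hold no live data."""
    data_disks = n_disks - hot_spares - 1
    if data_disks < 2:
        raise ValueError("RAID 5 needs at least 3 active disks")
    return data_disks * disk_gb

# Hypothetical example: six 36 GB drives, one kept as a hot spare.
print(raid5_usable(6, 36))  # -> 144
```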


Mylex controllers are junk (5, Interesting)

krow (129804) | more than 12 years ago | (#2892009)

I would blame a really large portion of Slashdot's downtime, and the recent downtime with Freshmeat, on those controllers. Outside of a Megadrive that I used at the Virtual Hospital [], those are probably some of the worst pieces of hardware I have ever run into.
I would never recommend that anyone ever use those cards. Flaky hardware is one issue, but those cards have consistently been the root of a lot of sleepless nights for me fixing the mess that they have caused.

Re:Mylex controllers are junk (1)

Stone Rhino (532581) | more than 12 years ago | (#2899562)

That's nice. He said nothing about Mylex. Thank you for listening to my nitpick.

All corrections are incorrect (1)

Spamboi (179761) | more than 12 years ago | (#2899589)

I have been considering ICP Vortex RZ and RS series and AMI Megaraid as possibles, along with the Mylex line of controllers. I would like some opinions, praises and even nightmare stories on any of these.

The parent post would seem to be an opinion, and perhaps even a nightmare story, about the Mylex line of controllers. Way to read, read-boy.

Re:All corrections are incorrect (1)

Stone Rhino (532581) | more than 12 years ago | (#2899602)

sorry, missed that, heh.

Re:Mylex controllers are junk (1)

PalmKiller (174161) | more than 12 years ago | (#2899640)

OK, I will be scratching that Mylex line off my list immediately :).

Re:Mylex controllers are junk (2)

GiMP (10923) | more than 12 years ago | (#2899661)

Links to where anyone at slashdot or freshmeat claimed that their problems were caused by Mylex controllers? Or should I just take your word on it?

Re:Mylex controllers are junk (2)

krow (129804) | more than 11 years ago | (#2901508)

Seeing how I am the guy who maintains the databases, I would assume my word is enough :)

Probably we could have a couple of sysadmins come in and offer their opinions, since they have to stay up too.

Re:Mylex controllers are junk (1)

Ringlord (82097) | more than 12 years ago | (#2899782)

I have used a Mylex adapter for a quite busy web and mail server for 1.5 years. It works very well.

Re:Mylex controllers are junk (1)

PalmKiller (174161) | more than 12 years ago | (#2899794)

Which mylex raid card are you using?

Re:Mylex controllers are junk (1)

Ringlord (82097) | more than 11 years ago | (#2900692)

It is a DAC960PG PCI controller.

Re:Mylex controllers are junk (2)

Tet (2721) | more than 12 years ago | (#2899874)

Actually, I have completely the opposite experience. The Mylex controllers I've used have never failed me once. I'd recommend them to anyone. That said, my personal opinion is that a RAID controller card is the wrong way to go anyway. Just get an external standalone RAID box. CLARiiONs used to be the best of the bunch, but they've headed up market now, leaving the SCSI arena to Baydel, BoxHill, etc.

Re:Mylex controllers are junk (2)

ansible (9585) | more than 11 years ago | (#2902678)

I've been using Mylex RAID cards (mostly the AcceleRAID 150 and 250s) for over 2 years without any problems. Very solid.

Many, if not most of the problems I've heard about with RAID and SCSI in general are cable-related. If you're experiencing problems, check to see if you're using the correct type of SCSI cable, that it's not too long, and you're using the correct type of terminators (preferably forced-perfect).

Though I've been able to work out SCSI-related problems in the past, I don't think I want to deal with that anymore. The next low-end server I build will probably use IDE RAID. If I have to build a high-end server, I'd rather use FC-AL.
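The cable-length limits behind the parent's advice differ sharply by bus type. The figures below are the commonly cited maximums; treat them as approximate reference values rather than authoritative spec numbers:

```python
# Commonly cited maximum cable lengths, in meters, per SCSI bus type.
# Approximate reference figures only; check the spec for your exact bus.
MAX_CABLE_M = {
    "SE Fast SCSI (10 MHz)": 3.0,
    "SE Ultra SCSI, >4 devices": 1.5,
    "LVD (Ultra2/Ultra160)": 12.0,
    "HVD (differential)": 25.0,
}

def cable_ok(bus, length_m):
    """Return True if a cable run of length_m meters is within the cited limit."""
    return length_m <= MAX_CABLE_M[bus]

print(cable_ok("LVD (Ultra2/Ultra160)", 10))     # -> True
print(cable_ok("SE Ultra SCSI, >4 devices", 3))  # -> False
```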

Re:Mylex controllers are junk (2)

1010011010 (53039) | more than 12 years ago | (#2908808)

Mylex and DPT are both crap.

I had the firmware fail in a mylex controller in a database server. Of course, the busier, and therefore more important, databases were the ones with the most garbage written over their files. This was on NT.

I know of no good RAID controllers. Look for a SCSI-to-SCSI setup rather than PCI-to-SCSI, though.

Re:Mylex controllers are junk (3, Informative)

Quixote (154172) | more than 11 years ago | (#2900133)

We've had issues with Mylex too. Here's our setup: a P-III 1GHz box with a Mylex eXtremeRAID 2000; an Adaptec 2940UW and a 2944UW. One of the Adaptecs is hooked to a Sun A1000 (which has a "Symbios" RAID controller built in, but we're using as a JBOD).

Anyways: when we hooked up the A1000 (our Sun server died), the system suddenly became flaky! We boot from a standalone SCSI disk, so booting wasn't a problem. But the Mylex would lose its settings; half the disks in one of the trays wouldn't show up, etc. We spent days trying to figure it out, but to no avail. After repeated messages to Mylex support, we got the solution: disable the BIOS on the Mylex. It turns out that the Symbios RAID controller in the A1000 was confusing the Mylex BIOS, even though the A1000 was on a separate Adaptec controller. Go figure.

Re:Mylex controllers are junk (0)

Anonymous Coward | more than 11 years ago | (#2904182)

Let's see: three different SCSI controllers from two different manufacturers in the same box.

Sounds like you were asking for trouble.

Re:Mylex controllers are junk (1)

dbullock (32532) | more than 11 years ago | (#2901613)

I spent five years with a system integrator building RAID systems.

We started off with Mylex and eventually had to stop using them due to deteriorating quality of the boards.

Krow is dead on. Mylex is junk.

Re:Mylex controllers work fine for me (2)

Local Loop (55555) | more than 11 years ago | (#2904150)

I've been using a Mylex AcceleRAID 150 for just about a year now with zero problems. The box had about 235 days of uptime until I shut it down to add memory.

What precisely was your problem with Mylex?

Re:Mylex controllers work fine for me (2)

krow (129804) | more than 12 years ago | (#2904304)

Random kernel crashes, forgets about disks... just about every kind of error that I have seen.


SCSI is DEAD (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#2899618)

StorageReview officially confirms: SCSI is dying

Yet another crippling bombshell hit the beleaguered SCSI community when recently IDC confirmed that SCSI accounts for less than a fraction of 1 percent of all drives. Coming on the heels of the latest StorageReview survey which plainly states that SCSI has lost more market share, this news serves to reinforce what we've known all along. SCSI is collapsing in complete disarray, as further exemplified by failing dead last in the recent Sys Admin comprehensive networking test.

You don't need to be Davin Shearer to predict SCSI's future. The hand writing is on the wall: SCSI faces a bleak future. In fact there won't be any future at all for SCSI because SCSI is dying. Things are looking very bad for SCSI. As many of us are already aware, SCSI continues to lose market share. Red ink flows like a river of blood. UltraSCSI is the most endangered of them all, having lost 93% of its users.

Let's keep to the facts and look at the numbers.

SCSI leader Eugene Ra states that there are 7000 users of SCSI. How many users of UltraSCSI are there? Let's see. The number of SCSI versus UltraSCSI posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 UltraSCSI users. fibre channel SCSI posts on Usenet are about half of the volume of UltraSCSI posts. Therefore there are about 700 users of fiber channel SCSI. A recent article put SCSI2 at about 80 percent of the *BSD market. Therefore there are (7000+1400+700)*4 = 36400 SCSI users. This is consistent with the number of SCSI Usenet posts.

Due to the troubles of SCSI, abysmal sales and so on, Western Digital went out of business and was taken over by Seagate who sell another troubled Drive. Now Seagate is also dead, its corpse turned over to yet another charnel house.

All major surveys show that SCSI has steadily declined in market share. SCSI is very sick and its long term survival prospects are very dim. If SCSI is to survive at all it will be among RAID hobbyist dabblers. SCSI continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, SCSI is dead.

FACT: SCSI is dead, long live ATA

Re:SCSI is DEAD (0, Flamebait)

PalmKiller (174161) | more than 12 years ago | (#2899647)

Well, I don't post on Usenet; I decided to Ask Slashdot. Perhaps SCSI users don't have time to read Usenet; I know I have not read a newsgroup, except in digest form, in about 3 years, much less posted.

Re:SCSI is DEAD (3)

GiMP (10923) | more than 12 years ago | (#2899680)

So what is replacing it? I know you did not just suggest that IDE is actually worth anything.

I would really like to see you run servers on IDE.. HaHa. You are not just a troll, but a stupid one.

SCSI will not die any time soon. If it does, it will be replaced by Fibre Channel. You couldn't pay me to use an IDE disk anymore, except maybe to boot legacy (x86) hardware so I can boot from an NFS server... for a workstation.

For a server, IDE has no place. Half-duplex, CPU intensive, unreliable; do I need to say more? Oh, and incredibly limited in the number of disks. A RAID 5 array of IDE? Yeah, right. You can only have 8 IDE disks in a system, all of which use interrupts.

Re:SCSI is DEAD (1)

Toraz Chryx (467835) | more than 12 years ago | (#2899823)

fyi, you can have a couple of <a href=" ddetail.html?prodkey=AAR-2400A&cat=%2fTechnolo gy%2fRAID%2fATA+RAID">these</a> in a single system..

but yeah, one of <a href=" ddetail.html?prodkey=ASR-3410S&cat=%2fTechnolo gy%2fRAID%2fTwo+and+Four-Channel+SCSI+RAID">THE SE</a> is preferable.. especially for a server environment :)

Re:SCSI is DEAD (1)

Toraz Chryx (467835) | more than 12 years ago | (#2899827)

well, THAT fucked up... links == SCSI RAID [] IDE RAID []

Re:SCSI is DEAD (2, Interesting)

megabeck42 (45659) | more than 11 years ago | (#2900241)

Aight, sorry - I do agree with you, and have a stack of IBM DDYS disks to show for it - BUT, you said some things that were wrong:

Just about all PCI IDE controllers can use DMA, which cuts the CPU intensiveness down to almost SCSI level.

Further, not all drives have to have their own individual interrupt - this depends on the IDE chip and how the interfaces are arranged on the PCI daughterboard or on the motherboard (interrupt sharing, etc.). Promise chips will use one interrupt for two interfaces.

SCSI does offer a whole slew of advantages (disconnect, command queueing, etc.), and these are advantages in a RAID setup. IDE does suck, but not because it's CPU intensive or gobbles interrupts.

It really depends on what you're doing... (2, Insightful)

Sxooter (29722) | more than 11 years ago | (#2903546)

While I'll agree that SCSI is superior for most applications, IDE is no slouch nowadays.

On one of our production servers we have twin 18 Gig 10krpm UltraWide SCSI drives for the database, and a pair of 80 Gig IDE drives for static data like web content.

The pair of U2W SCSI drives in a RAID1 can be read at about 48 Megs a second by bonnie, while the pair of 80 gig IDEs can be read at about 28 Megs a second.

pgbench, a little benchmarking program for postgresql, gets about 150 to 200 transactions per second on the dual SCSI drives, while it gets about 100 to 120 on the dual IDE drives.

The problem is, even under its heaviest loads, that machine never handles more than 10 or 20 transactions per second. Both sets of drives are plenty fast enough to handle the load.

For servers that need hundreds of gigabytes of storage but only have to provide static storage for a medium to small group, the money you'd spend on SCSI is probably better spent on other options for that server.

For a database server handling hundreds of concurrent users, SCSI (via electrical cables) is a good choice, but maybe a SCSI over FC-AL setup would be needed.

Engineering isn't about which component is the absolute best, it's about which component makes the most sense for what you're doing.
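The headroom argument above is just arithmetic; here is a sketch using the poster's rough numbers (48/28 MB/s reads, 150/100 TPS, ~20 TPS peak):

```python
# Rough figures quoted in the post above; the numbers are the poster's
# measurements, reproduced here only for the arithmetic.
drives = {
    "2x U2W SCSI (RAID 1)": {"read_mb_s": 48, "pgbench_tps": 150},
    "2x IDE":               {"read_mb_s": 28, "pgbench_tps": 100},
}
peak_load_tps = 20  # heaviest real load the server ever sees

headroom = {name: p["pgbench_tps"] / peak_load_tps for name, p in drives.items()}
for name, h in headroom.items():
    print(f"{name}: {h:.1f}x headroom over peak load")
# Both pairs sit well above 1x, which is the poster's point:
# either set of drives is fast enough for this workload.
```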

Re:SCSI is DEAD (2)

Inoshiro (71693) | more than 12 years ago | (#2910675)

SCSI is SCSI whether it be Fibre Channel or the 50/68-pin connector.

Fibrechannel is just another connector.

AMI sold the Megaraid division to LSI (2, Informative)

Toraz Chryx (467835) | more than 12 years ago | (#2899664)

An LSI MegaRAID should work quite nicely in Linux. I have an older AMI MegaRAID Enterprise 1200 (model 428) with a couple of small disks as my Linux disks (and one of them holds my Windows swapfile :)

Never had any problems with it whatsoever.

Re:AMI sold the Megaraid division to LSI (1)

PalmKiller (174161) | more than 12 years ago | (#2899681)

I appreciate you letting me know this, as I have a lot of storage that needs good RAID controllers. I was more or less down to the AMI/LSI and the ICP/Intel controllers at this point. It's interesting that a good number of the RAID controller companies got bought over the last few years; it's also interesting seeing the hard drive company buyouts.

Re:AMI sold the Megaraid division to LSI (2)

rasjani (97395) | more than 11 years ago | (#2903305)

My company bought a bunch of Intel server boxes loaded with AMI MegaRAID 5XX (can't remember which) controllers, and I must say that, at least in Solaris, they suck big time (due to drivers, I think). Best uptime for such a box is around 3-5 days, depending on HD usage. Things start falling apart: ssh stops responding first, and then, after an hour or so, all other services die too (like apache, tomcat, jserv or mysql).

Well, the rest of the company is Sun advocates, but as I'm part of another division, I've taken the liberty of installing Linux in production, and now I've had these machines running over 2 months and I can't even remember logging into the boxes after I installed them... (Although I had to tweak the setup first: the server's SCSI backplane didn't support maximum speed, and also, someone reported that the current driver might not work well in 64-bit mode, so I dropped it to 32-bit.)

So, at least, MegaRAIDs seem to work for me, but buy one, do your tests and make your own choice =)

IBM ServeRaid (3, Informative)

Ringlord (82097) | more than 12 years ago | (#2899786)

We have two IBM Netfinity servers that use the IBM ServeRAID 3L. The cards are not that great, as they only have 4 MB of cache, but they run reliably under 2.4.13.

The drivers are maintained in the kernel, so there is no patching or downloading of drivers.

I think IBM has other models that come with more cache, so you could try calling them.

Re:IBM ServeRaid (1)

PalmKiller (174161) | more than 12 years ago | (#2899817)

I just looked at them on their website; they have a 4x with 64 MB cache and battery backup installed for $900 street.

Re:IBM ServeRaid (3, Informative)

velkro (11) | more than 12 years ago | (#2899883)

Disclaimer: I'm an ex-IBMer, who worked in the Linux services area.

I've used the 3[hl] and 4[hl] series of ServeRAIDs for over a year under Linux (both 2.2.x and 2.4.x kernels) with decent results. I currently have about 15 IBM x340s with ServeRAID 4Ls running in production for nearly a year - no problems so far; however, I did avoid early 2.4.x kernels (only upgraded after 2.4.7). I've suffered through failed drives and whatnot without data loss.

If you can find the ipsutils.rpm out there, you can manage it from the command line; otherwise, the Java-based ServeRAID manager will let you do everything the Windows tools do, under Linux.

Re:IBM ServeRaid (1)

hfcs (22012) | more than 11 years ago | (#2900572)

Ditto on the ServeRAIDs. I've got one running in a Netfinity box doing Oracle, and it's been problem-free since it arrived.


What about ATA RAID 5? (3, Interesting)

dschuetz (10924) | more than 11 years ago | (#2900102)

Does anyone have any thoughts about IDE raid, especially the offerings from Promise Technology [] ? They've got cards that do RAID 5 with regular IDE drives, including hot failover capability. They've also got subsystems that put a full 8 disks into a RAID array, but presents it to the controller as a single SCSI device.

Advantages: Cheap drives.
Disadvantages: Speed, maybe, though since it's all going directly into the PCI bus, I'm not sure this is an issue.

Anyone used these? Comments? I figure with their SuperTrak controller and a bunch of 80 or 100 GB drives, you could have half a terabyte in your basement for under two grand.
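A back-of-envelope check of the "half a terabyte for under two grand" figure, using assumed circa-2002 street prices (the per-drive and controller costs below are guesses, not quotes):

```python
n_drives = 8           # one full 8-channel card
drive_gb = 80
drive_cost = 150       # assumed price per 80 GB IDE drive
controller_cost = 500  # assumed price for an 8-channel ATA RAID 5 card

usable_gb = (n_drives - 1) * drive_gb  # RAID 5 loses one drive to parity
total_cost = n_drives * drive_cost + controller_cost

print(f"{usable_gb} GB usable for ${total_cost}")
print(f"${total_cost / usable_gb:.2f} per usable GB")
```

Under those assumptions the claim holds, with room to spare for a case and cabling.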

Re:What about ATA RAID 5? (2, Interesting)

Strog (129969) | more than 11 years ago | (#2900769)

The Promise and Highpoint controllers are actually soft RAID, meaning they use the host CPU. 3ware has good hardware IDE RAID for up to 8 drives, but they seemed to stop selling them except in their subsystems. Everything I've heard about the Linux drivers is that they are good, and I know the FreeBSD drivers are rock-solid. I think the Adaptec controller is hardware too, but I am not 100% sure on that.

Re:What about ATA RAID 5? (1)

questionlp (58365) | more than 11 years ago | (#2900877)

The Adaptec 2400A ATA RAID controller is a hardware-based solution from Adaptec (more info on the product can be found here []). The 1200A is the soft-RAID controller that you were mentioning.

On the Promise side, the SuperTrak SX6000 is their hardware ATA RAID solution (the PDF datasheet can be found here []). The older version of the SuperTrak SX6000, the S/T66, is also a hardware ATA RAID controller. The FastTrak series is their soft-RAID controller line.

I'm personally looking at the 3Ware offerings (as the FreeBSD 4.x kernel has support for it, I believe in the default kernel) and possibly the Adaptec 2400A.

Re:What about ATA RAID 5? (2, Informative)

Chronoforge (21594) | more than 11 years ago | (#2901280)

Actually, 3ware has reversed their decision, and is again selling the IDE RAID cards.

Re:What about ATA RAID 5? (2)

ansible (9585) | more than 11 years ago | (#2902594)

Unfortunately for us small guys, 3ware is discontinuing their 32-bit PCI cards in favor of 64-bit.

Well, if I ever decide to build a server based on IDE RAID, maybe I'll buy a 64-bit mobo.

3ware Experience (0)

mccormick (40772) | more than 12 years ago | (#2904396)

I've been running with a 3ware Escalade 6200 (ATA/66 - discontinued now) for a little over a year now. I haven't had a single problem. I've been running the 2.4.x series most of that time, and even in the early releases it was very stable.

I recently built a server with an Escalade 7410 (four drives, RAID 10) and I think 3ware made the right decision to go with the 64bit PCI support. It's a pretty sweet setup, and again, great support under Linux (which is what I use.) The 3ware Escalade line comes highly recommended from me, but there are a few things on my wishlist.
  • Upgradable cache memory. Not just the standard 1 or 2 MB, with the $100 difference between the 7x10 and 7x50 series.
  • Different port configuration. I recently found that in a 4U server case (with a 7410), having 4 drives (stacked horizontally) with the vertically placed IDE connectors on the Escalade board itself was a challenge to wire. Those wide parallel ATA cables just got in the way too much. I eventually chose to buy some very convenient rounded IDE cables (from Bigfoot Computing []).
  • ATA/133 support. I imagine this will come with time, but again, I'm looking forward to improved performance (I wonder, will they be called 8xx0s?)
I don't deny the very positive influence SCSI has had on the industry (recent experiences with 400 Mbit/s IEEE 1394-based networking have been great, and 1394/FireWire/iLink is essentially a serial SCSI protocol and physical medium), but I have definitely reaped the benefits of ATA/IDE-based RAID.

Compaq (4, Informative)

JediTrainer (314273) | more than 11 years ago | (#2900129)

Don't discount the Compaq line of SmartArray controllers. I've been using one for 2 years without a hitch. They support everything you need them to do (I'm using the Smart/2P controller in my server). Never had a single problem with it. You can find these on eBay really cheap, too.

Re:Compaq (3, Informative)

Chuck Milam (1998) | more than 11 years ago | (#2900447)

I would agree. Compaq SmartArray driver support has come a long way in the past 18 months. I had a Compaq to put Linux on, and 18 months ago I couldn't get the SmartArray controller to work as much more than just a straight SCSI controller, so I switched to an ICP Vortex. 18 months later, while doing some hardware upgrades, I switched the SmartArray card back in and went to Compaq's web site for flash updates and driver information. Amazing what a difference a year and a half makes - it went from little to no information to "Oh my god! There are so many support options and Linux drivers I'm not sure where to start!"

The SmartArray works great. The little lights now light up on the drives (ya know, green, yellow and "uh-oh"). Heh.

Re:Compaq (2)

Drakino (10965) | more than 11 years ago | (#2902732)

Compaq also has an online configuration tool that runs under Linux now. With it, hot-plug support and LVM, it's possible to add a hard drive and add it to the filesystem with no reboots.

I'm not sure how well these controllers work outside a Compaq server though, I have never tried.

Adaptec 29xx/39xx (1)

Bravo_Two_Zero (516479) | more than 11 years ago | (#2900239)

As always, your mileage may vary...

I'm having a lot of success with my Adaptec 29xx (2940 for SE like CD or external SE device, 2944 for LVD) and 39xx series cards. We don't use anything else in any of our operating systems (unless they are built-in to a motherboard). Granted, I'm not stressing my systems 24 hours a day... more like a few hours spread out over a regular business day.

I'm sure there are plenty who will readily disagree, but I don't think I've found, end-to-end, better hardware for SCSI controllers. Sure, getting the AAA-133 RAID controllers to work can be a challenge, but we've been nothing but happy with the rest.

We also have a lot of success with Mylex RAID controllers on several critical production boxes, though those are not *nix machines (NT 4.0 SP6).

fwiw, we pulled the DPT cards we have and replaced them with Adaptecs.

Compaq is worth a look (2, Informative)

un4given (114183) | more than 11 years ago | (#2900296)

Compaq's SMART array controllers work well under Linux, and support RAID 0, 1, 5, and 0+1 with hot spares. I have been running RedHat 7.1 on one of these for months, in a production environment, with great success.

ICP Vortex (2, Informative)

IpSo_ (21711) | more than 11 years ago | (#2900665)

I work for a medium sized web hosting company which sells dedicated/managed servers to customers. We will only put ICP Vortex cards in them. These are the only cards we put in our own servers as well, I would say we have at least 40 of them in our datacenter and they work great. Not to mention if a drive fails you can easily hear them beeping from outside the datacenter, even with all the server/air conditioner noise.

Great cards, great speed, and a not so bad price. They work flawlessly in Linux and Windows.

Re:ICP Vortex (3, Informative)

Zurk (37028) | more than 11 years ago | (#2900799)

Yep, I'll second that. ICP Vortex also has Linux utilities to control the cards from within Linux instead of booting to DOS. I've never had any problems with Vortex controllers. Just make sure you use a stick of good (Crucial or somesuch) ECC SDRAM for the cache memory; don't spare any expense on the cache memory. BTW, I've run them on AlphaLinux with the 164LX boards and SRM BIOSes. Works great. A 21264 Alpha with a gig of RAM and an ICP Vortex card with 256 MB of ECC cache, connected to an 8-drive array, can't be beat.

Re:ICP Vortex (1)

hbackert (45117) | more than 12 years ago | (#2905118)

I can second this too.

We used several ICP controllers with 2 to 7 disks (RAID-1, RAID-5, with and without hot spare) and they worked well. Mostly under Windows NT, but I built one with Linux and an Oracle DB (several GB of data) on Linux (2.2 at that time), and we did some stress tests (about 10 users connecting, doing full table scans, updating large amounts of data), and while the RAID array was working really hard, the box was entirely stable. Even simulating a drive failure did not cause data loss. And the Linux support is great, as well as the support in general (but that was before Intel bought them, so now it might be either worse or as good as it was before).

Two options to consider (1)

depeche (109781) | more than 11 years ago | (#2900782)

I have used Mylex RAID controllers very successfully with the older kernels on Linux (I run debian-stable) and on FreeBSD. Hot-Swap worked fine, and the on-board BIOS could be used for all configurations, plus there was adequate information from the kernel on RAID state. So, unless they have become significantly worse recently, I would at least consider Mylex.

But, you might want to consider one of the alternatives like RaidTec or its ilk. These are large boxes with RAID controllers built in and capacity for a fair number of disk enclosures. The RaidTec, for instance, can take 512GB+ (maybe 768GB+ now) and has options for redundant controllers, either fiber channel or SCSI. Just shows up as drive space. I haven't yet had a RaidTec unit up with Linux, but they claim it's fine. There are many others, with the EMC units being at the top of the cost heap.

Can you run a test? (2)

bluGill (862) | more than 11 years ago | (#2900807)

Since cost is not your primary worry, can you run a test? Get one of each, configure each with several hard drives, and see what happens to each under load. While there is a large difference between the way systems behave when running 50GB over 7 hard drives and the way they work when running 50TB in a real RAID system, anything that doesn't work with the smaller systems will fail in the large ones too.

Adaptec works for me (1)

wang33 (531044) | more than 11 years ago | (#2901232)

We just put together a system here at work with a dual Athlon XP Tyan board and a 64-bit Adaptec RAID card running RAID 5 + 1 hot spare, and have not had any trouble whatsoever. SuSE 7.3 detected it all on its own; no driver disk needed. I am not sure of the exact model number (and I am not going to go pull the cover off one of our main production servers), though dmesg gives this: Loading Adaptec I2O RAID: Version 2.4 Build 5. I dunno why this poster has had such trouble; maybe he should upgrade his distro to SuSE :-)

Re:Adaptec works for me (0)

Anonymous Coward | more than 11 years ago | (#2902951)

Interesting, since I do have SuSE running on one machine, or did for a while. Again, these Adaptec (or DPT... same animal) RAID cards run great under light load (I had 200 days of uptime on a lightly loaded system), but once you hammer it to the point that the I/O system has to work hard, it deadlocks. It is a known problem for other users too... Adaptec supposedly tried to fix it, but it's still an issue, it seems.

Re:Adaptec works for me (1)

PalmKiller (174161) | more than 11 years ago | (#2903056)

Hehe me is palmkiller, how in the world did I suddenly become a coward :)

IDE Raid Controllers (1)

scotti (222754) | more than 11 years ago | (#2901260)

We just bought a system that has a usable terabyte of disk space and paid about $7500 for it.

We're using 10 Maxtor 130 gig drives on two 3ware [] 7510 controllers. We could still put 6 more drives on the two controllers.
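The "usable terabyte" works out if each controller runs one five-drive RAID 5 set; the post doesn't give the actual layout, so the split below is an assumption:

```python
drive_gb = 130
system_cost = 7500

# Assumed layout: one 5-drive RAID 5 set per controller (not stated in the post).
per_card_gb = (5 - 1) * drive_gb  # RAID 5 yields n-1 drives of usable space
usable_gb = 2 * per_card_gb

print(usable_gb)                               # -> 1040
print(f"${system_cost / usable_gb:.2f} per usable GB")
```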

Re:IDE Raid Controllers (1)

scotti (222754) | more than 11 years ago | (#2901471)

Small correction to my posting. The controller is a 7810 and not a 7510. :)

Re:IDE Raid Controllers (2)

Wakko Warner (324) | more than 12 years ago | (#2905023)

I use the 6800 at home for about 300 gigs of RAID-5 storage. I use FreeBSD as the OS, though, for this particular machine. (Linux seems to be 3ware's preferred OS, however.) So far, things have been fine. Unfortunately, the first card I was sent was DOA (it seemed to have cache problems). The second one worked fine, and is still in the system working happily. I'm not sure I'd recommend these cards for HA systems, though, for a couple of reasons:

Can you buy hot-swappable IDE enclosures? I've never seen any.

Performance-wise, these cards aren't top-notch. They have a very small amount of cache. Modern SCSI RAID cards take DIMMs and can be easily upgraded to more cache if necessary. These things have soldered-on memory.

For mass storage, they're great. For high-performance mass-storage, I'd still look to SCSI. Where else can you get 15000 RPM drives with 5-year warranties?

- A.P.

Re:IDE Raid Controllers (1)

scotti (222754) | more than 12 years ago | (#2905048)

You can get hot swap IDE enclosures. We got ours from California PC [] .

Maxtor?!? (1)

itwerx (165526) | more than 12 years ago | (#2935186)

Maxtor (and Western Digital) are pretty bad these days. We quit using them in our servers altogether a while back (I work for a VAR). IBM and Seagate (with a couple of notable individual model exceptions) are the best. Quantum is middle-of-the-road, of course, though that may change since Maxtor bought them a year or so ago.

Stability issues?! (2)

tzanger (1575) | more than 11 years ago | (#2903194)

I wonder if your hardware isn't going bad. I too run DPT SmartRAID controllers: a 2654U2 to be exact (the 2-channel version), in a 440LX P2/266 (we're network-bound, not CPU-bound), which serves files to about 50 people in a file-heavy office environment. Before that, it was a SmartRAID V with the hardware cache/RAID card (which is now in use in another, heavier-hit webserver).

Zero stability issues on both. The 2654U2 has a 5-drive RAID 5 + hot spare (UW2 SCA drives) on one channel and a 6x24 DDS-3 on the other. I've done some pretty I/O-intense things on this controller (including rebuilding the array during office hours) with no problems at all. This is on kernel 2.4.17. The SmartRAID V is on a 2.2.14 system that hosts about 50 colocated web and mail sites (it does a pretty good job of keeping the T1 busy). It runs a RAID 1+0 array with really old Seagate Barracuda SCSI-1 drives and a single external DDS-3 for backup. Again, zero stability issues. I'd buy these again without hesitation.

Perhaps you need to delve deeper into the problem. The 2654U2 did not like the original P90 system the server used to run in; we had bad issues there, and the tech basically said that the original PCI spec was not good enough for the card. We upgraded the motherboard and all was fine. If you're running 2.4, make sure you're on the .16/.17 kernels, as earlier 2.4 kernels had issues with all manner of things, though not specifically the DPT I2O drivers, IIRC.

Both of these systems run the kernel drivers and use the dptutil software that DPT used to have (which, you're right, has gone the way of the dodo after Adaptec's assimilation of DPT). What specifically can you do to cause problems? I don't think it's the card/drivers in general, but if you give me a test or two I can run to see whether I'm affected as well, we might be able to fix this.

Slashdot's braindead lameness filter is not letting me post my dptutil -L output. Sorry.

Re:Stability issues?! (1)

PalmKiller (174161) | more than 11 years ago | (#2903769)

It is interesting that you have no problems. I have tried three mainboards (a dual Pentium III and two Athlon boards) and two separate 2654U2 controllers (well, one is a 2654U1) on one of my test machines. The lockups don't often happen predictably, but I do have one scenario that is repeatable on the test machine: compile a kernel four times in a row with no more than a couple seconds' pause between runs. I use

make dep; make clean; make bzImage; make clean; make bzImage; make clean; make bzImage; make clean; make bzImage

(make dep only the first time), and then usually about 3-4 seconds after that it's locked; if not, doing an ls -lR / > /tmp/listing.txt will finish it off. This happens on 2.2.19 and 2.4.17 kernels and even 2.4.18pre ones. It has to be done at the console, or using screen or similar, as telnetting in slows it down to the point that it doesn't kill it.
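For anyone wanting to reproduce this, here is a sketch of the recipe above as a small script. It only *prints* the command sequence (a dry run), so you can eyeball it or pipe it to sh from inside a kernel source tree; the four-pass count and the /tmp/listing.txt path come straight from the description above, but the script itself is my own wrapper, not PalmKiller's.

```shell
#!/bin/sh
# Dry-run generator for the stress recipe above: prints the exact
# command sequence. Pipe the output to sh inside a kernel tree to run it.
gen_stress() {
    echo "make dep"                      # dependency pass, first time only
    for pass in 1 2 3 4; do
        echo "make clean"
        echo "make bzImage"
    done
    # the follow-up that reportedly finishes off a wounded box
    echo "ls -lR / > /tmp/listing.txt"
}
gen_stress
```

Something like `cd /usr/src/linux && sh stress.sh | sh` would run the whole sequence back to back, with no pauses between builds.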

Let me know if this kills your box!

Controllers? Who needs controllers? (2, Informative)

Sxooter (29722) | more than 12 years ago | (#2904778)

There are three good ways to get your servers out of the RAID-controller business. One is NAS (Network Attached Storage); the second is using plain SCSI or IDE controllers with RAID provided by your OS; and the third is buying an external box that is already a RAID internally but just looks like one big fat drive to the host.

We run all our Linux boxen at work with kernel mirroring, and it uses almost no CPU even under pretty heavy parallel load. It's great for the base OS with SCSI or IDE, since the only thing those drives do once the box boots is swap. Striping your swap space across multiple drives really helps when a server starts running low on memory.
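Striping swap doesn't even need the md driver: giving each swap partition the same priority in /etc/fstab makes the kernel round-robin pages across them. A sketch (the device names here are hypothetical):

```
# /etc/fstab -- equal pri= values stripe swap across both disks
/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
```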

I have mirror sets running at 48 megabytes a second on two-year-old 18GB 10k SCSI drives for streaming output, and they provide very good performance under parallel load as a database disk set.

I've never had the kernel RAID drivers act flaky since I started using them over two years ago, and I've done various things like hot-inserting a disk into both RAID 1 and RAID 5 arrays (both were pretty easy to do) and typing the respected, yet undocumented --really-xxxxx (xxxxx = a 5-letter word not mentioned here!) flag a few times.
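For reference, with the raidtools that go with the 2.2/2.4 kernels, a mirror with a hot spare is just a few lines of /etc/raidtab; the device names below are hypothetical, but the file format and the mkraid/raidhotadd commands are standard raidtools fare:

```
# /etc/raidtab -- two-disk mirror plus one hot spare (hypothetical devices)
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        1
    chunk-size            4
    persistent-superblock 1
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
    device                /dev/sdc1
    spare-disk            0
```

After `mkraid /dev/md0` builds the array, a replacement disk can be hot-added back into a degraded set with `raidhotadd /dev/md0 /dev/sdb1` -- that's the hot-insert trick mentioned above.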

A friend is in the process of building NAS servers in 2U units with multiple IDE cards and ~500GB of storage for ~$3500 or so. SCSI versions would be a bit more expensive and bigger, and would probably need more cooling, but they'd be faster too. Right now the IDE ones are fast enough in a RAID 5 configuration.

The IDE ones can flood a 100Base-TX connection, so performance isn't really an issue for anything less than gigabit, and even then the IDE boxes will use up a goodly chunk of that.

The external RAIDs are often the fastest for databases, offering fibre-optic connections. They're not cheap, but if you're running eBay's database, cheap isn't the point anymore. :-)

If you have to have a RAID card, I can recommend the AMI MegaRAID 428, which goes for about $100 used on eBay right now. It's not that fast (I never got more than 20 megabytes a second out of one), but it's very solid and reliable, and it can hold up to 45 SCSI hard drives if you can afford the cooling and electrical for them. Plus, the first channel looks like a regular SCSI card to anything other than a hard drive, such as a tape drive or CD-ROM, so you don't need another SCSI card if you want a tape drive to back it up.

While the MegaRAID site no longer has configuration software available, this site points to a page on Dell (index.asp?fileid=R28825) where you can find management software for the MegaRAID controllers.

ICP Vortex good (2)

billmaly (212308) | more than 12 years ago | (#2904889)

In my albeit limited experience with RAID controllers, you cannot buy a better controller than ICP Vortex. Support is good, and they make a solid product. And you pay for it!

IDE RAID is fine for workstations and home use as far as I'm concerned; it has no business in a corporate server environment. Anyone who tells you different is shaving pennies and hasn't a clue. Of course, your opinion may differ!