
Experiences w/ Software RAID 5 Under Linux?

Cliff posted more than 9 years ago | from the how-well-has-it-worked-for-you dept.

Data Storage 541

MagnusDredd asks: "I am trying to build a large home drive array on the cheap. I have 8 Maxtor 250G Hard Drives that I got at Fry's Electronics for $120 apiece. I have an old 500Mhz machine that I can re-purpose to sit in the corner and serve files. I plan on running Slackware on the machine, there will be no X11, or much other than SMB, NFS, etc. I have worked with hardware arrays, but have no experience with software RAIDs. Since I am about to trust a bunch of files to this array (not only mine but I'm storing files for friends as well), I am concerned with reliability. How stable is the current RAID 5 support in Linux? How hard is it to rebuild an array? How well does the hot spare work? Will it rebuild using the spare automatically if it detects a drive has failed?"
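For reference, a sketch of how an array like this might be created with Linux's md tools--seven active disks plus one hot spare out of the eight drives. Device names and the filesystem choice are placeholders, not from the post:

```shell
# Build a 7-disk RAID-5 with one hot spare from eight identical drives
# (device names are illustrative; hdd skipped as a typical CD-ROM slot):
mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 \
      /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hde1 /dev/hdf1 /dev/hdg1 \
      /dev/hdh1 /dev/hdi1
# Make a filesystem and watch the initial parity build:
mkfs -t ext3 /dev/md0
cat /proc/mdstat
```

With a spare configured this way, md rebuilds onto it automatically when a member is kicked out of the array.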


Advice: Get lots of RAM (0, Informative)

Anonymous Coward | more than 9 years ago | (#10675185)

Writing large files to an eight-drive RAID-5 array will be butt slow unless you have a LOT of RAM.

The idea is that in order to write data to any sector on one of the drives, the sectors from six of the other drives need to be read, all XOR'd together, and then the result written to the remaining drive.

In theory, this could be done simultaneously--read from all drives at once. In practice, Software RAID and ATA isn't so good at that kind of thing. (Good hardware RAID is a different story.)

So the idea is that those six reads will take a reasonable amount of time, every time there is a write. If you have a lot of RAM, and/or don't write really large files, it won't be a problem because all the data can be cached in RAM and the reading/writing involving the disks can be done later, at the OS's leisure. However, if you don't have a lot of RAM, or copy really big files, you'll have performance issues.

You may not notice this for a little while, until your array starts filling up, because some implementations (not sure about the Linux software one) optimize it so that they assume unused sectors are filled with a known value, so they don't actually read from drives where the sectors haven't been written to yet (they keep a big table in memory). This is a GREAT optimization. But over time, it will get slower and slower.

So my advice to you is to install a lot of RAM in this system, whatever the motherboard allows. At least one gigabyte, but preferably two or more.

Re:Advice: Get lots of RAM (5, Interesting)

mortonda (5175) | more than 9 years ago | (#10675297)

The idea is that in order to write data to any sector on one of the drives, the sectors from six of the other drives need to be read, all XOR'd together, and then the result written to the remaining drive.

Your logic eludes me. The blocks do not need to be read, as we are in the process of writing. We already have the data, because we are writing, so why would we re-read the data?

Furthermore, block sizes default to 4k, though you could go to 8k or 32k block size. At any rate, you don't need a gig of RAM to handle this.

Finally, XOR is not that expensive an operation, and a 500MHz CPU is going to be able to handle it faster than any but the most expensive controller cards.

So unless you are actually a RAID kernel developer, I don't buy your story.

Re:Advice: Get lots of RAM (0)

Anonymous Coward | more than 9 years ago | (#10675338)

Hmm, nope, I think that a 3ware controller with its RISC processor can push more XOR operations, but I agree with you about the amount of RAM. Why??

Re:Advice: Get lots of RAM (5, Informative)

Anonymous Coward | more than 9 years ago | (#10675334)

Another piece of advice would be, since you have eight identical drives, to use only seven drives in the RAID array, and keep the eighth one out of the array entirely, either outside the computer in an antistatic bag or as a "hot" spare--installed but idle.

When one of the drives fails--and one of the drives will fail--this will allow you to swap in the replacement drive immediately, before another drive fails. (Remember, if two drives fail in a RAID-5 array, you lose data.) You can then return the defective drive, get a replacement from Maxtor, and when that one arrives FedEx in a few days, that one will be your new "spare."

You can either keep your spare drive unused, outside the computer, or keep this spare "hot"--in the computer, connected and ready to go, but unused by the array or anything else, and have the array fall over to it automatically when a drive fails.

Both ways offer advantages. If you keep the drive out of the computer, since you need to shut down to remove the bad drive, you can install the spare drive at that time. If you were to keep the drive "hot" in the meantime, your extra "new" drive has been spinning for months or years, and exposed needlessly to heat, which increases its probability of failure, making it essentially as likely to fail as all your other drives that have been running the whole time.

However, keeping the spare "hot" means that the array can be rebuilt sooner, in some cases automatically before you know there is a problem. This can reduce the possibility of data loss. You will have to reboot twice--once to remove the defective drive to return to Maxtor, and once when the replacement arrives to install it as the new hot spare.

Which of those two to choose is a judgement call, but it's absolutely critical to have a spare drive on hand.
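With Linux md, the swap-in steps described above might look something like this (array and device names are placeholders, not from the post):

```shell
# Mark the dead member failed and pull it out of the array:
mdadm /dev/md0 --fail /dev/hdc1
mdadm /dev/md0 --remove /dev/hdc1
# Add the replacement; it becomes the new hot spare, or starts
# rebuilding immediately if the array is currently degraded:
mdadm /dev/md0 --add /dev/hdg1
# Watch rebuild progress:
cat /proc/mdstat
```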

Re:Advice: Get lots of RAM (1)

Jah-Wren Ryel (80510) | more than 9 years ago | (#10675358)

If you copy (or just write) really big files, then there should be no reason for the raid-5 driver to read all the other sectors in the parity chunk (don't know official terminology for it offhand) since, if done right, you will be writing all of the sectors with brand new information anyway.

Re:Advice: Get lots of RAM (2, Interesting)

GigsVT (208848) | more than 9 years ago | (#10675373)

I agree, the parent post is just a troll to see how gullible the moderators are. Apparently he proved his point. :)

Re:Advice: Get lots of RAM (4, Informative)

Anonymous Coward | more than 9 years ago | (#10675359)

The idea is that in order to write data to any sector on one of the drives, the sectors from six of the other drives need to be read, all XOR'd together, and then the result written to the remaining drive.
Um. No. Not if the RAID5 implementation is reasonably sane. Assuming all the drives are in good working order, all the software has to do is read the original block off the drive and the parity block off the appropriate drive, XOR those two values together along with the new data, and you have the new parity block.

IOW: two reads and two writes, not six reads and two writes. But yes, a large amount of RAM is a good idea. Of course, if a drive goes south, everything goes out the window and your performance will be shot until you replace the dud drive and everything resyncs.
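That update rule can be checked with toy byte values (the hex numbers here are made up for illustration): XOR-ing the old data back out of the parity and XOR-ing the new data in gives the same result as recomputing parity across the whole stripe.

```shell
# new_parity = old_parity XOR old_data XOR new_data
old_data=$(( 0xA5 )); old_parity=$(( 0x3C )); new_data=$(( 0xF0 ))
new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity: 0x%02X\n' "$new_parity"   # prints: new parity: 0x69
```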

Please! (0, Insightful)

Anonymous Coward | more than 9 years ago | (#10675186)

Do yourself a favour and buy some more or less cheap hardware RAID controllers. You won't regret it. Software RAID is nothing more than "showing it's possible".

Re:Please! (0)

Anonymous Coward | more than 9 years ago | (#10675306)

Yup, I agree. I have the same system, but I use a hardware controller (3ware 7506) and I would never go back to software RAID.

Avoid cheap raid controllers (3, Insightful)

Alan Cox (27532) | more than 9 years ago | (#10675316)

The cheap raid controllers are almost always software raid and not worth it. If performance is critical some of the higher end SATA and SCSI raid stuff is worth it, but a lot of that sucks too so do benches, take recommendations and don't believe in brand names...

Re:Please! (0)

Anonymous Coward | more than 9 years ago | (#10675339)

I'll agree. About a month ago, the local CompUSA had 240 GB drives on sale for $120. I bought 3, expecting a long-term software RAID-5 solution for all my storage needs. In the process of copying files over to the RAID after setup, it started butchering inodes.
There were certain directories on the RAID that couldn't be read (or removed). This was quite recently, on a P4 machine (2.6.8 kernel, I think).
Can't say this is the norm, but after losing a few fine albums to The Great RAID Migration of '04, I haven't been able to trust Linux software RAIDs.

Re:Please! (5, Informative)

Gherald (682277) | more than 9 years ago | (#10675346)

> Do yourself a favour and buy some more or less cheap hardware RAID controllers. You won't regret it. Software RAID is nothing more than "showing it's possible".

There is no such thing as a "cheap" hardware RAID 5 controller. Well there is, but they'll still set you back at least $120 and are crap.

There are RAID controllers from Highpoint, Promise, et al. that are card-based, but they are still CPU-bound (that is where the XOR really takes place). So they're really nothing more than a controller with a driver that does the calculations in the CPU. These cards are good for booting Windows from a software RAID (since that is essentially what they are) but not good for much else.

Most motherboards, especially those with only 2 RAID ports (whether IDE or SATA), are software-based as well. The nVidia nForce3 250 is one of the few notable exceptions.

But the bottom line here is: Linux Software RAID 5 is a logical approach if simple redundant mass storage is your main concern, and will save you at least $120. Also note that for RAID 0/1 it doesn't really matter if you go hardware or software since they aren't very processor intensive anyway. Pure software RAID 0/1 seems to be easier to set up in Linux (less mucking around with drivers) so it often makes sense to go with it for that reason alone.

Re:Please! (0)

Anonymous Coward | more than 9 years ago | (#10675411)

Absolutely incorrect. Promise SX4/SX4000 provides hardware XOR capability on the controller. XOR'ing never has to touch the host processor.

Ok. (0, Flamebait)

Adouma (826526) | more than 9 years ago | (#10675189)

I am concerned with reliability

Well then you probably shouldn't be using Maxtor HDDs. I've had 3 fail on me, each in separate builds, before I just said "screw it" and bought a better HDD.

A google search for "Maxtor Sucks" yields 15,500 hits. You get what you pay for.

Re:Ok. (2)

satoshi1 (794000) | more than 9 years ago | (#10675225)

And are all 15,500 sites about how Maxtor drives suck? I guarantee you they aren't. I've been using Maxtor drives for as long as I can remember, and not one has failed. I still have a Maxtor from five years ago and it's still running fine.

Re:Ok. (0)

Anonymous Coward | more than 9 years ago | (#10675271)

I've had one Maxtor drive fail on me, and one of my friends has had at least 2 fail. I'm pretty happy though - Maxtor replaced my 80gb drive for a 250gb :)

Re:Ok. (0, Flamebait)

zaqattack911 (532040) | more than 9 years ago | (#10675256)

UUuuh try googling that phrase with quotes around it genius...

maxtor sucks actually gives 233 search results.. in the world :)

That pretty much means fuckall for your argument.

Re:Ok. (1)

pairo (519657) | more than 9 years ago | (#10675274)

Results 1 - 10 of about 85,800 for slashdot sucks.
You get what you pay for. :-)

Works great (4, Informative)

AIX-Hood (682681) | more than 9 years ago | (#10675195)

Been doing this with 5 Maxtor FireWire 250 GB drives for a good while, and regular IDE drives for years before that. It's always been very stable, and I've had no problems with drives going bad, as long as you replace them quickly. I moved to FireWire, though, because it was much easier to see which drive went bad out of the set, and you could hot swap them.

Re:Works great (1)

doublebackslash (702979) | more than 9 years ago | (#10675257)

I agree, haven't had any problems, save for 1 drive crash, and I have migrated the array twice to new boxes.
I love the idea of FireWire, too, it makes perfect sense, 'cause if you are gonna have RAID reliability, then you might as well have hot-swap. (note to self: save up for FireWire enclosures).

I would have 2 years of uptime, but NOOOOO, 1 national power outage and 1 drive crash (perfect recovery). Uptime 71 days =( [I WAS at 260 at one point]
Just make sure that the drives are up to that much spin-time, and mix brands so that 2 crashes at once are less likely. Once you can afford it, get a hot spare or two for when one does fail.

Re:Works great (5, Interesting)

k.ellsworth (692902) | more than 9 years ago | (#10675350)

Normally a drive crash announces itself some time in advance--use the smartctl tool.
That tool checks the SMART info on the disk for possible failures.

I do a lot of software RAIDs, and with smartctl no drive crash has ever surprised me. I always had the time to get a spare disk and replace it in the array before something unfunny happened.

Do a smartctl -t short /dev/hda every week and a -t long every month or so...

Read the online page of it:

An example of a failing disc: es/MAXTOR-10.txt
An example of the same type of disc but with no errors: ples/MAXTOR-0.txt

Software RAID works perfectly on Linux... and combined with LVM things get even better.
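A sketch of that monitoring routine (the device name and the schedule are only examples, e.g. driven from cron):

```shell
# Kick off SMART self-tests: short weekly, long monthly.
smartctl -t short /dev/hda
smartctl -t long /dev/hda
# Later, review the self-test log and the overall health verdict:
smartctl -l selftest /dev/hda
smartctl -H /dev/hda
```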

Re:Works great (1)

Gherald (682277) | more than 9 years ago | (#10675259)

You can hot swap SATA. Definitely the way to go nowadays, seeing as they are only $1-5 more expensive than their IDE counterparts.

Re:Works great (0)

Anonymous Coward | more than 9 years ago | (#10675317)

Problem with SATA: the shielding of the connectors isn't specified at all in the standard. I.e. it's not really meant for drives outside your case. I.e. hot swapping gets harder again.

Re:Works great (2, Insightful)

Gherald (682277) | more than 9 years ago | (#10675381)

SATA is meant to be used internally, yes.

OP said he switched to FireWire for hot-swapping reasons alone; that is why I mentioned SATA as an alternative.

If you're bent on having an external RAID 5, you're probably safest going with a DIY gigabit ethernet NAS.

Re:Works great (1)

dnoyeb (547705) | more than 9 years ago | (#10675294)

Are you making one big RAID partition on each drive, and are you booting from this RAID partition?

Or do you still have a separate boot partition, and then the RAID partition is separate, and maybe even another separate swap partition?

I am hoping to do RAID, but it would be MUCH nicer if all I had was 1 big partition. Or more specifically, if using RAID did not force me into a custom partitioning scheme. Possible?
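For what it's worth, the usual answer at the time was a custom scheme, because the boot loader cannot read a RAID-5 array. A hedged sketch of a typical per-disk layout (sizes and names are arbitrary examples):

```shell
# Hypothetical per-disk partition layout for an md RAID-5 file server:
#   /dev/hda1   ~64 MB     /boot  (kept outside the RAID, or a RAID-1 mirror)
#   /dev/hda2   ~512 MB    swap
#   /dev/hda3   the rest   RAID-5 member (type 0xfd, Linux raid autodetect)
fdisk -l /dev/hda   # inspect the resulting partition table
```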

LVM (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10675198)

This LVM HOWTO [] might help you somewhat, not sure if it's RAID, but it's related!

Stick with hardware RAID (2, Informative)

chrispyman (710460) | more than 9 years ago | (#10675201)

Generally, for situations where you really need to make sure the data stays safe, I'd just stick with hardware. If you can spend that much on hard drives, I don't see why you can't spend the money on the hardware.

Though from what I hear, software RAID on Linux works decently.

Re:Stick with hardware RAID (5, Insightful)

Anonymous Coward | more than 9 years ago | (#10675218)

Actually, a big disadvantage to hardware RAID is what happens if your controller fails.

Consider--your ATA RAID controller dies three years down the road. What if the manufacturer no longer makes it?

Suddenly, you've got nearly 2 TB of data that is completely unreadable by normal controllers, and you can't replace the broken one! Oops!

Software RAID under Linux provides a distinct advantage, because it will always work with regular off-the-shelf hardware. A dead ATA controller can be replaced with any other ATA controller, or the drives can be taken out entirely and put in ANY other computer.
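A sketch of what that recovery looks like with the md tools (array and device names are illustrative):

```shell
# After moving the member disks to any other controller or machine,
# scan for md superblocks and reassemble the array:
mdadm --assemble --scan
# Or name the members explicitly:
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdb1 /dev/hdc1
```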

Re:Stick with hardware RAID, mod this up! (2, Interesting)

Anonymous Coward | more than 9 years ago | (#10675263)

So true. In many cases HW RAID doesn't offer any advantage over software RAIDs, and it's just one more part that can break and cost $$$ and time to replace.

Moderators, mod this up!

Re:Stick with hardware RAID (0)

Anonymous Coward | more than 9 years ago | (#10675333)

Not true. Of course it is standardized how RAID stores the data on individual drives. I.e. you can simply replace one controller with any other one which supports the same RAID level.

Or, have two backup RAID controllers. (4, Informative)

Futurepower(R) (558542) | more than 9 years ago | (#10675345)

This is a VERY big issue. We've found that Promise Technology RAID controllers have problems, and in our experience the company doesn't give tech support when the problems are difficult.


Re:Stick with hardware RAID (1)

dnoyeb (547705) | more than 9 years ago | (#10675268)

Is hardware supposed to be better? If so, why?

From what I read, software is just as good as hardware RAID these days, and sometimes better. But it's only what I read; I don't have first-hand info.

Re:Stick with hardware RAID (1, Informative)

kgasso (60204) | more than 9 years ago | (#10675284)

> Generally for situations where you really need to make sure
> the data stays safe, I'd just stick with hardware. If you can
> spend that much on some harddrives, I don't see why you can't
> spend the money on hardware.

Truer words were never spoken. I don't know the status of the more recent software RAID implementation in Linux, but I do know that bugs in the old one sent 2 arrays in 2 different mission-critical servers of ours down in a hailstorm of fire and brimstone.

We had one drive get booted from the array for having corrupted data, so the load on the other drives shot up a bit. We think that the increased load made the software RAID driver start lagging in writes to the disks, causing more corruption on another drive, until we were down to a steaming pile of rubble.

Happened 2 separate times on 2 different machines, as well. We're sticking to hardware from now on.

Re:Stick with hardware RAID (2, Informative)

mortonda (5175) | more than 9 years ago | (#10675314)

RAID 5 hardware tends to be rather expensive, and most RAID hardware tends to be "pseudo hardware", the drivers for the raid card make the CPU do the actual work anyway. Your 500Mhz CPU is faster than all but the most expensive RAID controllers anyway.

Stick with Linux RAID. It knows how to do it better.

Re:Stick with hardware RAID (2, Interesting)

fleabag (445654) | more than 9 years ago | (#10675322)

I would support the sentiment.

Back when I was using a PII-450 as a file server, I tried out software RAID on 3 x 80 GB IDE disks. It mostly worked fine--except when it didn't. Generally problems happened when the box was under heavy load--one of the disks would be marked bad, and a painful rebuild would ensue. Once, two disks were marked bad--I followed the terrifying instructions in the "RAID How-To" and got all my data back. That was the last straw for me... I decided that I didn't have time to watch rebuilds all night. Note that this may have been caused by my crummy Promise TX-100 cards; I never bothered to investigate.

I got an Adaptec 2400 IDE controller, and it hasn't blinked for two years. One drive failure, and the swap in worked fine.

If the data is important to you--go hardware. If you want to learn something, and have the time to play, then software is OK. Just run frequent backups! If the data is really important to you, buy two identical controllers, and keep one in the box for when the other craps out. Having a perfect raidset with no controller to read it would be annoying.

Re:Stick with hardware RAID (1, Insightful)

Anonymous Coward | more than 9 years ago | (#10675324)

make sure the data stays safe

Um, perhaps my understanding is wrong, but isn't RAID 5 intended solely for reliability (that is, for making the storage system tolerant of a single drive failure, and thus increasing its mean uptime)? If you want the data to stay safe, then use a backup, not a RAID.

In general (not a reply to your otherwise quite correct post, please don't feel browbeaten) I really wonder
a) why anyone would need the additional uptime in an in-home setting, and
b) what the point of a generic IDE RAID 5 is anyway. When one drive dies, the system keeps running with the hot spare. On a commercial array (or using hot-pluggable storage like FireWire) you can pull out the bad drive, put in a new one, and the system rebuilds that as the hot spare, all without any loss of service. But with regular ATA (and I guess SATA, although I'm not so sure) you can't hotswap, so you have to power down the array to swap in the new drive--at which point the reliability you got from RAID 5 is gone. Hmm, well, I suppose it's less downtime than you'd have restoring from backups, but it's questionable whether that's worth the ongoing performance hit the RAID 5 (even a hardware one) would cause.

Re:Stick with hardware RAID (0)

Anonymous Coward | more than 9 years ago | (#10675332)

-1 redundant [] my dear!

Re:Stick with hardware RAID (5, Informative)

kcbrown (7426) | more than 9 years ago | (#10675392)

Generally for situations where you really need to make sure the data stays safe, I'd just stick with hardware. If you can spend that much on some harddrives, I don't see why you can't spend the money on hardware.

I disagree with this. Here's why: the most important thing is your data. Hardware RAID works fine until the controller dies. Once that happens, you must replace it with the same type of controller, or your data is basically gone, because each manufacturer uses its own proprietary way of storing the RAID metadata.

Software RAID doesn't have that problem. If a controller dies, you can buy a completely different one and it just won't matter: the data on your disk is at this point just blocks that are addressable with a new controller in the same way that they were before.

Another advantage is that software RAID allows you to use any kind of disk as a RAID element. If you can put a partition on it, you can use it (as long as the partition meets the size constraints). So you can build a RAID set out of, e.g., a standard IDE drive and a serial ATA drive. The kernel doesn't care -- it's just a block device as far as it's concerned. The end result is that you can spread the risk of failure not just across drives but across controllers as well.
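As a concrete illustration of that flexibility (device names hypothetical), mirroring an IDE disk against a SATA disk is just:

```shell
# md only sees block devices, so buses and controllers can be mixed freely:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/sda2
```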

That kind of flexibility simply doesn't exist in hardware RAID. In my opinion, it's worth a lot.

That said, hardware RAID does have its advantages -- good implementations offload some of the computing burden from the CPU, and really good ones will deal with hotswapping disks automatically. But keep in mind that dynamic configuration of the hardware RAID device (operations such as telling it what to do with the disk you just swapped into it) is something that has to be supported by the operating system driver itself and a set of utilities designed to work specifically with that driver. Otherwise you have to take the entire system down in order to do such reconfiguration (most hardware RAID cards have a BIOS utility for such things).

Oh, one other advantage in favor of software RAID: it allows you to take advantage of Moore's Law much more easily. Replace the motherboard/CPU in your system and suddenly your RAID can be faster. Whether it is or not depends on whether or not your previous rig was capable of saturating the disks. With hardware RAID, if the controller isn't capable of saturating the disks out of the box, then you'll never get the maximum performance possible out of the disks you connect to it, even if you have the fastest motherboard/CPU combination on the planet.

Don't go with 3ware (0, Troll)

Codename_V (813328) | more than 9 years ago | (#10675206)

I don't know much about software RAID, but one thing I do know, don't go with 3ware.

Re:Don't go with 3ware (0)

Anonymous Coward | more than 9 years ago | (#10675236)

Just curious, why?

I've had great experience with 3Ware. I trust quite a bit of data to the 8000LP2 cards in pizza box IBM servers.


Re:Don't go with 3ware (1)

Moridineas (213502) | more than 9 years ago | (#10675240)

It must be obvious that you don't know much about software RAID, because 3ware isn't software RAID--they make hardware.

And they make quite good cards too, which are well supported in Linux, FreeBSD, etc. (I have an 8-port SATA RAID card in use atm)

Re:Don't go with 3ware (2, Interesting)

GigsVT (208848) | more than 9 years ago | (#10675244)

Have you tried the 9500 series? It looks much nicer than their older offerings.

We've run several 7810s, 7850s in the past, totalling quite a few terabytes. All in all it's not too awfully bad, but the cards do seem to have trouble with dropping drives that don't seem to have any real problems (they recertify with the manufacturer's utility often with no errors).

If you go 3ware though, get the hot swap drive cages from 3ware. They are expensive, but it makes it much nicer.

Re:Don't go with 3ware (3, Insightful)

suwain_2 (260792) | more than 9 years ago | (#10675300)

This is the worst possible type of advice. Do you have any reason for not using them? Maybe you've bought dozens and they've all blown up and burnt your house down, which would be a good reason not to buy 3Ware. Maybe you work for a competitor.

For all I know, you could have a very good reason. But if you tell someone to stay away from something, you should provide a reason. Especially if it's something that seems to have a really good reputation.

Re:Don't go with 3ware (1)

fire-eyes (522894) | more than 9 years ago | (#10675406)

This is slashdot. Not a legitimate tech forum.

*DO* go with 3ware (2, Informative)

Alan Cox (27532) | more than 9 years ago | (#10675330)

Except for the early 7000 series they are good cards and have decent performance too. I'm very, very happy with the 3ware I have, even though it's one of the quite early designs.

Re:Don't go with 3ware (2, Informative)

Dop (123) | more than 9 years ago | (#10675384)

We've had two different 3ware hardware RAID cards without any problems in the last 3 years.

I've done software RAID as well using Promise IDE controllers. Fortunately for us we never had a drive fail in the software RAID so I can't comment on how difficult it is to recover from a failure.

Interestingly enough, we ran some fairly intense iozone tests on both the hardware and software RAIDs with very little difference in performance (maybe that's why the parent poster doesn't like the 3ware stuff). But... we also ran these same tests with a fibre-channel SAN disk, again with very little performance difference.

Maybe it was a Bus limitation... I didn't have time to investigate it any further.

stick with hardware (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10675207)

Take it from me, stick with a hardware RAID 5. Reliability is through the roof, and cards are now around $300-500 for one with 128 MB of RAM. Since you spent $960 on the hard drives, you might as well trust their organization to something of equal quality.

my 2 cents

Did you read the RAID-Howto (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10675209)

or just glance at it while freebasing? All of your questions are answered in that one document.

Re:Did you read the RAID-Howto (2, Insightful)

tokki (604363) | more than 9 years ago | (#10675265)

The guy could be looking for people's experiences in addition to the technical documentation, which is not only smart, but the hallmark of a responsible sysadmin (which knee-jerk comments typically aren't).

don't use ext2 (1)

gp310ad (77471) | more than 9 years ago | (#10675213)

unless you want to wait forever for fsck

Re:don't use ext2 (4, Funny)

CrazyGringo (672487) | more than 9 years ago | (#10675282)

If you're a geek, it's a given that you'll have to wait forever for fsck.

hmmm (0, Offtopic)

johansalk (818687) | more than 9 years ago | (#10675216)

I really, really, really wish to know what he's serving that would need that many gigabytes.

Re:hmmm (0)

Anonymous Coward | more than 9 years ago | (#10675235)

pr0n, of course. And for his friends, no less.


Re:hmmm (1)

HiyaPower (131263) | more than 9 years ago | (#10675275)

Try a DVD collection. 250 DVDs at 8 GB each = 2 TB. I've had around 2 TB on a couple of my machines for a long time for that purpose.

Re:hmmm (1)

Lusa (153265) | more than 9 years ago | (#10675288)

pr0n cache


Here is a better question (3, Insightful)

PrvtBurrito (557287) | more than 9 years ago | (#10675222)

Is there a good resource for hardware/software RAID support on Linux? Tech support is always a challenge, and we have a number of 3ware 8-way and 12-way cards populated with 250 GB drives. We often see mysterious drops on the array that require reboots or even rebuilding the array. Royal pain in the ass.

Re:Here is a better question (4, Interesting)

GigsVT (208848) | more than 9 years ago | (#10675320)

I just posted in another thread about 3ware and mysterious drops of seemingly good drives. Even with the ultra-paranoid drive dropping, we have never lost data on 3ware.

Other than that, 3ware has been decent for us. We are about to put into service a new 9500 series 12 port SATA card.

I wish I could say our ACNC SATA to SCSI RAIDs have been as reliable. We have three ACNC units, two of them went weird after we did a firmware upgrade that tech support told us to do, lost the array.

We call tech support and they say "oh we didn't remember to tell you when you upgrade from the version you are on, you will lose your arrays".

Bottleneck (1)

FiReaNGeL (312636) | more than 9 years ago | (#10675224)

To me, a cheap 500MHz computer (which probably has 64-128MB of RAM, I'd guess) is going to choke on eight 250GB hard drives under software RAID 5. Some other posters suggested buying a hardware controller, and I agree. If you host stuff for your friends, you can always charge them a little extra to compensate.

Bottleneck is not CPU (2, Insightful)

mortonda (5175) | more than 9 years ago | (#10675336)

If all it does is serve files, it should do fine. The 500MHz CPU is not going to be a factor at all; in fact, the CPU will be idle most of the time. The real things to optimize in a file server are the ATA bus speed and hard drive latency.

Performance Tips (4, Informative)

Alan Cox (27532) | more than 9 years ago | (#10675353)

There are a few things that really help in some cases, but RAM isn't always one of them.

If you've got a lot of data that is read/re-read or written/re-read by clients, then RAM really helps; for streaming workloads that don't get many repeat accesses (e.g. running a movie editing suite), it might not help at all.

For performance it's often worth sacrificing a bit of space and going RAID 1. Again, it depends whether you need the space first or the performance first.

Obviously, don't put two drives of a RAID set on the same IDE controller as master/slave, or it'll suck. Also, if you can find a mainboard with multiple PCI busses, that helps.

Finally, be aware that if you put more than a couple of add-on IDE controllers on the same PCI bus, it'll suck - that's one of the big problems with software RAID 5 versus hardware, and less of a problem with RAID 1: you are doing a lot of repeated PCI bus copies, and that hurts at the speeds of today's drives.

I use RAID 1 everywhere; disks may be cheap, but you have to treat them as unreliable nowadays.
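Alan's PCI-bus point is easy to quantify with a back-of-envelope sketch. The numbers below are assumptions (classic 32-bit/33MHz PCI at a theoretical ~133MB/s, and a read-modify-write parity update crossing the bus four times per logical write), not measurements:

```python
# Rough ceiling on software RAID 5 write throughput when every disk
# transfer shares one 32-bit/33MHz PCI bus (~133 MB/s theoretical peak;
# real sustained throughput is lower).
PCI_BUS_MBPS = 133

def effective_write_mbps(bus_mbps, crossings):
    """Logical write bandwidth if each byte crosses the bus `crossings` times."""
    return bus_mbps / crossings

# Read-modify-write parity update: read old data + old parity,
# write new data + new parity = 4 bus crossings per logical write.
print(effective_write_mbps(PCI_BUS_MBPS, 4))  # 33.25
```

Even before protocol overhead, the shared bus caps an 8-drive software RAID 5 well below what the drives themselves could stream - which is why RAID 1, with far fewer crossings per write, suffers less.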

ahem (0)

Anonymous Coward | more than 9 years ago | (#10675232)

file:///usr/doc/raidtools-1.00.3/Software-RAID.HOWTO/Software-RAID.HOWTO.txt

Where I used to work. (3, Informative)

suso (153703) | more than 9 years ago | (#10675233)

I used to work at Kiva Networking [] and we used hardware RAID 5 on some machines and software RAID 1 and RAID 5 on others. Maybe it was just me, but the software RAID 5 disks always seemed to last longer, and we never had many problems with them. In fact, we had more problems getting the hardware RAID controllers to work with Linux, or with general bugginess, than anything else.

Re:Where I used to work. (1)

suso (153703) | more than 9 years ago | (#10675253)

By the way, these "machines" that I'm talking about are relatively heavily used mail servers, DNS servers, etc.

Works Great! (1)

knitterb (103829) | more than 9 years ago | (#10675246)

I have four IDE Maxtor 200GB drives on two Promise controllers and it's really very stable. I've run this for about the last four years as a home network share, with very good luck.

After setting it all up, I encourage you to pull the cable from a drive, boot, and make sure you know how to recover. Nothing compares to knowing what to do before a serious failure happens.

My raid5:

/dev/md0  576877984  506579124  40995112  93%  /data

knitterb@machine%> cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdk1[3] hdi1[2] hdg1[1] hde1[0]
      586075008 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

Good luck! :)

Software raid (0)

Anonymous Coward | more than 9 years ago | (#10675261)

Just be careful about what you use as your boot disk. You don't want the disk containing /boot to be mixed in with the RAID.

I'm doing this as well, my experiences: (2, Informative)

pythas (75383) | more than 9 years ago | (#10675269)

I have a smaller array, but it's been largely trouble-free.

However, when a drive did drop offline, it knocked out the entire IDE channel with it (unless each drive is on its own channel). It ended up taking the better part of a day to get everything back online, though without any data loss.

It even seems like any IDE hiccup can knock your array offline.

It's definitely cheaper than hardware RAID, and I haven't noticed any performance problems, but sometimes the stability of good old SCSI raid is something I miss. :(

Devil in the... (1)

buddha42 (539539) | more than 9 years ago | (#10675277)

Does anyone know if software RAID 5 in Linux uses a chunk of memory specifically for the write cache? Most hardware RAID 5 cards have memory for this, so that the card can tell the OS everything is done being written and then flush it out of cache at its convenience. If software RAID 5 actually waits for the writes to finish, it will be drastically slower (for writes).

Re:Devil in the... (2, Informative)

mortonda (5175) | more than 9 years ago | (#10675409)

"the card can tell the OS everything is done being written and then flush it out of cache at its convenience."

Which is absolutely horrible. This violates protocol: mail MTAs require that data be written to disk before they acknowledge delivery. They get that guarantee from the kernel's confirmation, but if the disk array lies about it, a power failure can lose mail even though the kernel assumed it had been synced properly.

Depends on the HDD interface (1)

mattbee (17533) | more than 9 years ago | (#10675278)

As an ISP building servers for our own use, we've never had problems with Linux RAID-5, but plenty of problems with recalcitrant disc interfaces that won't easily support hot-swapping and/or clean reporting of drive failures. Even 3ware cards (until relatively recent firmware revisions) could cause system crashes before they cleanly reported a drive as failed with Linux RAID-5 sitting on top (and yes, Linux software RAID-5 is much faster than the hardware RAID-5 on their 8000-series cards). So my advice is just to test a failure: try unplugging a drive from a test array while it's running and see what happens. The "plug & play" aspect might not work as advertised, but you shouldn't lose any data through it (we never have).

Be careful with "hotplugging" (1)

mortonda (5175) | more than 9 years ago | (#10675355)

IDE interfaces, and some SATA interfaces, are not designed to be hotplugged. There are special electrical circuits on hot-pluggable SCSI drives.

Once you yank the plug on an IDE drive, you *must* power down the system before plugging it back in. Sometimes yanking the plug hangs the kernel.

Power down the system, pull a drive, and then start it up. It should detect the missing drive on bootup. Alternately, follow the instructions in the Linux RAID HOWTO. You have read that, haven't you?
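If you'd rather not pull hardware at all, the md layer can simulate a failure in software. A sketch with mdadm - the array and device names here are examples, not your actual layout:

```shell
# Mark one member faulty and watch the hot spare (if declared) get
# rebuilt onto automatically by the kernel.
mdadm /dev/md0 --fail /dev/hde1
cat /proc/mdstat                 # shows the degraded set and rebuild progress

# Once the rebuild finishes, pull the "failed" disk out of the array
# and re-add it; it becomes the new spare.
mdadm /dev/md0 --remove /dev/hde1
mdadm /dev/md0 --add /dev/hde1
```

This exercises the whole failure/rebuild path without risking a hung IDE bus.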

Works Great! (2, Informative)

ogre7299 (229737) | more than 9 years ago | (#10675281)

I've been using RAID 5 with three 18GB SCSI drives for about 6 months now, and it works fast and reliably.

The best advice I can give is to make sure each drive has its own channel if you are on parallel ATA (you didn't specify SATA or regular ATA; with SATA, every drive gets its own channel by design). If you have more than one IDE device on a channel in a RAID, performance will suffer, because IDE can't access both devices on a channel simultaneously.

software raid it fine (0)

Anonymous Coward | more than 9 years ago | (#10675289)

The cheap RAID controllers are a waste of money; if you want blinding performance, buy the good stuff (read: 400 or 500 bucks). As for software RAID, it's fine: it works and it's fairly painless, but it is no substitute for a good backup. The performance difference between the little Promise cards and software RAID is nil.

Vinum with FreeBSD (3, Informative)

Anonymous Coward | more than 9 years ago | (#10675290)

While it's not Linux, I've been using Vinum with FreeBSD for about 3 years with RAID 5 and have never had any problems. My current box is an old VIA 600MHz C3 running FreeBSD 4.8 with a measly 128MB of RAM. As far as benchmarks go, my RAID seems to blow away all of the cheap hardware cards performance-wise as well.

BTW, I switched from Linux to FreeBSD for the server years ago for the stability.

Go hardware... (1)

loony (37622) | more than 9 years ago | (#10675293)

If you've got the money, go hardware. I have two systems that are identical, except that on one I broke one of the IDE connectors on the 3ware controller and had to go with software RAID instead. Each has six 120GB disks, all on separate channels. There is no problem with stability, and both setups have survived at least two disk failures each.

Unfortunately, performance is another issue: I get three times the throughput on the 3ware controller. So the $300 I spent on the (used) controller was well worth it.


NOOOOOOO!!!! (-1, Flamebait)

onyxruby (118189) | more than 9 years ago | (#10675305)

Software RAID, are you f'n nuts? Seriously, why do you want to do that? Check out eBay, Pricewatch, or Froogle and buy yourself a nice 3ware IDE RAID controller; they have Linux support to boot. Don't bother with Promise, their equipment is crap. Your data, and that of your friends, is worth more than the cost of trying to recover from a software RAID failure. Do it right: get the controller and let it handle everything for you. Software RAID is a half-assed measure that will only cause you heartache.

Setting up software RAID is really easy.. (0)

Anonymous Coward | more than 9 years ago | (#10675307)

Enable RAID support in the kernel and install raidtools. Then raidhotadd, raidhotremove, raidstart, raidstop, cat /proc/mdstat, /etc/raidtab, and the 0xFD ("Linux raid autodetect") fdisk partition type are the keywords (on Debian).
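For the eight-drive setup in the question, an /etc/raidtab might look something like the sketch below - one possible layout with seven data disks and one hot spare. The device names are examples (they depend on how your PCI ATA cards enumerate); after writing the file, the array is created with `mkraid /dev/md0`:

```text
# /etc/raidtab sketch: 7-disk RAID 5 plus one hot spare (names are examples)
raiddev /dev/md0
        raid-level            5
        nr-raid-disks         7
        nr-spare-disks        1
        persistent-superblock 1
        parity-algorithm      left-symmetric
        chunk-size            32
        device                /dev/hde1
        raid-disk             0
        device                /dev/hdg1
        raid-disk             1
        device                /dev/hdi1
        raid-disk             2
        device                /dev/hdk1
        raid-disk             3
        device                /dev/hdm1
        raid-disk             4
        device                /dev/hdo1
        raid-disk             5
        device                /dev/hdq1
        raid-disk             6
        device                /dev/hds1
        spare-disk            0
```

With nr-spare-disks set, the md driver starts rebuilding onto the spare on its own when a member is kicked out.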

Don't run software raid... (0)

TyrranzzX (617713) | more than 9 years ago | (#10675308)

It's a poor solution for RAID, since if the OS goes, there goes your RAID. If you use hardware, at least the controller will autodetect the array on its own.

Maybe I'm wrong and people have a good solution for it, but still. Personally, if it's a 500MHz box, I'm guessing you're going to be using ATA133 PCI cards with it, and frankly, RAIDing those won't net you much more speed. SCSI is what gets you the speed; as it is, you're looking at a max of maybe 50 or 60MB/s output.

I'd just stick a 100Mbps network card in it and hook up all 8 drives via two IDE ATA133 PCI cards (two controllers per card), with the OS installed on a RAID partition spread across four of the disks and backed up onto the other four. Set up a cheap batch script (or a backup program) that copies all the files onto the second set of disks when the machine isn't being used, and scan for data errors/loss every week or so. That works if all it's doing is serving files. Additionally, if you set the "primary" drive as master and the "backup" drive as slave, and only serve data from the primary drives, you'll have no problem.

Really, I'd think RAID 5 across 8 drives would be a bit excessive and/or inefficient, considering the PCI bus only gives you so much bandwidth. Now, if we were talking SCSI RAID on 64-bit/66MHz PCI, then your solution would be a good one.

Don't screw around - hardware is better. (5, Informative)

ErikTheRed (162431) | more than 9 years ago | (#10675309)

Software RAID is fine for simple configurations, but if you want to "do it right" - especially considering that you just dropped about a kilobuck on HDDs - go hardware. A good, reasonably priced true hardware RAID controller that fits the bill is the 3ware Escalade 7506-8. It has 8 IDE ports, one for each drive; you don't want to run two RAID drives in master/slave mode off a single IDE port, as it will play hell with your I/O performance. It's true hardware RAID, so you don't have to worry about big CPU overhead, and you can still boot with a failed drive (a major disadvantage of software RAID if your boot partition is on a RAID volume, certain RAID-1 configurations excepted). You can buy them for under $450; one price [] is $423.48 (I have no relationship with them other than noticing that their prices tend to be decent).

One Drive per controller (3, Insightful)

mcleodnine (141832) | more than 9 years ago | (#10675348)

Don't hang a pair of drives off each controller: get a truckload of PCI ATA cards, or a card with multiple controllers, and never slave a drive. (No, I do NOT know what the correct PC term is for this.)

Also, give 'mdadm' a whirl; it's a little nicer to use than the legacy raidtools-1.x. (Neil's stuff really rocks!)

Software RAID 5 has been working extremely well for us, but it is NOT a replacement for a real backup strategy.
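With mdadm, the whole eight-drive array from the question can be created in one command. A sketch - the device names are guesses for a machine with several PCI ATA cards, so check them against your own hardware first:

```shell
# Create a RAID 5 with seven data disks and one hot spare (names are examples)
mdadm --create /dev/md0 --level=5 --chunk=32 \
      --raid-devices=7 --spare-devices=1 \
      /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1 \
      /dev/hdm1 /dev/hdo1 /dev/hdq1 /dev/hds1

# Watch the initial parity build (and any later rebuild)
cat /proc/mdstat

# The kernel rebuilds onto the hot spare automatically when a member
# fails; the monitor mode just makes sure you hear about it.
mdadm --monitor --daemonise --mail=root /dev/md0
```

The monitor daemon answers part of the original question: the rebuild onto the spare happens on its own, and mdadm can mail you when it starts.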

Hot spares (2, Informative)

lathama (639499) | more than 9 years ago | (#10675351)

Declare at least one hot spare. I would declare two for your setup but YMMV.

nr-spare-disks 1

device /dev/hdh1
spare-disk 0

RAID5 (2, Interesting)

mikewelter (526625) | more than 9 years ago | (#10675361)

What will you connect eight drives to? Four PCI ATA controllers? I have eight 200GB drives on my data server behind a 3ware RAID controller, and it has worked wonderfully for 18+ months. I have had a drive fail (due to insufficient cooling), and the system didn't even hiccup. I also have a software RAID system at a client's location. Whenever there is a power failure, the system comes back up fine; however, because of the unclean shutdown, the software RAID resynchronizes one of the disks, which absolutely eats the processor for 16 hours at 98-100% utilization. Fiddling with the /proc parameters is no help. I think this is a bug - what could it be doing for 16 hours?

RAID-5 with 6x 300GB Maxtors (1)

EvilMonkeySlayer (826044) | more than 9 years ago | (#10675364)

I built a fileserver for work on the cheap; it's been up for about a month now without any problems.

You have two choices for software RAID under Linux: raidtools and mdadm.

Raidtools is a bunch of separate tools for setting up and looking after a software RAID under Linux; mdadm is essentially its replacement, with everything built into the single mdadm program.
I personally find raidtools easier to use at the command line, but mdadm is the future. Also of note: if you want to set it up easily, without the hassle, just use Webmin - its RAID setup is a breeze.

RAID isn't totally reliable (1)

Anubis333 (103791) | more than 9 years ago | (#10675372)

If you are concerned with reliability, don't store everything in one device that can get taken out by fire/electricity/anything. Parity also only protects against a single failure: I had two drives go bad in my RAID 5 and I was screwed. Also, Maxtor drives aren't reliable, not like SCSI drives.

As far as IDE drives go, Seagate makes 200GB drives with 8MB cache and 5-year warranties, and they were at Fry's last week for 50 bucks (after rebate).

My Configuration - RAID5, Mandrake 10, Athlon 2000 (1)

Luciq (697883) | more than 9 years ago | (#10675378)

I have a very similar setup, but I'm currently only using three hard drives. I'm running Mandrake 10 on an Athlon XP 2000 with three Western Digital 250GB drives. That gives me 500GB of usable space, which I'll probably upgrade to ~1TB (5 drives) in January. So far I've had no problems. I have a UPS, but I've pulled the plug several times for testing purposes with no ill effects (I tested the setup for several weeks before putting real data on it). I think a big reason this setup works for me is that the server is never under much stress (serving 3-4 household PCs), and it's only used as a file server.

I use computers for everything: work projects, personal projects, audio/video editing, etc. I never delete anything and would be devastated by a significant data loss. So I use RAID 5 with monthly backups (unfortunately backups are still important), and all important data is saved on the storage server. Currently, I'm only using about 250GB of my 500GB server, with most of the space consumed by data from my audio/video projects.

I'd recommend against sticking 8 drives in one box. If you're set on using 8, consider putting a few in external enclosures, and make sure you keep cool air moving across them to avoid shortening their lifespan. Unless you're actually going to use 1750GB of space, I suggest using five drives or so in the server and keeping the remaining three for backups. Also, as was mentioned in a previous post, your system may not be beefy enough to handle an 8-drive RAID 5 array. Let us know what you end up doing.

My Vote: Use Hardware for RAID 5 setups (1)

Whizzmo2 (654390) | more than 9 years ago | (#10675379)

As other posters have mentioned, software RAID is fine for RAID 0, 1, and 0+1. As you move to RAID 3 [], RAID 5 [], and RAID 6 [], however, the processing requirements go up quite a bit.

A SATA RAID 5 card [] with a hardware XOR engine and a DIMM slot for cache might be a cost-effective option for you. (It goes for ~$180 on Pricewatch [], or ~$240 on Dealtime [].)

Oh, and I would have gone with HGST [], Western Digital [], or Seagate [] for the drives... but I suppose hardware failure is what RAID 5 is for :)

CONFIGURE IT RIGHT!! small parts... (5, Informative)

brak (18623) | more than 9 years ago | (#10675389)

You will get responses from people with good and bad experiences, but they are all colored by their own particular cases. After seeing what can happen with dozens of machines (8-drive and 4-drive) running Linux software RAID 5, here is some concrete advice.

First, ensure that all of the drives are IDE masters. Don't double up masters and slaves.

Secondly, DON'T create one gigantic partition on each of the 250s and then RAID them together; you will get bitten, and bitten hard.

Here's the skinny...

1) Ensure that your motherboard/IDE controllers will return SMART status information. Install the smartmontools package, configure it to run weekly self-tests, and make sure smartd is running so that you get alerted to potentially failing drives ahead of time.

2) Partition your 250GB drives into 40 GB partitions. Then use RAID5 to pull together the partitions across the drives. If you want a giant volume, create a Linear RAID group of all of the RAID5 groups you created and create the filesystem on top of that.

Here's why, this is the juice.

To keep it simple, let's say there are 20 sectors per drive. When a drive gets an uncorrectable error on a sector, it will be kicked out of the array. By partitioning each drive into 5 or 6 partitions, let's say hd(a,c,e,g,i,k,l)1 form one of the RAID5 groups, which covers sectors 1-4 (out of the fake 20 we made up earlier).

If sector 2 goes bad on /dev/hda1, Linux software RAID5 will kick /dev/hda1 out of that array. Now, it's likely that sector 11 might also be bad on /dev/hdc - but sector 11 falls in a different RAID5 group, so only with the giant-partition layout would that count as losing a second disk out of the same array during a rebuild.

By partitioning the disks you localize the failures a little, thus creating a more likely recovery scenario.

You wind up with a few RAID5 sets that are more resilient to multiple drive failures.

If you are using a hot spare, your rebuild time will also be less, at least for the RAID5 set that failed.

I hope this makes sense.

My advice to you is to bite the bullet and simply mirror the disks. That way, no matter how badly they fail you'll have some chance of getting some of the data off.
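The failure math above all comes down to RAID 5's XOR parity: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt, but two missing blocks in the same group cannot. A toy sketch with made-up four-byte "blocks" (this illustrates the arithmetic, not the real md internals):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (this is RAID 5 parity)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three members
parity = xor_blocks(data)            # parity block on a fourth member

# One member lost: XOR the survivors with the parity to rebuild it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]

# Lose a second member from the same group and there is no equation
# left to solve - which is exactly the double-failure-during-rebuild
# scenario the partitioning trick above tries to dodge.
```

Splitting the disks into several smaller RAID 5 groups means two bad sectors on two different disks only doom an array if they land in the same group.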

OS X software raid (1)

v1 (525388) | more than 9 years ago | (#10675393)

After some initial hiccups with the system in Mac OS X 10.2, the new 10.3 seems to have a usable software RAID solution. It supports just about any method of connecting the drive to the computer (short of a network) and most any kind of storage device. It doesn't support RAID 5 yet, though; just stripe and mirror. I've got four mirrored volumes up on my server and have had few problems with them. Rebuilding must be done with the volume offline. It's not a perfect solution yet, but it's a nice free alternative for your OS X systems.

I've also tried CharismacRAID, and omg... STAY AWAY from it. After the array crashed for the third time for no apparent reason, I was experimenting with the software to test its reliability when it proceeded to try to LOW-LEVEL REFORMAT my BOOT DRIVE (yes, while I was booted from it!). It actually managed to zero the partition table and boot blocks before asking the OS to lay down a new partition, at which point the OS thankfully gave it the bird. Kudos to DiskWarrior for being able to salvage the volume, and good riddance to CharismacRAID. (aka "AnubisRAID", fyi)

First Mistake (1)

fire-eyes (522894) | more than 9 years ago | (#10675395)

The first mistake is the Maxtor drives. They're the worst drives I've seen, and I'm talking about current models.

Yes, you have eight of them, but do you feel like replacing quite a few over the next few years?

I run software RAID 1 here at home on a 2.6 kernel, and I can say it is solid. Software RAID in general under Linux appears to be quite solid.