
RAID Vs. JBOD Vs. Standard HDDs

kdawson posted more than 7 years ago | from the media-center-storage dept.

Data Storage

Ravengbc writes "I am in the process of planning and buying some hardware to build a media center/media server. While there are still quite a few things on it that I haven't decided on, such as motherboard/processor, and windows XP vs. Linux, right now my debate is about storage. I'm wanting to have as much storage as possible, but redundancy seems to be important too." Read on for this reader's questions about the tradeoffs among straight HDDs, RAID 5, and JBOD.

At first I was thinking about just putting in a bunch of HDDs. Then I started thinking about doing a RAID array, looking at RAID 5. However, some of the stuff I was initially told about RAID 5, I am now learning is not true. Some of the limitations I'm learning about: RAID 5 drives are limited to the size of the smallest drive in the array. And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still read the original size. For example, say I have 3x500gb drives in RAID 5 and over time replace all of them with 1TB drives. Instead of reading one big 3tb drive, it will still read 1.5tb. Is this true? I also considered using JBOD simply because I can use different size HDDs and have them all appear to be one large one, but there is no redundancy with this, which has me leaning away from it. If y'all were building a system for this purpose, how many drives and what size drives would you use and would you do some form of RAID, or what?


555 comments


Two words: RAID 0 (5, Funny)

Richard McBeef (1092673) | more than 7 years ago | (#19389577)

Nothing can possibly go wrong. Especially if you use, like, 10 disks.

Re:Two words: RAID 0 (1, Funny)

erik umenhofer (782) | more than 7 years ago | (#19389633)

Place it under a rain gutter as well. Uptime and data retention increase 59.544%.

Re:Two words: RAID 0 (2, Funny)

Richard McBeef (1092673) | more than 7 years ago | (#19389679)

Uptime and data retention increase 59.544%.

Actually, it's 59.5449999%. So it's 5 nines no matter how you look at it.

Re:Two words: RAID 0 (0)

Anonymous Coward | more than 7 years ago | (#19389699)

my two words:
Xbox 360. If you're debating Windows vs Linux, go with Windows. The size (loudness) vs performance issue will disappear as you can now invest in your big personal computer. Most media center experiences are awesome at first but die down quickly. This way, if you change your mind in a month, hey, you're still using all that power personally as opposed to it idling away.

Nuh-Uh (5, Funny)

rustalot42684 (1055008) | more than 7 years ago | (#19389827)

You're wrong. What if:
  • a psychopath hires a hitman to destroy his media center? The hitman comes in, destroys all 10 drives with a large axe, and leaves.
  • a crazed velociraptor claws open the case and destroys all 10 drives, then mauls him.
  • His power supply suffers extreme spontaneous combustion and explodes all 10 drives.
  • Steve Ballmer is angered by that fucking pussy Eric Schmidt and throws a chair which flies across the country and smashes into his computer.
  • a meteor crashes into the house and destroys all the drives, but leaves everything else untouched.
  • he becomes prone to sleep-sysadmining and accidentally formats them all.
  • His house is the target of a nuclear attack. (Didn't think of that one, did you, bitches?)

Raid 0 won't protect you, man!

Re:Nuh-Uh (3, Funny)

rustalot42684 (1055008) | more than 7 years ago | (#19389873)

And, of course, by RAID 0, I mean RAID 1. Must click 'Preview' next time.
Perhaps a Post-It note on the monitor to remind myself.

Re:Nuh-Uh (1)

Red Flayer (890720) | more than 7 years ago | (#19390095)

Oh yeah?

What if a giant hive mind sends its fearsome formians to chew up his system in order to feed the silicon cravings of the queen ant?

Well, guess what, ants don't like RAID of any flavor, no more than they like Black Flag. But Henry Rollins has nothing to do with this.

Re:Two words: RAID 0 (5, Funny)

CajunArson (465943) | more than 7 years ago | (#19389883)

Two words: RAID 0 Nothing can possibly go wrong. Especially if you use, like, 10 disks.

For the love of God and all that's holy will someone mod this 'Funny' instead of Informative? I get the joke, but there's always somebody who won't!
(Then again... maybe people who won't oughta make a 10 disk RAID 0, hell mod it insightful sucka!)

Re:Two words: RAID 0 (1)

JWSmythe (446288) | more than 7 years ago | (#19390147)

MMmm.. a 10 disk RAID 0. It's probably good for video production work. I don't want to be standing by when THAT fails. :)

    "But, all my work was on there!"

    HAHAHAHAAAA

Re:Two words: RAID 0 (1, Funny)

jollyreaper (513215) | more than 7 years ago | (#19390293)

So long as we're giving crackhead advice.....

D00D!!! The best most l33t way to back this shiznit up is to do a pkzip span across multiple floppies. Use 5.25 low density to show you know old school. Backing up a terabyte should only take you, what, long enough that the first disks are dying of bit rot before the last disks are done? RAWK!!!!!

PS you can increase the write speed of your hard drives if you pry off the tops and shoot in some WD-40. If your priest blesses it, you get WWJD-40 and that's even better!

go for RAID-5 (1)

geedra (1009933) | more than 7 years ago | (#19389621)

I would go RAID 5. But, let's face it, you're gonna have to bite the bullet on this one...either get the bigger disks you want now, or plan on rebuilding the array down the road (and losing all your data, unless you have another mass storage device that can hold it).

Re:go for RAID-5 (1)

paradxum (67051) | more than 7 years ago | (#19389721)

I'd suggest getting a RocketRaid Card.

You can expand the array without losing/migrating data off of the array. You are still limited to the size of the smallest disk in RAID 5, but it helps a lot.

Also, you can use LVM on linux to help with mapping the space.

I have done this (for a personal File server.)

Re:go for RAID-5 (2, Informative)

nxtw (866177) | more than 7 years ago | (#19390219)

I'd suggest getting a RocketRaid Card.

Why? Linux software RAID (md) does a fine job with excellent performance, assuming you are not saturating the PCI bus (solution: use PCI Express or PCI-X instead). With sufficient bus bandwidth, software RAID outperforms the majority of soft RAID (RocketRAID) and hardware RAID controllers.

You can expand the array without losing/migrating data off of the array. You are still limited to the size of the smallest disk in RAID 5, but it helps a lot.

You can do this with md without having to deal with quirky RAID hardware that leaves you in the cold if you have a controller failure.

Re:go for RAID-5 (1)

Richard McBeef (1092673) | more than 7 years ago | (#19389735)

plan on rebuilding the array down the road (and losing all your data, unless you have another mass storage device that can hold it).

Eh, heh. What? Seriously. What the fuck are you talking about? Lose a disk in a RAID 5 array AND lose your data? What blog did you read that on?

Re:go for RAID-5 (1)

geedra (1009933) | more than 7 years ago | (#19389749)

Read it again. I said he'll have to rebuild the array if he wants to go to bigger drives and actually utilize the disk space.

Re:go for RAID-5 (1)

geedra (1009933) | more than 7 years ago | (#19389779)

I should have said re-stripe instead of rebuild. Does that make more sense?

Re:go for RAID-5 (1)

Richard McBeef (1092673) | more than 7 years ago | (#19389799)

Sorry. When I read "rebuild", I only think of rebuilding an array with a defective disk, not starting from scratch as it were.

LVM is your friend (2, Informative)

jmorris42 (1458) | more than 7 years ago | (#19389807)

> either get the bigger disks you want now, or plan on rebuilding the array down the road

Not at all, these days one does have better options than rebuilding a blank array. Read up on LVM, it is powerful stuff.

Replace the drives in the array one at a time, allowing time for the array to rebuild. Then you can grow the volume to make use of the extra capacity. Yes it will require some planning and will probably take a week to slowly merge in the new set of drives, but it sure beats a bare metal restore because you can still be recording and watching video while all this rebuilding and resizing is happening.

Don't really know how much of the above applies to Windows, haven't seriously used it in a decade; so someone else will have to supply details on its volume management flexibility.

Re:go for RAID-5 (2, Informative)

sciguy125 (791065) | more than 7 years ago | (#19389833)

My understanding is that linux allows you to grow an array. Yes, it will; I just checked the manpage for mdadm. For RAID 5, and probably similar for others, the procedure is to "fail" each of your disks by unplugging them, then replacing them with the bigger disks (this part isn't in the man page). You just have to make sure that you allow the RAID to rebuild after each "failure". Once they're all replaced, you can resize the array to fill up the new, larger drives. Then, of course, you'll have to resize your filesystem. During this process, however, your RAID will be vulnerable to real disk failures.
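For reference, a hedged sketch of that replace-and-grow procedure with mdadm. The device names are invented, and these commands are destructive, so treat this as an outline rather than a recipe:

```shell
# Sketch: grow a RAID 5 by swapping members one at a time.
# /dev/md0 and the /dev/sdX1 names below are examples only.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire one old disk
# ...physically swap in the larger disk, partition it, then:
mdadm /dev/md0 --add /dev/sdd1        # rebuild onto the new, bigger disk
# repeat for each member, waiting for each rebuild to finish, then:
mdadm --grow /dev/md0 --size=max      # claim the extra capacity
resize2fs /dev/md0                    # finally, grow the ext3 filesystem
```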

Re:go for RAID-5 (2, Interesting)

bl8n8r (649187) | more than 7 years ago | (#19390139)

> I would go RAID 5.

Yep. And if you boot something like Knoppix you can keep the OS on cdrom and storage on the raid device. Samba config goes on a usb key. I have two servers in a corporate environment running software raid 5 and booting knoppix. Updates are nearly impossible, but you can keep the updates on the usb key (tzdata) and untar right over the top of UNIONFS after boot. Either that or just download a fresh Knoppix version (I've gone through 3 versions now). The software raid in Linux is surprisingly stable. I had one drive go bad on one of the servers a couple months ago. mdadm emailed me, I informed the dept. of the downtime, and at the end of the day I replaced the drive and rebuilt the array. Everything worked like the howto said. Very nice.

Re:go for RAID-5 (1)

bluephone (200451) | more than 7 years ago | (#19390321)

He wouldn't have to lose it. Either build the new RAID and copy straight over, or do it the unfun way. When he buys the new disks, copy some of the data from the RAID to one disk until it's full; repeat until you have a copy of all the data. Reformat the RAID disks to normal drives, recopy the data back to the now un-RAIDed old disks. Recreate the RAID with the new ones, and re-recopy the data back. Not fun, but it works.

Do some research first? (5, Informative)

bi_boy (630968) | more than 7 years ago | (#19389639)

Wikipedia has a very informative article regarding RAID and the various levels, in fact here it is. http://en.wikipedia.org/wiki/RAID [wikipedia.org]

Re:Do some research first? (3, Informative)

Anonymous Coward | more than 7 years ago | (#19390315)

While you're there, check out how http://en.wikipedia.org/wiki/ZFS [wikipedia.org] can make most of the issues other posters are pointing out irrelevant, or at least nothing to be worried about.

While Solaris might be a dirty word among the Slashdot crowd, if all the OP needs is a way to store a bunch of files, ZFS is an excellent solution. Check out http://www.opensolaris.org/os/community/zfs/whatis/ [opensolaris.org] and in particular the demos linked on the left side.

Then, if you're still not convinced how appropriate ZFS might be for a somewhat clueless user, read about how it can save your ass from flaky hardware and data corruption: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta [sun.com]

Duh (5, Insightful)

phasm42 (588479) | more than 7 years ago | (#19389647)

I'm wanting to have as much storage as possible, but redundancy seems to be important too.
Given that RAID 0 and JBOD give you no redundancy, RAID 5 is the only one you listed that has redundancy.

That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.

Re:Duh (3, Insightful)

geedra (1009933) | more than 7 years ago | (#19389685)

That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.

A good point. Consider, though, that most people don't run terabyte-size tape backup at home. It's not like it's business critical data, so RAID-5 is probably sufficient.

Re:Duh (2, Informative)

TheRaven64 (641858) | more than 7 years ago | (#19389767)

ZFS could be the solution to both problems. You can mix different RAID types across the same volume (no redundancy for unimportant stuff, like /tmp, mirror really important stuff, and RAID-Z the rest). It can also do snapshots, which gets rid of a big part of the reason for needing backups on top of RAID (accidental deletion). Of course, a virus, or kernel bug could still wipe out your data, so you still need backup stuff, you're just less likely to need to pull out the backups.
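A hedged sketch of what that looks like in practice. The pool name (tank), dataset names, and Solaris disk IDs below are all invented for illustration:

```shell
# Sketch of the ZFS setup described above; names and disk IDs are made up.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # RAID-Z across three disks
zfs create tank/media                          # dataset for the media files
zfs snapshot tank/media@2007-06-04             # cheap snapshot guards against rm
zfs rollback tank/media@2007-06-04             # undo an accidental deletion
```

Snapshots are copy-on-write, so keeping one around costs almost nothing until the data underneath it changes.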

Re:Duh (2, Insightful)

xigxag (167441) | more than 7 years ago | (#19389831)

There was just a Slashdot discussion on ZFS a few days ago [slashdot.org]. The OP should read it; it pretty much covers his/her questions.

Re:Duh (1)

cibyr (898667) | more than 7 years ago | (#19390323)

I'll also put my voice behind ZFS. It's the coolest storage-related thing I've heard of in ages, and makes me wish I had a spare file server to install Solaris on.

Re:Duh (1)

Professor_UNIX (867045) | more than 7 years ago | (#19390021)

That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.
RAID-5 is "good enough" for home use though. If you're paranoid then build a second box that just backs up the first via rsync or rdiff-backup. The second box doesn't necessarily even have to have a RAID array, you could LVM a bunch of disks together. If the backup array dies then oh well, just install a new drive and rsync from your production server again. Personally I don't even bother to back up my MythTV backend's 600GB RAID-5 array (wow, that was really massive when I built it years ago, now I could back up the entire thing on one drive) since the only thing I keep on it are recorded television programs. Sure, I'd be pretty heartbroken if I lost 2 or 3 years of Law and Order SVU, but I'll get over it and can just buy the DVDs.

Don't worry about losing your media files (5, Funny)

NotQuiteReal (608241) | more than 7 years ago | (#19389655)

You can just download them again, right?

Re:Don't worry about losing your media files (3, Insightful)

noidentity (188756) | more than 7 years ago | (#19390143)

Even though it was modded funny, it's good advice: if most of your data is not something you created on your own, either directly or indirectly as a part of using the computer, it's possible to replace it from an outside source if lost. All you really need a backup of is your unique data.

Re:Don't worry about losing your media files (1)

egr (932620) | more than 7 years ago | (#19390309)

My friend does it when he reformats his HDD, since it's much faster and cheaper.

Go JBOD (0, Troll)

ringfinger (629332) | more than 7 years ago | (#19389669)

I know there was an article in linuxjournal on how to build a 1 TB JBOD storage server out of commodity drives and a normal PC chassis. I'd look back and do that -- with multiple sets of discs as backup. I can't remember the issue, I believe it may have been Oct 2005. good luck -- http://30days.itious.com/ [itious.com]

Re:Go JBOD (1)

JWSmythe (446288) | more than 7 years ago | (#19390121)

  You don't need an article, it's real easy.    I have a 440Gb array beside me, and it's only that small, because it was filled with drives I happened to have laying around. :)   5 120Gb IDE drives.

  BTW, in the original posting, his math was wrong. He indicated 3 1TB drives should give him 3TB. Wrong. RAID 5 is S=C*(N-1)

S = Size, C = Capacity, N = Number of drives.

  Throw away a little from S for overhead and fudged numbers for manufacturers.

  So (for ease of math) in a RAID 5 configuration:

  3 1Tb = 2Tb ,  5 1Tb = 4Tb ,  6 1Tb = 5Tb

  But, if you were to take the same 6 drives, and make two RAID5's out of them,  you'd only have 4Tb.  RAID 50 can be two RAID5's, which are then set up as a RAID0 between the existing devices.
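The capacity arithmetic can be sketched as a tiny shell function (the function name is mine):

```shell
# RAID 5 usable capacity: S = C * (N - 1), sizes in whole GB.
raid5_size() {
    cap=$1      # capacity of the smallest drive, GB
    n=$2        # number of drives in the array
    echo $(( cap * (n - 1) ))
}
raid5_size 1000 3   # three 1TB drives  -> 2000 GB usable
raid5_size 120 5    # five 120GB drives -> 480 GB usable
```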

  Back to my original example.

  In my case, I have a Linux machine with 5 120Gb Drives.  It also has a 160Gb boot drive.  I like to keep my data and OS separated, but since there's nothing important on the OS, it doesn't need any sort of redundancy.  Should it fail, I set up on a fresh disk, throw it in, and all my important stuff is on the RAID.

  Remember the formula.  S = C * (N-1)

480Gb = 120Gb * (5-1)

My device came out as 480.1Gb.
The OS sees 441Gb.  Remember, overhead.

Disk /dev/md0: 480.1 GB, 480134037504 bytes
2 heads, 4 sectors/track, 117220224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

root @ backup (/root) df -h /dev/md0
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              441G  293G  125G  71% /host

  I use mine for an off-site backup system, with BackupPC (http://backuppc.sf.net).  It works very very well.

  Now, how the heck to make this?  Easy.  There are tricks you can do.  I do have one machine with a RAID1 and a RAID5 on the same 3 drives, to give me redundancy, with this same method.  It's all in how much you want to work at it, and how much practice you have.  I wouldn't suggest it for a first attempt. :)

1)   Set up your OS, and compile in all the RAID device drivers into your kernel.  Your distribution may already have modules for this, which is ok as long as it's not your boot device. :)

  If you're compiling, they're at:

Device Drivers -->   Multi-device support -->  (all the stuff here)

2) Stuff your machine full of drives.  :)  I have a midtower with 10 3 1/2" bays.  I used Promise IDE controllers.  Each can control 4 IDE drives, which means I could actually use 8 drives in here, but I only had 5.

3) fdisk your drives, making an fd type partition on each.  If you have dissimilar drives, this will help you recover the unused space.  If they're identical, you can get away with using the entire device in the next step.

4) Make your /etc/raidtab.  Adjust the device names as necessary.

raiddev /dev/md0
raid-level 5
nr-raid-disks 5
nr-spare-disks 0
persistent-superblock 1
parity-algorithm left-symmetric
chunk-size 4
device /dev/hde1
raid-disk 0
device /dev/hdf1
raid-disk 1
device /dev/hdg1
raid-disk 2
device /dev/hdi1
raid-disk 3
device /dev/hdj1
raid-disk 4

5) Make it a raid.  "mkraid /dev/md0"

6) If you don't reboot after the last step (don't bother), you need to start the raid.  The command is "raidstart /dev/md0" .

7) Now you have a hard drive like device at /dev/md0 .  Format it, mount it, do whatever.

mke2fs -j /dev/md0
mkdir /storage
mount /dev/md0 /storage
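A hedged aside: the raidtab/mkraid steps above are the older raidtools interface. On newer kernels and distributions the same array is usually created with mdadm instead; a sketch, reusing the device names from the raidtab above:

```shell
# Sketch: the same 5-disk RAID 5, built with mdadm instead of raidtab/mkraid.
mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=4 \
      /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdi1 /dev/hdj1
mke2fs -j /dev/md0        # format and mount exactly as in step 7
mount /dev/md0 /storage
```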

  A few caveats.

1) If you end up with a REALLY big device, you'll run into filesystem problems.  Some filesystems don't want to format something way too big (2Tb and 4Tb were some magic numbers I ran into).  This will vary based on the age of your kernel and distribution.  Try upgrading the system, or simply picking a different filesystem.

2) Don't, under any circumstances, no matter how cool it may seem, buy two Promise 15100 chassis and stuff them full of 300Gb drives.  It seems like a really nice idea, but it's not.  I did the above, letting a single device be created across all the drives.  Ideally it would be 8.7Tb.  Those chassis had a tendency of skipping a beat once in a while, which would kill your raid.  Usually a reboot of both the machine AND the chassis would bring it back up, but it's not a situation you want to end up in.

3) Like others have said, RAID is !!!NOT!!! a replacement for a good backup solution. It's good as a large capacity, fault tolerant system.

4) There are risks with RAID5.

    No raid, you will lose everything when there's a drive failure.
    RAID 0, you will lose everything when there's a drive failure.
    RAID 1, you have a good mirror, but don't extend your capacity.
    RAID 5, you gain redundancy and extend your capacity.

  I like RAID 5.  I like it a lot.  I also keep good backups because I know there's a risk.

  If a RAID 5 loses one drive, you're ok and can rebuild.

  I've been told that if you lose TWO drives in a RAID 5, it's fatal.   In my experience, I've lost two drives, but not at the same time (say #1 and #4 of a 6 drive set, with no hot spare).  It wasn't very happy and took a long time to rebuild, but it still came back up.

  If you lose TWO drives at once in a RAID 5, it's fatal.

  If you can sweet talk the drives into coming back up, and you're not using some funky controller, you can probably get your data back, and maybe even pull one at a time to let it rebuild normally.  Been there, done that, had praises sung to me when I was done.  If you only have 3 drives in a RAID 5 and lose 2, BUT there was a hot spare AND it was never configured as such, feel free to bitch slap the person who set it up.  :)

  On the above machine, moving it, plus a couple power outages REALLY upset it.  It believed it had 3 drive failures (3 of 5 is REALLY bad).  A controller had come loose, and the best I can guess is that the power surges upset the drives a little.  That took reseating the controller, and sweet talking the drives into working again.  It did come back up, and has been running solid for months now.  Mental note:  buy a UPS.

  I've been told that RAID 5 (or anything but RAID 0) is very very slow.  I beg to differ.  I've had great success, and depending on what you're comparing slow to, you'll probably be very happy with good drives in your RAID 5.

RAID (1)

whtmarker (1060730) | more than 7 years ago | (#19389697)

Does anyone know... once you setup raid what happens when a drive fails? Does something in the harddrive pop up and tell you? What if it is a linux server in a closet and you'd rather the server sent you an email?

Re:RAID (4, Funny)

alexandreracine (859693) | more than 7 years ago | (#19389805)

Does something in the harddrive pop up and tell you?


Actually, the failed hard disk will personally walk to you.

mdadm (2, Informative)

statemachine (840641) | more than 7 years ago | (#19390039)

What if it is a linux server in a closet and you'd rather the server sent you an email?

If it is a Linux server, you're already using mdadm, which has a monitoring daemon with e-mail notification.

Re:RAID (2, Informative)

JWSmythe (446288) | more than 7 years ago | (#19390229)

  cat /proc/mdstat

root @ backup (/usr/src/linux) cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 hdj1[3] hdi1[4] hdg1[2] hdf1[1] hde1[0]
      468880896 blocks level 5, 4k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>

[U] is Up.  [_] is Down.
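A hedged sketch of a check you could script around that output; the function name is mine, and the pattern just looks for a "_" inside the [UUUUU] status block:

```shell
# Reads /proc/mdstat (or a saved copy) on stdin and reports whether any
# array member is marked down ("_") in the [UUUUU] status block.
mdstat_degraded() {
    if grep -q '\[U*_[U_]*\]'; then
        echo DEGRADED
    else
        echo OK
    fi
}
# Usage: mdstat_degraded < /proc/mdstat
```

Wired into cron, something like this is a poor man's version of what mdadm's own monitor mode already does with e-mail notification.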

Design for today. (2, Interesting)

Joe U (443617) | more than 7 years ago | (#19389705)

Design for what you want to use today and in the near future, don't design for a few years from now, you'll never get it built.

That being said, mirroring might be the easiest solution to upgrade, but you'll sacrifice speed and space.

If you want speed and redundancy, you'll have to go with something like RAID 5 or RAID 10 and just have a painful upgrade in the future.

I would use (and do use) linux software raid (2, Informative)

aachrisg (899192) | more than 7 years ago | (#19389707)

I'm running a few arrays, all over 1TB. Largest is 8 drives in a raid6 config. Everything uses software raid. Be sure to use LVM, so that you can snapshot your drives. Once you're properly RAIDed, you're more likely to lose your data to an accidental file deletion than to unfixable hardware failure.

Re:I would use (and do use) linux software raid (0)

Anonymous Coward | more than 7 years ago | (#19389789)

Yes, software RAID is great, especially if you like your writes being really slow. :-/

If you're going to dump hundreds of $$ into hard drives, cough up a bit more for a HW raid controller!

Re:I would use (and do use) linux software raid (0)

Anonymous Coward | more than 7 years ago | (#19390049)

CPUs are pretty fast these days. I wouldn't worry too much about a major speed impact.

Anyone got any hard data on performance impact due to software RAID5 parity calculation?

Re:I would use (and do use) linux software raid (2, Informative)

QuesarVII (904243) | more than 7 years ago | (#19390061)

Yes, software RAID is great, especially if you like your writes being really slow. :-/
If you're going to dump hundreds of $$ into hard drives, cough up a bit more for a HW raid controller!


Says the person who's never done any real benchmarking of these things...

Unless you buy the right raid card, you'll likely get worse performance from it than you would from software raid. I'm talking the name brands too - LSI, Adaptec, 3Ware. They all suck. Of the 3, 3ware is the best. On a LSI SAS raid controller I recently tested, I only got a 30% I/O speedup going from a single drive to a 6 drive raid 5. That's pathetic! Software raid at least gave me 140% improvement.

If you really want good numbers, get an Areca [areca.com.tw] controller. They perform very well and have drivers right in the linux kernel (2.6.19+).

The older RocketRaid cards (Highpoint) performed fairly well, but were not really hardware raid - they were "hardware assisted" raid. Most of the work was really software raid in the driver. As long as you had a fairly fast cpu, you got great numbers. I believe the newer ones are true hardware raid now, but I haven't benchmarked them yet as they only had up to 8 port controllers in PCIe last I checked.

Re:I would use (and do use) linux software raid (1)

nxtw (866177) | more than 7 years ago | (#19390267)

Yes, software RAID is great, especially if you like your writes being really slow. :-/


The "writes being slow" problem clears up if you have enough bus bandwidth. This means PCI-X or PCI Express.

RAID (2, Informative)

Anonymous Coward | more than 7 years ago | (#19389713)

If you have 3x500GB disks in RAID5, you only have 1TB of usable space, as one drive is used as parity (and therefore not for effective data storage). If you replace the disks with larger ones, the array is not increased in size if you replace each disk one at a time and let the array rebuild itself. However, you can just plug in your new drives (if you have enough ports), create a new array, and then copy data across to the new array. Alternatively, if you are using software RAID, as you increase the size of the drives, you can create extra partitions on the drives and RAID these. e.g. 3x500GB drives in RAID5, changed to 3x1TB drives, each with 2x500GB partitions = 2x1TB RAID5 arrays. This is not recommended however!!

Personally, I would just buy a Buffalo TeraStation or Netgear StorageStation and let that do the hard work. Just plug it into your network, then share the data. Just have a single 500GB drive on your media centre for recording TV, and then anything you want to keep just copy over to your NAS box.

Re:RAID (0)

Anonymous Coward | more than 7 years ago | (#19389855)

No, RAID 5 does not have a dedicated parity disk. The parity data is distributed across all the disks, unlike RAID 3 and 4, where there is a dedicated parity disk.

Re:RAID (1)

zippthorne (748122) | more than 7 years ago | (#19390117)

and.. how large is the parity stripe...

It depends (2, Informative)

gbaldwin2 (548362) | more than 7 years ago | (#19389717)

It all depends on how the RAID is implemented. Most inexpensive controllers require a rebuild when you change sizes. It is not a big deal. I would never implement anything important on JBOD; the chance of failure is too large. I have replaced too many disks. Do RAID5 or RAID1. Over 99% of my disk is RAID5, and I manage just over 500TB.

Drobo (0)

Anonymous Coward | more than 7 years ago | (#19389719)

Buy a Drobo.

big disks (0)

Anonymous Coward | more than 7 years ago | (#19389729)

Just get a couple of these. [newegg.com] They're cheap enough that you can use one for live storage and one as a backup.

Is Google broken today? (2, Insightful)

Talez (468021) | more than 7 years ago | (#19389731)

RAID 5 drives are limited to the size of the smallest drive in the array.

Yes... Duh....

And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still read the original size. For example, say I have 3x500gb drives in RAID 5 and over time replace all of them with 1TB drives. Instead of reading one big 3tb drive, it will still read 1.5tb. Is this true?

Yes... Fucking duh.... Have you even read the RAID 5 Wiki article? [wikipedia.org]

I also considered using JBOD simply because I can use different size HDDs and have them all appear to be one large one, but there is no redundancy with this, which has me leaning away from it. If y'all were building a system for this purpose, how many drives and what size drives would you use and would you do some form of RAID, or what?

We've been through this a million times before and the answer is always the same. You're a cheap bastard who wants gobs of space with an acceptable amount of redundancy but aren't willing to buy two sets of drives. Buy 4 of the biggest drives you can afford and RAID 5 them. Don't expect stellar write speeds. You won't have a backup if something happens and all 4 drives blow but you'll at least have protection when one drive gives up the ghost which is mainly what most people want to protect against.

Why does stupid shit like this keep getting posted to the front page?

Re:Is Google broken today? (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#19389863)

Amen. Ask Slashdot is pretty gay sometimes.

You can mix raid drive sizes, with planning. (3, Interesting)

Kaenneth (82978) | more than 7 years ago | (#19390023)

You can put RAID 5 on varying size disks.

I had 4 300GB drives, and 2 200GB drives.

I broke them up into 100GB partitions, and laid out the RAID arrays:

A1 = [D1P1 D2P1 D3P1 D5P1]
A2 = [D1P2 D2P2 D4P1 D6P1]
A3 = [D1P3 D3P2 D4P2 D5P2]
A4 = [D2P3 D3P3 D4P3 D6P2]

Then I concatenated the arrays together, giving a little less than 1.2 TB of space from 1.6 TB of drives; if I had just RAID'd the 4 300 gig drives, and mirrored the 200's I would have only had 1.1 TB available, and the drive accesses would be imbalanced.

I could also grow the array, since it was built as concatenated, so later when I got 4 400GB drives, I RAIDed them, then tacked them on for 2.4 TB total.
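The arithmetic behind that layout, as a quick shell check (the numbers come from the comment above):

```shell
# Four RAID 5 arrays, each built from four 100 GB partitions, concatenated.
per_array=$(( 100 * (4 - 1) ))   # usable GB per 4-way RAID 5 array
total=$(( 4 * per_array ))       # the four arrays concatenated together
echo "${total} GB usable"        # prints: 1200 GB usable
```

That 1200 GB out of 1600 GB raw is the "little less than 1.2 TB" figure above.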

Re:Is Google broken today? (1)

glwtta (532858) | more than 7 years ago | (#19390221)

"Instead of reading one big 3tb drive, it will still read 1.5tb. Is this true?"

Yes... Fucking duh.... Have you even read the RAID 5 Wiki article?


Well, no, his 3x500GB array will still be 1TB, not 1.5TB - unless there's a new RAID level that magically gives you redundancy without using any space for it. A 3 disk RAID 5 is basically a terrible configuration, no matter what you do with it.

Seriously though, how hard is it to type "RAID" into Google?

Get what you need for *NOW* not for later (4, Insightful)

CPE1704TKS (995414) | more than 7 years ago | (#19389737)

This is what you do: buy 2 drives exactly the same size and mirror them. End of story. If you're worried about a blown RAID controller, then buy another hard drive, stick it in another computer, and run a weekly cron job to copy everything over. Right now you can get a 500 GB hard drive for about $150. Get two of them and mirror them. (If you need more than 500 GB, I would highly suggest encoding your porn in a different format than MPEG2.) By the time you run out of space, you will be able to get 1 TB drives for about $150. Migrate over to two 1 TB drives. Repeat every few years.

With computers, the stupidest thing you can do is spend extra money to prepare for your needs for tomorrow. Buy for what you need now, and by the time you outgrow it, things will be cheaper, faster and larger.

By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

Re:Get what you need for *NOW* not for later (4, Interesting)

QuesarVII (904243) | more than 7 years ago | (#19389911)

By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

With recent kernels, you can hotswap drives on nvidia sata controllers (common onboard). I believe several other chipsets had support for this added in recent kernels too. Then you can swap drives live and rebuild as needed.

One more important note - if you're using more than about 8 drives (I'd personally draw the line at 6), use RAID 6 instead of 5. You often get read errors from one of your "good" drives during a rebuild after a single drive failure. Having a second parity drive (that's what RAID 6 gives you) solves this problem.
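The read-error-during-rebuild risk is easy to ballpark. A rough Python sketch, assuming the 1-error-per-10^14-bits unrecoverable-read-error rate quoted on typical consumer drive spec sheets (my assumption, not a figure from the post):

```python
# Odds of hitting an unrecoverable read error (URE) while rebuilding a
# degraded RAID 5: every bit on every surviving drive must read back cleanly.
URE_PER_BIT = 1e-14      # assumed: typical consumer-drive spec (1 error per 10^14 bits)
DRIVE_BYTES = 500e9      # assumed: 500 GB drives
SURVIVORS = 5            # e.g. a 6-drive RAID 5 with one drive failed

bits_read = SURVIVORS * DRIVE_BYTES * 8
p_fail = 1 - (1 - URE_PER_BIT) ** bits_read
print(f"{p_fail:.0%}")   # about an 18% chance the rebuild trips over a URE
```

With RAID 6, a URE during a single-drive rebuild is recoverable from the second parity stripe instead of fatal.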

Re:Get what you need for *NOW* not for later (2, Insightful)

Grishnakh (216268) | more than 7 years ago | (#19389953)

(If you need more than 500 GB I would highly suggest encoding your porn into a different format than MPEG2)

500 GB isn't that much space any more. If he's thinking of making an HDTV MythTV box, for instance, full-res HDTV streams will require a lot of space to store in real-time. It would probably be too computationally intensive to recode them into MPEG4 on the fly.

Wait a sec... (3, Funny)

GFree (853379) | more than 7 years ago | (#19389741)

Out of all the details you're still working on, you decided to ask Slashdotters about storage?

Why not the "windows XP vs. Linux" bit? Do you want 100 responses or 1000?

Re:Wait a sec... (1, Funny)

Anonymous Coward | more than 7 years ago | (#19389829)

I want 7,000,000 [google.com]

Media Server? (4, Funny)

foooo (634898) | more than 7 years ago | (#19389743)

Media Server: n. A euphemism for digital porn storage.

Re:Media Server? (4, Funny)

ScrewMaster (602015) | more than 7 years ago | (#19390213)

Media Server: n. A euphemism for digital porn storage.

Only if you make all your network file shares pubic.

Linux, raid5, LVM on top, can use extra capacity (4, Informative)

Spirilis (3338) | more than 7 years ago | (#19389745)

With Linux you can create a RAID5 md device, say /dev/md0, then run LVM on top of that (pvcreate /dev/md0 ; vgcreate MyVgName /dev/md0) and use that to carve out your storage. The key here is to create a partition on each drive filling the entire disk, and build your RAID5 from those partitions.

If you buy 1TB drives further down the road, here's what you do: on each new disk, create a partition identical in size to the partitions on the smaller disks, then allocate the rest of the space to a second partition.
Join the first partition of the disk to the existing RAID set. Let it rebuild. Swap in the next drive, etc. Once you've done this switcharoo on all the drives, create another RAID set using the 2nd partition of your new disks--call it /dev/md1. So now you have /dev/md0, pointing to the first 500GB of each disk, and /dev/md1, pointing to the 2nd 500GB of each disk.

Take that /dev/md1 and graft it onto your LVM volume group. (pvcreate /dev/md1 ; vgextend MyVgName /dev/md1). Now your LVM VG just doubled in size, and you can use all that new space. Whatever you do though, do NOT create any "striped" logical volumes (the "-i2" option to lvcreate; LVM's Poor Man's RAID0, basically) because you will suffer terrible performance, since you'll be striping across different volumes on the same physical spindles (a big no-no for any striped configuration). But if you use the extra space by creating new filesystems or growing existing ones, you shouldn't see any trouble.

Just be sure to partition any replacement drives you buy later the same way. I'd recommend pulling back on the partition sizes a bit, maybe 5%, to account for size differences between the drives you buy now and replacements you may purchase later that could be slightly smaller (different drive manufacturers often have slightly different exact capacities).

depends on the raid implementation (and level?) (2, Informative)

KStieers (84864) | more than 7 years ago | (#19389751)

It depends on the implementation (and possibly the raid level). Some raid cards will let you expand the container after you've replaced all of the drives with new ones of a larger size. Then you have to expand the partition, or put another partition into the new space. I've done this with Compaq hardware running Win2k in a RAID 1 (mirrored pair).

The "Raid 5 can't do what I heard" isn't quite what's going on, again, depending on the implementation. Most raid cards I've used allow you to add drives to the array and expand the array to the new drive(s) without downing the server or requiring a rebuild.

So RTFM for the card you're going to use.

Linux, RAID 5, md (5, Informative)

Pandaemonium (70120) | more than 7 years ago | (#19389757)

Go RAID5. RAID5 = Hardware failure resilience + maximum storage.
Go Linux. The Linux md driver lets you control how you RAID- over whole disks or over partitions. There are advantages. We will discuss.

First, don't get suckered into a cheap hardware RAID card. Most are *NOT* really hardware RAID- they rely on a software driver to do the RAID5 calculations on your CPU. Software RAID is JUST AS FAST. Unless you blow the big bucks for a card with a real dedicated ASIC to do the work, you're fooling yourself.

Now, you want to go Linux. By using the md driver, you can stripe over PARTITIONS, and not the whole disk. By doing this, you can get MAXIMUM storage capacity out of your disks, even in upgrades.

Say you have 3 500GB disks. You create a 1TB array, with 1 disk as parity. On each of these disks is a single partition, each the size of the drive. Now, you want to upgrade? SURE! Add 3 more disks. On each, create a partition of EQUAL size to the originals, and tack those on to the first array. Then, with the additional space, you can create a WHOLE NEW array, and now you have two separate RAID5's, each redundant, each fully using your space.

Another advantage with MD is flexibility. In my setup, I use 5x 250 drives right now. On each is a 245GB partition, and a 5GB partition. I use RAID1 over the 5's, and RAID5 over the rest. Why? Because each drive is now independently bootable! Plus, I can run the array off two disks, upgrade the file system on the other 3, and if there's a problem, I can always revert to the original file system. So much flexibility, it's not even funny.

I recommend plain old SATA controllers with plain SATA drives, and just stick with the md device. For increased performance, watch your motherboard selection. You could grab a server-oriented board with dedicated PCI buses for its slots and split the drives over the cards. Or you can get a multiproc rig going and assign processor affinity to the IRQs: one card calls proc 1 for interrupts, the other calls proc 0. If you have multiple buses, performance is maximized.

The last benefit? Portability. If your hardware suffers a failure, then your software RAID can move to any other system. Using ANY hardware RAID setup will require you to use the EXACT same card no matter what to recover data. Even the firmware will have to stay stable or else your data can be kissed goodbye.

Windows? Forget about it.

Good luck!
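For concreteness, the upgrade scheme above can be tallied in a few lines of Python (the 1TB size of the new disks is my assumption for illustration; the post doesn't specify):

```python
# Tally the md-over-partitions growth scheme (1 TB new-disk size is assumed).
PART = 500                   # GB: size of the original partitions
old_disks = [500, 500, 500]  # the original drives
new_disks = [1000, 1000, 1000]

def raid5_usable(members_gb):
    # RAID 5 usable space: (n - 1) members' worth, bounded by the smallest member
    return (len(members_gb) - 1) * min(members_gb)

# The first array grows from 3 to 6 members of 500 GB each...
array1 = raid5_usable([PART] * (len(old_disks) + len(new_disks)))
# ...and the leftover halves of the new disks form a second, independent RAID 5.
array2 = raid5_usable([d - PART for d in new_disks])

print(array1, array2, array1 + array2)  # 2500 1000 3500
```

So 3.5TB usable out of 4.5TB raw, with both arrays still surviving any single drive failure.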

Re:Linux, RAID 5, md (1)

tjstork (137384) | more than 7 years ago | (#19389781)

This is REALLY cool. Can you steer a poor man over to a FAQ on setting up such an MD device?

Re:Linux, RAID 5, md (4, Informative)

Pandaemonium (70120) | more than 7 years ago | (#19390075)

It'll take some reading and combining from multiple sources. I've been doing it for a few years, combined with a handful of upgrades, plus setting it up as an iSCSI backend- all of that lent to the pool of greyness in my head.

I recommend Gentoo to do this with. Other distros don't include the latest mdadm tools required to manage and migrate RAID5 md devices. Ubuntu is catching up, I believe.

Here are some places to start:

http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID [gentoo-wiki.com]
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml [gentoo.org]
http://linas.org/linux/Software-RAID/Software-RAID.html [linas.org]
http://linas.org/linux/raid.html [linas.org]
http://evms.sourceforge.net/ [sourceforge.net]
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html [tldp.org]

Re:Linux, RAID 5, md (1, Informative)

Anonymous Coward | more than 7 years ago | (#19390077)

If I recall correctly, Ubuntu alternate install CD allows you to set up a raid during install, LVM or md.

Re:Linux, RAID 5, md (0)

Anonymous Coward | more than 7 years ago | (#19390085)

OpenSolaris and ZFS owns this market. Until linus and alan cox stop being dickheads and allow ZFS support, linux is a second class OS. Heck, FreeBSD and OS X support ZFS. What's the penguin afraid of?

Re:Linux, RAID 5, md (3, Interesting)

ptbarnett (159784) | more than 7 years ago | (#19390111)

I did exactly this for a new server recently. The only thing I would add is to use RAID 6 instead of RAID 5. That way, you can tolerate 2 drive failures, giving you time to reconstruct the array after the first one fails.

I have 6 320 GB disks. The /boot partition is RAID 1, mirrored across all 6 (yes, 6) devices, and grub is configured so that I can boot from any one of them. The rest of the partitions are RAID 6, with identical allocations on each disk.

There's a RAID HOWTO for Linux: it tells you everything you need to know about setting it up.
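For a six-disk array like this one, the trade is one more disk's worth of capacity for one more tolerated failure. A quick sketch of the arithmetic:

```python
# Usable capacity for the parity RAID levels: RAID 6 spends one extra
# disk on parity in exchange for surviving a second drive failure.
def usable_gb(n_disks, disk_gb, level):
    parity = {"raid5": 1, "raid6": 2}[level]
    return (n_disks - parity) * disk_gb

print(usable_gb(6, 320, "raid5"))  # 1600 GB, tolerates 1 failure
print(usable_gb(6, 320, "raid6"))  # 1280 GB, tolerates 2 failures
```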

Re:Linux, RAID 5, md (1)

Pandaemonium (70120) | more than 7 years ago | (#19390285)

Some more info:

Hardware RAID cards, including expensive ASIC-based cards: Don't do it. Unless you think you're actually going to be serving up a database, you only have to think about yourself. Fortunately, you will not be writing to the array as much as you will be READING. This means you can take the CPU hit on the parity calculations during writes, especially if it's a dedicated server. RAID5 reads are about as fast as RAID0, perhaps slightly slower.

RAID6: It's nice, but it takes an additional disk away for storage. This is your home server. If you have one drive fail, down the thing until the replacement comes back. You can live without your porn, right? Well, if not, then go RAID6.

How the hell did this make the front page? (4, Insightful)

Erwos (553607) | more than 7 years ago | (#19389769)

I really can't believe this made the front page. The questions are badly written, and the question itself could have been answered with some basic Internet research. RAID isn't an esoteric topic anymore, folks!

This place has really gone downhill. I thought Firehose was supposed to stop stuff like this, not increase it!

Anyways, just to be slightly on topic: there's no one answer to this question. It depends on your budget, your motherboard, your OS, and, most importantly, your actual redundancy needs. This kind of thing is addressed by large articles/essays, not brief comments.

Re:How the hell did this make the front page? (0)

Anonymous Coward | more than 7 years ago | (#19390153)

May I respectfully inquire, WTF is "Firehose"?

Bad assumption (2, Informative)

tfletche (708699) | more than 7 years ago | (#19389791)

You write that if you have 3 500G disks in a RAID 5 you will have 1.5T, etc. Don't you realize that (N x C) - C = Total? i.e. (3 x 500) - 500 = 1000, or 1 terabyte. That's only the first problem with your logic...

Risk what you can do without (1)

Tribbin (565963) | more than 7 years ago | (#19389795)

I have a lot of data (500 GB of music/movies/pictures/wallpapers/audiobooks/ebooks ; filled to the last GB)

I'm a student and I do not have the money for redundant storage.

I rsync my documents and pictures over the two drives and burn my favourite movies to DVD. I use ffmpeg to turn DVDs into Xvids and oggenc to turn flacs into ogg q5s.

If I lose one of the harddrives; that's life.

So for those who do not have the luxury that the poster has: make sure you back up what is really important and risk what you can do without.

Re:Risk what you can do without (0)

Anonymous Coward | more than 7 years ago | (#19389925)

500 GB is about $120 (on sale or OEM) now. Save about $2 a week and you'll have redundant storage in about a year, just in time for the price to be $75.

Re:Risk what you can do without (0)

Anonymous Coward | more than 7 years ago | (#19389929)

So, you're burning your favorite movies and then ripping them back? That's an interesting way to save space.

Drobo: storage robot (0)

Anonymous Coward | more than 7 years ago | (#19389797)

This thing is very cool: Drobo, from Data Robotics. Check out the demo! http://www.drobo.com/products_demo.aspx [drobo.com]

Infrant (1, Informative)

Anonymous Coward | more than 7 years ago | (#19389811)

You could always purchase a NAS from Infrant [infrant.com]. A diskless version retails for around $640, and they have a proprietary RAID level called X-RAID. It is basically a RAID 5 array, but allows for expansion using larger drives. The standard rule applies: each individual drive will be limited to the size of the smallest drive, but you can hotswap one drive at a time, let the array rebuild, and repeat the process for all four drives. Once the final one is done, it will auto-expand to your new capacity. Pretty futureproof.

Get a ReadyNAS (0)

Anonymous Coward | more than 7 years ago | (#19389817)

I went through all this... read all about rolling my own RAID, using Linux, using Windows, etc. In the end I opted for the ReadyNAS NV from Infrant.

I bought the ReadyNAS NV without drives for about $700. I put in 4 500GB drives last year and now I've got 1.5TB of RAID 5 storage. It works great for my needs (media storage of videos, music, and pictures, streaming through XBMC).

What for? (1)

TopSpin (753) | more than 7 years ago | (#19389837)

If y'all were building a system for this purpose, how many drives and what size drives would you use and would you do some form of RAID, or what?
For what purpose?! You haven't said word one about how this storage will be used. What is it for? Email back end, shared file systems, RDBMS (OLTP or OLAP), streaming loads, D2D backup, etc. Define your use case, please! Post after post on this topic and not one of you ever thinks to specify what the @!%*$ it is you're trying to do.

Agonizing over the ability to incrementally upgrade an array is a sure sign you have cost at the very top of your list of concerns, with everything else far below. Learn about software RAID. At the throughput levels you're planning for (3 disks?) hardware RAID is a waste; contemporary CPUs can cope with all the parity calculations involved with negligible effort. Save money on proprietary hardware/licenses with Linux+LVM+MD and use the cash to upgrade the drives simultaneously. Or become a guru and figure out how to layer LVs and MDs to use capacity incrementally; the only cost is your time and spare stomach tissue.

If I had to manage fault tolerant storage with mis-matched physical disks and no budget I'd be looking at ZFS. There are other ways of doing it but the ZFS model is so simple and obvious that it has a high probability of actually working in the real world. Right up until it gets corrupted and you learn there is no ZFS fsck...

Re:What for? (0)

Anonymous Coward | more than 7 years ago | (#19390305)

read the summary again you hyper-angry fuckstick

Planned obsolescence (3, Insightful)

Solder Fumes (797270) | more than 7 years ago | (#19389849)

Hardware WILL get old, WILL die, and better stuff WILL become available. So it only makes sense to recognize this and plan for it.

Here's the way I do it (for a home storage server, not a solution for business-critical stuff):

Examine current storage needs, and forecast about two years into the future.

Build new server with reliable midrange motherboard, and a midrange RAID card. These days you could do with a $100-$300 four-port SATA card, or two.

Add four hard disks in capacities calculated to last you for two years of predicted usage, in RAID 5 mode. Don't worry about brand unless you know for a fact that a particular drive model is a lemon.

Since manufacturer's warranties are about one year, and you may have difficulty finding an unused drive of the same type for replacement, buy two more identical drives. These will be your spares in the event of a drive failure.

When the two years are up, you should be using 80 to 90 percent of your total storage.

At this point, you build an entirely new server, using whatever technology has advanced to at that time.

Transfer all your files to the new server.

Sell your entire old storage server along with any unused spare drives. A completely prebuilt hot-to-trot RAID 5 system, with new matching spare disk, only two years old, will still be very useful to someone else and you can recoup maybe 30 to 40 percent of the cost of building a new server.

Lather, rinse, repeat until storage space is irrelevant or you die.

slow news day? (1)

RelliK (4466) | more than 7 years ago | (#19389857)

When did slashdot become a substitute for usenet/google/wiki or (gawd forbid) a fucking manual? Why do editors feel inclined to post the drivel of every clueless newbie who needs handholding, while rejecting important/interesting news stories?

As to the poster's question: read the fucking manual, kid.

Re:slow news day? (1)

colourmyeyes (1028804) | more than 7 years ago | (#19390181)

Hey slashdot, I'm thinking of buying a computer. How is mac and pc different?? I have microsoft, will it run on a mac? Also are there any other "operating systems" besides microsoft and apple because I heard there are? thx slashdot lol!

Can I be on the front page too??

RAID 5 is for cheapskates (1)

pestilence669 (823950) | more than 7 years ago | (#19389871)

No, really. It's all about cost. Even with hardware accelerated RAID, you can expect a steep performance hit. If you're going for a massive data repository I'd suggest several RAID 1+0 setups in hardware with a decent volume manager & file system (not NTFS).

2x500GB drives in a RAID 1 (for peace of mind). Then double that in a RAID 0 stripe (for speed). That's 4 drives per TB. Then use a decent file system, like ZFS, to chain your RAID 1+0 clusters into a single volume 1TB at a time.

Whatever you choose to aggregate your storage, I don't think you'll be able to get away from mirroring every drive... unless you go the budget RAID 5 route. I'd suggest no redundancy in that case.

RAID5 and disk size (1)

gweihir (88907) | more than 7 years ago | (#19389889)

Some RAID controllers allow you to enlarge a RAID5 array. If the OS also allows you to enlarge the partitioning, then you are set. I think currently both are possible under Linux.

However, the better approach would be to recreate the array on disk upgrades. After all, for any kind of reliability you need backups anyway. RAID is not a replacement for backup!

More info please (1)

blhack (921171) | more than 7 years ago | (#19389891)

It would be nice to know just how much data you are trying to store. If this is going to be a whole bunch of mp3s, then you might look into a RAID 1 pair of those new 1TB drives from Hitachi.
At 1TB, it is still gonna be pretty hard to fill this with DivX encoded movies. I guess though, if you need more space, do a 0+1, meaning a mirrored pair of striped sets.

If you are talking about some sort of seriously whacked out array of like some Blu-Rays or HDDVDs or some crazy thing like that....then i would....uhhh....probably just start praying.

Honestly, the best set up (once again depending upon your intentions) is probably going to be a linux box running:
mt-daapd (for streaming to itunes)
mpd (the media player daemon, for hooking the box into a stereo)
slimserver (to stream through a web interface to any machine that can reach it on the network).
Samba (for sharing the music to windows clients)
vsftpd (for sharing the music to everybody else)

slap a couple of those 1TB drives in there with some RAID 1 for redundancy... and I think you should be in VERY good shape.

OH OH OH...put it into one of those ultra-sexy HTPC boxes for added win factor.

mdadm on linux (1)

flyboy81 (698817) | more than 7 years ago | (#19389897)

Using mdadm on Linux you can grow a RAID 5 once you've replaced all the disks one by one; this has been possible for a while. Recent kernels have also made it possible to expand a RAID 5 set by adding more drives, so you can basically grow as you need. Some guys have even migrated from a JBOD to RAID 5 using just one extra disk, by creating an array from two drives but marking one as missing. I'm not recommending this unless you have backups :) (and since you have your backups, it will be quicker to just create the whole thing right away than to go through reshaping for each disk).

RAID 1 (1)

daybot (911557) | more than 7 years ago | (#19389919)

I'm a big fan of RAID 1. It's 100% wasteful, but I can relate to that. Advantages of RAID 1 include:

- Simplicity: RAID card broken? Fine, just shove one of the drives onto a non-RAID interface and you're off. Simple setup also means low overheads, which leads to...
- Speed: Faster than RAID 5 because the controller isn't doing anything clever. If you want faster, go 0+1.
- Robustness: The temptation with RAID 5 is to have one massive partition across loads of drives. That's great, until you accidentally format it or something. Don't forget to back up, but splitting your storage into smaller arrays would be safer.

If you're seeking 1.5TB, you could have two 750GB RAID 1 arrays using four 750GB Seagate Barracuda 7200.10s.

Hardware vs Software RAID (1, Informative)

Anonymous Coward | more than 7 years ago | (#19389977)

You've left off which method you plan to use for your RAID (whichever implementation)... If it's hardware, the suggestions mentioning things like LVM are irrelevant, it only matters if the RAID controller supports dynamic re-partitioning (if it doesn't, then who cares what the OS supports, the OS can't use what the controller doesn't say is there).

Also, since you mentioned that you haven't chosen an OS, I believe MS will be releasing Windows Home Server this fall. It's based on the 2003 Server system, so it's well proven and has no problem with drivers or any of the issues Vista is currently having. Also, since it's built on top of 2003, there's already lots of industry support out there. The UI they've grafted onto it is very friendly, and the backup system it has is awesome (incl. full-disk restore using only a boot CD!) It's actually designed to fill more or less exactly the role you seem to be seeking a solution for.

I know I'll get blasted for having suggested an MS sol'n on /., but since you said you hadn't decided, I just wanted to make you aware of another option that you might not have considered.

Anyway, for your system, I'd make a software RAID1 partition for the OS (whichever it is) and a hardware RAID5 volume for the data. Since it's hardware RAID5, you can break it up however you like, and still have redundancy AND a minimal loss of space. You could consider RAID6 for increased safety, but I haven't seen a hardware RAID6 controller out there anywhere yet... (RAID 6 is like RAID 5, but has 2 parity drives, thus enabling up to two drives in the array to fail while remaining operational).

-AC

Juice, Heat and Bux... (1)

jddj (1085169) | more than 7 years ago | (#19390003)

I started to put a 2-drive RAID 1 setup in my MythTV HD server. I eventually bagged it and went with a single SATA disk.

Here's why:

  1. I was storing a mirror of my collection of MP3 files for service with mt-daapd. The MythTV server disk goes down, I have another copy elsewhere already.
  2. The rest is just TV. The System disk image is backed up to read-only media, so I can blast it back onto a replacement disk if needed. So I lose a couple weeks of TV. Boo-hoo - it's nice outside and there are brand new films at the theater.
  3. Two drives use twice as much juice and make twice as much heat as one. 5 drives increase your carbon footprint, heat dissipation and financial outlay even further, and require a more highly-cooled enclosure AND warm up your den/living room/home - which you cool once more with the A/C in the summer. If you do the math, you'll probably pay more for running the disks over their service life than you do to buy them.

So I figured I didn't HAVE to have the array, realized the machine would run lots cooler and lots cheaper, and when it finally went down, BFD, I'd buy an even bigger disk for less money, install it without a hassle and copy my MP3s back onto it.
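The running-cost argument in point 3 is easy to put numbers on. A rough sketch; the per-drive wattage and electricity price are my assumptions, not figures from the comment:

```python
# Rough electricity cost of keeping drives spinning 24/7 over their service life.
WATTS_PER_DRIVE = 8      # assumed average draw for a 3.5" drive, idle-heavy workload
PRICE_PER_KWH = 0.12     # assumed electricity price, USD

def power_cost(n_drives, years):
    kwh = n_drives * WATTS_PER_DRIVE * 24 * 365 * years / 1000
    return kwh * PRICE_PER_KWH

print(f"${power_cost(1, 3):.0f}")  # a single drive over 3 years
print(f"${power_cost(5, 3):.0f}")  # a five-drive array over 3 years
```

Cooling load and PSU inefficiency would push the five-drive figure higher still.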

N.B. - I've been on a 20+ hour bridge-line call today as our data center guys try to figure how to rebuild an enterprise disk array. Are you SURE you want to go with a RAID?

Go RAID 5 BUT with real hardware.... (2, Insightful)

Fallen Kell (165468) | more than 7 years ago | (#19390027)

If you are going to do this, do it right. It will cost you some up front; however, in the long run, doing it right will be cheaper. Get a real RAID card, as in hardware RAID. Get something that supports multiple volumes and at least 8 disks. I personally just got the Promise SuperTrak EX8350. Now, why, you ask, do you need 8 disks? So you can upgrade, that is why. Use the 3 or 4 disks you have now in a RAID volume. In a couple years when bigger disks are dirt cheap, pick up 4 1TB+ disks and build a second volume on the RAID card using the new disks. Now you can offload all the old data onto the new RAID volume and either ditch the old disks or keep them around (up to you; I recommend moving them to other computers or whatever, so that you have 4 empty slots on the RAID card and can rinse/repeat the whole process again in another few years...)

Again, doing it correctly up front takes care of upgrade options down the line. It also gives you room for a monster sized volume if you ever need that much space (8 disk array). Most of these RAID solutions are also OS independent, so if you want dual boot, the volume will be recognized by Windows, Linux, Unix, BSD, etc. You are also not tied to the exact same motherboard if your motherboard dies or you want to upgrade (with the built-in RAID on a motherboard, you would lose all your data when changing to anything other than the exact same model).

These better cards can also be linked together (i.e. you can always add a second card, assuming your motherboard has a slot for it, and add more disks to the array that way as well).

One acronym - ZFS (2, Interesting)

GuyverDH (232921) | more than 7 years ago | (#19390081)

Get a small box, install OpenSolaris on it, configure your JBOD as either raidz or raidz2, and configure either iSCSI or Samba to share the files over a gig link.

With raidz2 your data should be quite safe: you can lose up to 2 drives without data loss.

RTFM (1)

sholden (12227) | more than 7 years ago | (#19390097)

that is all.

Drobo (1)

Bugmaster (227959) | more than 7 years ago | (#19390113)

Has anyone tried using drobo [drobo.com] ? I'm contemplating upgrading my own storage, and they seem to have the least painful solution (as opposed to managing Raid and dealing with disks of different sizes, recovery, etc.), but I have no idea whether their product actually works.

JBOD (1)

goofdad (741126) | more than 7 years ago | (#19390173)

I considered this for quite some time before putting together my 2.2TB Myth box two years ago, and I went against the grain and went with JBOD. What takes up the most disk space on my server is images of my DVD's, for which I have the originals, so I don't need the redundancy. For the recorded broadcasts, if I lose them, I lose them. I have a small RAID-0 pair of disks for my music and such that I don't want to lose. I could have shrunk my disk space and went for a RAID cluster (lord knows I have the disks), but the redundancy was overkill for the application.

Re:JBOD (1)

JustNiz (692889) | more than 7 years ago | (#19390295)

>> I have a small RAID-0 pair of disks for my music and such that I don't want to lose.

It seems you think RAID-0 has some level of redundancy, but it doesn't. It's no better than JBOD or a single drive in this respect.

RAID 0 has no redundancy, great performance (1)

Lucas123 (935744) | more than 7 years ago | (#19390247)

RAID 0 will offer you the best performance but no redundancy of data -- so no fault tolerance. You lose a disk and you're dead in the water. RAID 5, on the other hand, stores parity information for rebuilding data rather than full redundant copies, so you get the most for your money. If you're really concerned about data protection, you can also go with RAID 6, which includes a second parity stripe, for protection against dual disk failure (something I believe StorageTek originally came up with). Again, it makes the most of your storage capacity by only saving parity data (striped across multiple disks) and not duplicate copies of your original data blocks.