advid.net's Journal: Ideas for a Home Grown Network Attached Storage (slashback)

Summary for this thread:
Ideas for a Home Grown Network Attached Storage?

[...]I would like to build my own NAS and am interested in hardware/software ideas. While the small form factor PC cases are attractive, my NAS will dwell in the basement so I am thinking of a cheap/roomy ATX case with lots of power.[...]

Notice: this text is a mix of several authors plus personal updates; I thank everyone. This time I tried to credit the authors from the /. crowd ;-)

My conclusions are at the end, along with a request for comments.

From the /. crowd:

General advice

One big growable partition: take a bunch of disks and turn them into a RAID5 array. Make a volume group (LVM on Linux) and add the RAID array to it. Create a logical volume on top of it and format it with a filesystem that can be grown.
When you get new disks, simply create a new RAID5 array, add it to the volume group, and grow the FS onto the new space.
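
A minimal sketch of that growth step, assuming the new disks are /dev/sdd, /dev/sde and /dev/sdf, the volume group is called nas, the logical volume is data, and the FS is XFS mounted on /mnt/data (all names are just examples):

  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdd /dev/sde /dev/sdf
  pvcreate /dev/md2                     # turn the new array into an LVM physical volume
  vgextend nas /dev/md2                 # add it to the existing volume group
  lvextend -l +100%FREE /dev/nas/data   # give the new space to the logical volume
  xfs_growfs /mnt/data                  # XFS grows while mounted; ext2/ext3 would use resize2fs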

You don't want everything on one big RAID0; I lost 200 GB of data that way. I can say I'll never make that mistake again.

FileSystem type

Common Linux file systems (ext, reiser, etc.), with the exception of XFS, contain critical data-losing bugs on file systems bigger than 2 TB. This was found to be the case even in the most recent 2.6 kernels.
Tony Battersby recently posted a patch to the LBD mailing list to address the ones he could find, but lacking a full audit, you probably shouldn't use any filesystem other than XFS.
Considering the gravity of these bugs, you might consider using XFS for everything; if the developers left such critical bugs in for so long, it makes you wonder about the general quality of those filesystems.

What of IBM's JFS? We run that here on our .75ish TB file server, and it's been great for us. We've not had any data corruption issues since we deployed it ~1yr ago, and it's survived a number of power outages with no problems. I'm impressed so far :)

Look at TiVos. If they use XFS, it's probably because it deletes even very large files almost instantaneously, whereas most other filesystems take longer the larger the file is. This is a clear advantage if you want to delete a large movie file from the disk at the same time that you are recording TV to that disk.

Structure

There's no reason the NAS box has to have all the files in one file system. Just create multiple partitions or logical volumes. With a NAS you export directory trees across the network, not file systems.

Exporting shares

I feel strange advocating an MS-originated protocol -- but the truth is, serving files via Samba on Linux is going to be the best-performing[1], most-compatible remote file system available. [1] Samba beats the MS implementations of SMB/CIFS. No guarantees about Samba vs NFS, GFS, Coda, whatever.

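As a rough illustration (share name, path and user are invented), the whole export boils down to a few lines in /etc/samba/smb.conf:

  [storage]
     path = /mnt/data
     read only = no
     valid users = advid

Add the user with smbpasswd -a advid, check the config with testparm, then restart Samba.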

RAID or not RAID

Regarding RAID, it's been my experience working at The Archive that RAID is often more trouble than it's worth, especially when it comes to data recovery. In theory, recovery is easy: you just replace a bad disk, it rebuilds the missing data, and you're good to go. In practice, though, you will often not notice that one of your disks is borked until two disks are borked (or however many it takes for your RAID system to stop working), and then you have a major pain in the ass on your hands.
At least with one filesystem per disk, you can attempt to save the filesystem by dd'ing the entire raw partition contents onto a different physical drive of the same make and model, skipping bad sectors, and then running fsck on the good drive. But if you have one whopping huge 2.4TB filesystem, then you can't do that trick without a second 2.4TB device to dd it all onto, and even if you have that, it's probably going to be copied over the network, which makes an already slow process slower.
If you can stomach it, you might just want to make one filesystem per hard drive and NFS (or Samba, or whatever) export each of your six filesystems separately.
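
For the record, the dd trick he describes looks roughly like this (assuming /dev/hdc1 is the failing partition and /dev/hdd1 a same-sized partition on the replacement drive; names are examples):

  dd if=/dev/hdc1 of=/dev/hdd1 bs=4k conv=noerror,sync  # keep going on read errors, pad bad blocks with zeros
  fsck -y /dev/hdd1                                     # then repair the copy, not the dying disk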

On the contrary:

Saying that about RAID is insane.
See mdadm/mdmonitor to get a mail as soon as there is a failure.
Personally, I would recommend setting up Nagios or some other monitoring software. Every time something goes wrong on a machine, we write a script to monitor that. Now very few things go wrong unnoticed.
I'd much prefer that to not having a RAID array. We've used that system (*knock*, *knock*, *knock*) for 4 years, and with about 5 TB of filesystems at work, we've never ever lost a RAID'ed filesystem. We have lost several incredibly important filesystems that weren't RAID'ed.
If you have spare drives around, you can configure mdadm to automatically add them into the system. Unlike the standard md tools, you can have one spare for any number of md arrays.
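
For example (device names and mail address are made up), /etc/mdadm/mdadm.conf can look like this, with both arrays sharing spares through the same spare-group:

  MAILADDR root@localhost
  ARRAY /dev/md0 devices=/dev/hde,/dev/hdg,/dev/hdi,/dev/hdk spare-group=nas
  ARRAY /dev/md1 devices=/dev/hdm,/dev/hdo,/dev/hdq,/dev/hds spare-group=nas

Run mdadm --monitor --scan --daemonise (or your distro's mdmonitor service) and you get a mail on any failure; a spare attached to either array can be moved to the other one automatically.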

Beware of some misleading "advice": "RAID 5 is about as fast as RAID 0 on reads..." is fine, but "...the bottleneck on writes is the parity calculation, not access time for the drives." is false:
Even the paltry 366 MHz Celeron in my fileserver can perform parity calculations at nearly 1 GB/sec. The bottleneck with RAID5 most certainly *is* the physical disk accesses (assuming any remotely modern hardware).
I would suggest using a motherboard with multiple PCI buses. Basically, look for something that's got two (or more) 64-bit PCI-X slots, as these boards nearly always have multiple PCI buses.
Also, putting multiple IDE drives on one channel will destroy performance.
Using RAID50 instead of RAID5 is pointless.
Just buy yourself some four-port IDE controllers, put one drive on each port and use Linux's software RAID to create two four-disk RAID5 devices (or one 8-disk device if you prefer). Then put LVM over the top to make the space more manageable. If you've got the hardware resources, make sure each disk controller is on its own PCI bus, or at the very least sharing it with something inconsequential (like the USB controller or the video card).
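
A sketch of that layout, assuming eight IDE drives, each alone on its own channel (the hdX names are just examples):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/hde /dev/hdg /dev/hdi /dev/hdk
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/hdm /dev/hdo /dev/hdq /dev/hds
  pvcreate /dev/md0 /dev/md1        # both arrays become LVM physical volumes
  vgcreate nas /dev/md0 /dev/md1    # one volume group spanning them
  lvcreate -l 100%FREE -n data nas  # one big logical volume (or several smaller ones)
  mkfs.xfs /dev/nas/data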

External USB / FireWire enclosures

The state of external enclosures, USB chipsets and firewire chipsets is a sad thing.
I had to go through 3 different USB chipsets (different motherboards) before my external enclosure would write data without random corruption.
FireWire's no better, either. I had an Adaptec FireWire card (Texas Instruments chipset, I believe) and it worked with my external drives, yet after 5 or 10 minutes it would randomly drop the drive and corrupt data.

Testimonial

I did this a while back (3+ years ago, so it's obviously not 1 TB).
My fileserver runs 24/7 and has been doing that for about 3 years (minus downtime for moving).
I use 4 40GB SCSI drives in a RAID 5 configuration, using Linux software RAID. (Obviously I would use large IDE drives now, but these were the cheapest per GB at the time, and I already had the SCSI controller lying around.)
This gives me about 136GB of usable space. The partition is running ext3 as the filesystem. The CPU is a Pentium II 450 and it has 256MB of RAM. It runs on a Tyan dual motherboard with built-in 10/100 and SCSI.
The server is running an older RedHat release with no GUI, upgraded to Kernel 2.6.8.1.
The RAID is shared on the network using Samba.
Read performance is decent, around 5-7 MB/s, which is pretty good on a 100 Mbit link. Write speed is slower, around 3-5 MB/s.

Misc

When you're dealing with that much storage, you really need to categorize your files into what needs to be backed up and what doesn't.
If you use Linux, LVM will become your new best friend. Also think about noise, power use, heat and airflow.

Don't forget to enable S.M.A.R.T. drive monitoring.
I run a lot of software RAIDs, and with smartctl no drive crash has ever surprised me. I always had the time to get a spare disk and replace it in the array before something unfunny happened.
Do a smartctl -t short /dev/hda every week and a -t long every month or so...
Read the project's home page: http://smartmontools.sourceforge.net/
Software RAID works perfectly on Linux... and combined with LVM, things get even better.
A number of people also recommended MDADM [freshmeat.net] for building and maintaining software RAID systems on Linux.
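
A crontab sketch of that schedule (the drive names and times are just examples), e.g. in /etc/crontab:

  # enable SMART once per drive:  smartctl -s on /dev/hda
  0 2 * * 1  root  /usr/sbin/smartctl -t short /dev/hda
  0 2 * * 2  root  /usr/sbin/smartctl -t short /dev/hdc
  0 3 1 * *  root  /usr/sbin/smartctl -t long /dev/hda
  0 3 2 * *  root  /usr/sbin/smartctl -t long /dev/hdc

smartd from the same package can run the tests and mail warnings by itself, if you prefer a daemon over cron.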

This won't be the best solution noise-wise, but it will extend the drives' lifetime: cut extra holes in the case and build an airflow tunnel to help cool the drives. I measured a drop from 46C to 25C with a 12 cm Nexus low-speed fan.

Some small commercial solutions

Device for Samba-sharing a USB drive, $100.
You need to add a USB drive, or a drive plus a USB adapter, up to 8 of them.

Rebyte, $150.
A simple flash Linux distro with a converter board that plugs into an IDE slot. Supports all the standard RAID setups. I recommend investing in cooling for the hard drives -- not something you want to have fail on a NAS system.

Credits:

Hast zoeith GigsVT HoneyBunchesOfGoats sarahemm booch richie2000 -dsr- Keruo TTK Ciar ComputerSlicer23 drsmithy delus10n0 tchuladdiass Winter beegle

*** advid.net Conclusion ***

Some small commercial solutions are worth a look -- for the lazy or hurried -- but a real DIY setup would be:

An ATX PC tower with a Linux 2.6 distro and the kind of disks you can afford (ATA, SATA, SCSI); low or medium RAM and CPU are enough, and a 10/100 NIC of course. Better performance with one drive per channel.
#1 Use software RAID5 (raidtools2 or mdadm) and LVM.
#2 One or more logical volumes.
#3 A growable filesystem (XFS, JFS, ext3, ext2).
#4 A reliable filesystem (XFS, JFS, stable ext3? or good old ext2?).
#5 Export shares with Samba.
The box: since the NAS can sit outside the rooms you live in, add extra holes and fans to keep the disks cool. Power: a UPS here, of course.

Please, could you comment on the following:

#1 Some pointed out that RAID could be worse than no RAID plus simple copies from one disk to another; I'm thinking of rsyncing locally, or even to some other host on the LAN when it's reachable. What do you think?
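
What I have in mind is roughly this (paths and host name are invented):

  rsync -a --delete /nas/work/ /nas/mirror/work/               # local copy to a second disk
  rsync -a --delete -e ssh /nas/work/ otherhost:/backup/work/  # to another box when it's up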

#2 Different logical volumes for different kinds of files (size, lots of writes or mainly reads, backup needs). Thus we could choose the FS and tune it differently on each volume. From a backup point of view it could be simpler and smarter: think of a small FS holding your most precious files (your work); it could be handled in a first-class way (replicas, multiple backups everywhere, ...). What do you think?

#3 & #4 Any more feedback on XFS, JFS, ext3 or ext2 on kernel 2.6?

Edit 1.1:

I think ZFS from Sun is the best FS for this purpose; too bad it can't run on Linux... yet.

Rev 1.1
