
Cross Platform, Low Powered Home Servers w/ RAID?

Milo_Mindbender asks: "At home I've collected too much data to easily back up, so I've been thinking about RAID5 for a little extra data security. I multiboot my computers for both Linux and Windows, so I really need a RAID solution that makes the data at least readable by both OSes. I don't think this can be done on a single machine (can it?), so I'm looking to put together a Linux home server with RAID5 serving both Samba and NFS. Aside from the usual questions (software/hardware RAID, types of disk to use... etc.), because I live by myself in an apartment I have a few tricky requirements I hope the Slashdot crowd can help me with." How would you set up a RAID5 server to perform Samba/NFS sharing duties without wasting a lot of wattage while it idles?
"I hate to waste electricity, so how can a Linux RAID5 server be set up to automatically spin down to the lowest possible standby power use, then spin back up when a computer accesses it? I don't have a basement, garage or other remote place to put the thing, so it needs to be quiet, or at least not die a thermal death if I lock it in a closet. What's the sweet spot for choosing CPU type/speed, hardware/software RAID controller, motherboard and memory for a home server? Since this is only going to be serving a few machines (and maybe doing router/gateway duty), I'm sure there's a point where adding more CPU horsepower doesn't improve performance much. Any suggestions on motherboards, cases or even complete systems that work particularly well for this kind of small headless home server?"
  • Yep .. (Score:2, Informative)

    by douggmc ( 571729 )
    I use an AOpen i855 motherboard w/ a Pentium M proc. There is also a newer one from AOpen called the i915. I use it as a gateway/firewall/server (using a distro called ClarkConnect .. http://www.clarkconnect.org/ [clarkconnect.org] .. works wonderfully). Low power, quiet, cool ... ** But why NFS? Just use Samba.
    • But why NFS? Just use Samba.

      I think maybe because NFS uses the available bandwidth more efficiently? I'm not really sure. I don't do enough big data transfers to really care.
      • I've used both NFS and Samba. NFS was slow, hard to get set up, and would lock up mounts on a regular basis. The only real trouble that I've run into with Samba is transferring files over 4GB, but I'm not 100% sure that that's a Samba problem.
        • I've used both NFS and Samba. Samba was hard to set up, and never seemed to work quite right. NFS is comparatively easy to set up, just edit /etc/exports! Plus it preserves UNIX permissions and usernames, and is ultra-easy to mount on any UNIX (including OS X). Samba on OS X just plain doesn't work, but NFS is great.
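
          For reference, a minimal /etc/exports line of the sort described here (the path and client subnet are placeholders for your own setup; apply it with "exportfs -ra"):

          /srv/data 192.168.1.0/24(rw,sync)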

          If you're using Windows, then Samba is the only answer. But this is slashdot, so I assume that you don't use Windows. In that case, I prefer the 20+ years of testing that NFS has endured (ov
          • In that case, I prefer the 20+ years of testing that NFS has endured...

            I agree that NFS has been around longer, but that doesn't necessarily make it better. Have you never had a mount lock up on you that couldn't be recovered? It happened regularly to me. Doing research to fix the problem led me to the real reason that I selected Samba - the community behind it. Say what you will, but there sure seem to be a lot more people running Samba than NFS these days. There doesn't seem to be an equivalent to

            • >Have you never had a mount lock up on you that couldn't be recovered?

              Here's a tip for NFS mounts. Use the option "-o intr,soft". This will allow you to kill processes that have a file open on a hung NFS mount, thereby allowing you to umount the filesystem. NFS uses "hard" mounts by default, in which case a process won't get a return from a read or write system call until that call is successful, which has the side effect of causing processes to hang when the NFS server goes away. Soft mounts will al
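
              For example, with placeholder server and mount-point names, that looks like:

              mount -t nfs -o intr,soft fileserver:/export /mnt/data

              or as an fstab entry:

              fileserver:/export /mnt/data nfs intr,soft 0 0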
              • Thanks for the tip. This is definitely preferable to rebooting the box, which is never the right solution. :-)
          • Samba on OS X just plain doesn't work

            This is surprising news. I'll have to stop using it, then.

    • In my experience Samba has really crappy performance.

      Using 100 Mbps ethernet the transfer rate is ~4.5 MB/s with Samba and ~11 MB/s with NFS.

      There are also the weird cases when transfer rate drops to ~200 kB/s with Samba. I've only seen that happen when transferring between Windows and Samba though. Between two Samba boxes the performance seems to be consistent.
  • Hardware RAID (Score:3, Interesting)

    by darkjedi521 ( 744526 ) on Friday November 25, 2005 @11:53PM (#14116643)
    If you have a true hardware controller, the RAID will be platform agnostic and neither know nor care what OS is accessing it. Anything software based (most onboard stuff) is going to be tied to a specific platform.
    • Don't use RAID to begin with. It's needlessly expensive for a home server and introduces an unnecessary point of failure. Maybe I have bad luck, but I have experienced many more RAID controller failures than I have hard drive failures. I even once got redundant RAID controllers, and the controller-controller failed. Let's say you were willing to spend $500 on this RAID solution: rather than do that, spend $250 on improving your backups and pocket the difference.

  • by MerlynEmrys67 ( 583469 ) on Saturday November 26, 2005 @12:01AM (#14116678)
    Pick an old slow CPU; it really doesn't matter at 100Mbps speeds. If you were going multi-gigabit, well, then maybe. However, don't skimp on RAM - put as many 1 GB sticks in it as you have memory slots (2 GB sticks are too expensive right now, but maybe).

    Go with slower hard drives, i.e. 7200 RPM drives, maybe slower, and you won't have the heat problems. However, you might want to look into RAID 15, so if you can get a system that will hold 6 drives, even better.

    Now remember: trade away CPU power and raw disk speed for the thermal/power savings.

    • > Pick an old slow CPU; it really doesn't matter at 100Mbps speeds

      Amen.

      My home server is a Pentium 233MMX, and has no problem saturating my 100BaseT network.

      If you're going for low power/low noise, there's a lot of room to underclock any modern CPU.
    • "However, you might want to look into RAID 15, so if you can get a system that will hold 6 drives, even better."

      Raid 5 will work with 3 or more drives, and only one drive's worth of data is unusable regardless of the number of drives in the array. Raid 15, if I understand it right (I'm much more familiar with raid 5), would set aside 4 drives' worth as unusable in a 6-drive array.

      If I have a bunch of 100GB drives:

      a 3 drive raid 5 gives me 200GB of usable space.
      a 6 drive raid 5 gives me 500GB of usable space.
      a 6 drive ra

      • No, RAID 15 is simply a set of mirrored (RAID 1) RAID 5s. So I guess you are correct in that 4 drives are redundant - but it really helps with redundancy and read performance (with the correct hardware, of course). So as long as you don't lose 2 drives on EACH mirror set you are fine (you can lose all three of one mirror set, and one drive on the other, and you are still fine).
      • I wonder if more CPU power might be useful on the occasion that a drive needs to be rebuilt - i.e. when a drive has failed and it switches to a spare. This period of time is one in which the whole array is vulnerable and can't survive another disk failure, so the faster the spare is reconstructed, the better. With s/w raid, CPU performance is not insignificant, I'd bet.
  • Some Ideas (Score:4, Interesting)

    by Bios_Hakr ( 68586 ) <xptical@gmEEEail.com minus threevowels> on Saturday November 26, 2005 @12:02AM (#14116682)
    Grab one of the Via MoBos. They'll have at least one PCI slot, onboard video and NIC, and maybe even sound if you look around.

    Then grab a PCI SATA card. It won't need RAID capability, just a ton of SATA ports.

    Attach a smallish hard drive to the master onboard PATA port and set a CDROM on the slave on the same channel. Install your SATA card and attach some big-assed SATA drives.

    Install Debian to the PATA drive and then remove the CDROM. Disable, in BIOS, everything you won't be using.

    Once you are in Debian and everything works, use 'mkraid' to initialize the SATA drives in a RAID5 config. Mount that under /mnt/storage and then use Samba to share it across your network.
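
    The Samba side is just a short stanza in smb.conf; a minimal sketch (the share name and settings here are an example, not a known-good config):

    [storage]
    path = /mnt/storage
    read only = no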

    Some might say that RAID5 will be too slow. But, across a network, chances are the wire will be saturated before the hard drives hit the sustained transfer rate. If you are concerned about performance, throw a Gig-E NIC in there and use RAID0+1 or RAID3.

    I'm not sure how well Linux can deal with suspending the hard drives in a RAID controller during inactivity. If the kernel can handle it, use something like 'hdparm' to sleep the drives when they aren't in use.
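
    For the sleep part, hdparm's -S flag sets the drive's standby timeout in units of 5 seconds (the device name is a placeholder):

    hdparm -S 120 /dev/sda # spin down after 10 minutes of inactivity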

    Good luck, man...
    • Re:Search.. (Score:1, Funny)

      by Anonymous Coward
      Slashdot has a search? What does it do?
    • Re:Search.. (Score:4, Insightful)

      by egarland ( 120202 ) on Saturday November 26, 2005 @01:40AM (#14117177)
      The poster asks different questions than the linked articles raise, and they are valid questions.

      Large scale, redundant, and able to be put in a closet are needs unmet by today's storage designs, and likely to be as common tomorrow as wireless Ethernet is today.

      So.. basically.. snoo snoo off. :)
  • by TheWanderingHermit ( 513872 ) on Saturday November 26, 2005 @12:05AM (#14116696)
    First, I have to say I'm truly awed that you have that much data. You must really love collecting pr0n -- er, have a lot of sound and video files.

    I recently had to set up two new servers. One is for business, and one is for personal data. For both, I used RAID 5. They run NFS and Samba, with different directories shared as needed to other systems. RAID 5 is EXTREMELY simple to set up (it's a one line command, once you install mdadm, which, on Debian, installed like a dream), and I'd just suggest Googling for mdadm and tutorial. You'll get several tutorials. There's really no need to pay for hardware RAID cards on Linux (unless you're using an old, slow system). Besides, until you get into the range of something like $300, the RAID cards all do the work through drivers anyway, so you might as well just get a cheap ($10-$20) PCI IDE Controller card to add to your existing IDE channels. Just make sure it works on Linux and is NOT Adaptec (they fsck with the drive order).
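
    For reference, that one-liner is along these lines (device names and the md number are placeholders for your own disks):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda1 /dev/hdc1 /dev/hde1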

    On both my systems, all the drives are the same size and model number. I figure you can't always tell if a 160GB drive is 160GB or 140GB, and I didn't want to mess with that. RAID 5 takes 3 drives, but with mdadm, you can add a spare for failover (and the monitoring daemon will e-mail an account on that system in case of failure, so you have a warning to replace the bad drive). My only concern about using the same model for all drives is that there may be a flaw in that model. I found drives that were given a large number of good reviews at NewEgg.

    You can also add more spares and more devices with mdadm, or replace faulty devices (not hot swappable, unless you have special hardware, and I don't even know if Linux supports that).
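
    In mdadm terms, those operations look roughly like this (device names are examples only):

    mdadm /dev/md0 --add /dev/hdg1 # add a hot spare
    mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1 # drop a failed member
    mdadm /dev/md0 --add /dev/hdi1 # add its replacement

    The failure e-mails come from a MAILADDR line in mdadm.conf plus the monitor daemon (mdadm --monitor --scan).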

    One last note on mdadm: when you first set up a RAID 5 array with it, you'll get an immediate warning of something like a degraded event. This is normal. I think (can't remember details) mdadm and the kernel (mdadm is by the person who wrote the RAID code for the Linux kernel) don't do an exact version of RAID 5 and, instead, use something that lets it rebuild on a new drive faster than it would otherwise.
    • by swillden ( 191260 ) <shawn-ds@willden.org> on Saturday November 26, 2005 @02:34AM (#14117414) Journal

      You can also add more spares and more devices with mdadm, or replace faulty devices (not hot swappable, unless you have special hardware, and I don't even know if Linux supports that).

      Here's another tip: If you're using Linux software RAID, carve your drives into multiple partitions, build RAID arrays over those, then use LVM to weld them into a larger pool of storage. It may seem silly to break the drives up into partitions just to put them back together again, but it buys you a great deal of flexibility down the road.

      Suppose, for example, that you had three 500GB drives in a RAID-5 configuration, no hot spare. That gives you 1TB of usable storage. Now suppose you're just about out of space, and you want to add another drive. How do you do it? In order to construct a new, four-disk array, you have to destroy the current array. That means you need to back up your data so that you can restore it to the new array. If there were a cheap and convenient backup solution for storing nearly a terabyte, this topic wouldn't even come up.

      If, instead, you had cut each 500GB drive into ten 50GB partitions, created ten RAID-5 arrays (each of three 50GB partitions) and then used LVM to place them all into a single volume group, when it comes time to upgrade, you will have another option. As long as you have free space at least equal in size to one of the individual RAID arrays, you can use 'pvmove' to instruct LVM to migrate all of the data off of one array, then take that array down, rebuild it with a fourth partition from the new disk, then add it back into the volume group. Do that for each array in turn and at the end of the process you'll have 1.5TB, and not only will all of your data be safely intact, your storage will have been fully available for reading and writing the whole time!
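
      The initial welding step is plain LVM; roughly, with illustrative names and sizes following the example above:

      pvcreate /dev/md0 /dev/md1 # ... one per array
      vgcreate vg /dev/md0 /dev/md1 # ... listing all ten arrays
      lvcreate -L 900G -n storage vg # leave about one array's worth unallocated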

      Note that this process isn't particularly fast. I did it when I added a fifth 200GB disk to my file server, and it took nearly a week to complete. A backup and restore would have been faster (assuming I had something to back up to!). But it only took about 30 minutes of my time to write the script that performed the process and then I just let it run, checking on it occasionally. And my kids could watch movies the whole time.

      For anyone who's interested in trying it, the basic steps to reconstruct an array are as follows. This example will assume we're rebuilding /dev/md3, which is composed of /dev/hda3, /dev/hdc3 and /dev/hde3 and will be augmented with /dev/hdg3.

      • pvmove /dev/md3 # Move all data off of /dev/md3
      • vgreduce vg /dev/md3 # Remove /dev/md3 from the volume group
      • pvremove /dev/md3 # Remove the LVM signature from /dev/md3
      • mdadm --stop /dev/md3 # Stop the array
      • mdadm --zero-superblock /dev/hda3 /dev/hdc3 /dev/hde3 # Remove the md signatures from the old member partitions
      • mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/hda3 /dev/hdc3 /dev/hde3 /dev/hdg3 # Create the new array
      • pvcreate /dev/md3 # Prepare /dev/md3 for LVM use
      • vgextend vg /dev/md3 # Add /dev/md3 back into the volume group

      In order to make this easy, you want to make sure that you have at least one array's worth of space not only unused, but unassigned to any logical volumes. I find it's a good idea to keep about 1.5 times that much unallocated. Then, when I run out of room in some volume, I just add the 0.5 to the logical volume, and then set about getting more storage to add in.
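
      Growing into that reserve is then a two-liner (names and sizes follow this example; the right resize tool depends on your filesystem):

      lvextend -L +50G /dev/vg/storage
      resize2fs /dev/vg/storage # for ext2/ext3, grows the fs to fill the volume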

      • This is a great idea.

        I'm thinking about rebuilding my home server, which is mainly a MythTV backend. Currently I'm using LVM to get bigger partitions than I have drives (my TV partition is at 480GB right now) and was thinking about RAIDing them for added security, but was put off by the fact that you can't easily extend a RAID.

        I'll follow your tip, but will probably add boot and swap partitions to every drive. Not because it's needed on every drive, just to keep the setup consistent over all drives.

        Couple quest

        • I'll follow your tip, but will probably add boot and swap partitions to every drive. Not because it's needed on every drive, just to keep the setup consistent over all drives.

          I did that too. I also had a couple of drives which were actually slightly bigger than 200GB, so I used the extra space for /root partitions (mirrored).

          This will work with those newfangled extended partitions, right? Didn't use those since the days we dual booted DOS and OS/2.

          Yep. In fact, to keep the numbering clean, it's a

          • Your ideas and suggestions are welcome.

            Bear with me - I'm still recollecting parts of my just exploded brain :-)

            I can see you wouldn't want to extend a degraded raid, though. OTOH, if one knows one can reconfigure it later, there's no trouble in just replacing the disk and then reworking one partition after the other, at one's leisure.

            What I'm trying to work out right now is this: I read that the total size of a raid is (number of drives - 1) * (size of smallest drive). Right now there are 4 drives in that computer - 120,

            • I hope you'll excuse me for intruding in the thread, but I find this to be a quite interesting optimization problem. You ultimately want to maximize T, the total size of the LVM of arrays, where:

              T = Sum_{j=1}^{m} [ (N_{j} - 1) * S_{j} ]

              where Sum_{j=1}^{m} is the sum over m arrays, N_{j} is the number of partitions in the jth array and S_{j} is the size of a partition in the jth array. One thing this equation shows us is that each array in the LVM does not need to be the same size! The other constraint in

              • Please ignore item 4; it was scratch work that I forgot to delete! At any rate, with the 3-array LVM I outline above, you get a total 180+180+80 = 440 GB array, with 40 and 90 GB left over on two of your disks.
                • Thanks for the lead. I started with 40GB as the 'ideal' size because it'll fit best into my disk sizes. I realized that even a 2-drive raid is OK, because it can still work - the more partitions, the better, though.

                  I put 3 four-drive arrays at 120GB each, 1 three-drive array at 80, and a 2-drive array at 40, with 50 left. Gets me 480GB usable out of 730.

                  I could even scale down to 20GB partitions - not to optimize usage, but to allow for smaller partitions which will help in extending/moving to another dr

              • Seems a rather roundabout way of doing it. The easy answer is to simply have a number of small partitions on each disk and use multiple partitions to split things evenly. Say you've got an 80GB drive, 2 120GB drives, and a 200GB drive. Let's say we want to make 4 RAID volumes and then group them all into a single LVM volume. Dividing by 4 tells us each partition needs 20GB on the first drive, 30GB on the second and third drive, and 50GB on the fourth drive. 10 multiplies evenly into all of those drives so we
                • Can't do that. The idea of a raid 5 is that one partition is allowed to fail without data loss. As soon as you put 2 partitions onto the same drive, losing that drive will make your raid fail.

                  If you don't care about that, easiest is to forget about the raid and go with LVM directly on the drives.

            • Right now there are 4 drives in that computer - 120, 160, 200 and 250GB. By slicing them up into partitions and taking one partition of each drive into a raid, I'd get the same size as doing a single raid over all drives - a maximum of 3*120=360GB. Losing over half my current diskspace for redundancy.

              Not really. You'd have 360GB in the RAID array(s), but you'd still have the other 40+80+130 = 250GB available to use in other ways. If you wanted to maximize your space, but have redundancy on as much of it

      • If you do something like this, be *VERY* careful how you cut partitions up on disk.

        Essentially, you are doing RAID-0 (striping, no redundancy) of RAID-5 blocks. This means that if a single Raid-5 block goes out (not parity rebuild, but failed), so does all the rest of your data.

        What I do is simply software Raid5, and when I need to expand, I bring my computer and a new disk in to work, dump my data onto a big frickin disk array we have there, reformat the Raid5 with an extra disk, restore, bring it home, vio
        • Essentially, you are doing RAID-0 (striping, no redundancy) of RAID-5 blocks. This means that if a single Raid-5 block goes out (not parity rebuild, but failed), so does all the rest of your data.

          Maybe I'm missing something, but what you're saying doesn't make any sense. How would one of the partition-based RAID-5 arrays fail? The exact same way a RAID-5 array that uses whole disks would fail: two failed hard disk drives. In either case, if you're doing RAID-5 of complete disks or multiple parallel

  • by ka9dgx ( 72702 ) on Saturday November 26, 2005 @12:51AM (#14116930) Homepage Journal
    A file server has one job: to serve files, reliably. You shouldn't care what OS it uses, as long as it's stable. You definitely shouldn't be trying to run more than one OS on it. Get it up and running, and leave it alone.

    However, you've then got all your eggs in one basket... not a good long term situation... you're going to need off-site backup... which is yet another Ask Slashdot question.

    --Mike--

  • The biggest problem with these things is no longer the cost of the hardware, but the cost of running this stuff. Why can't these things just go to sleep when they're idle and wake up when they're needed? I probably use my server 20 or 30 minutes a day, and the rest of the time it just costs me money. If it were a Mac, maybe it would work, but it would cost an arm and a leg. Why is Linux power management so touch and go?

    Has anyone used Solaris 10? ZFS is looking nice; I just wonder about the power management.
  • VIA C3 (Score:4, Interesting)

    by metamatic ( 202216 ) on Saturday November 26, 2005 @01:54AM (#14117244) Homepage Journal
    VIA C3 processor. Socket 370, up to 1GHz. Runs on 11W of electricity. If you get a VIA motherboard, you'll probably find that everything has open source Linux drivers. (I know the EPIA M-series do.)

    Now, anyone know of a socket 370 motherboard that'll take 4 or more SATA drives?
    • Why use a Via C3 when you can use a ULV 1.2GHz Pentium M? They have a TDP of 5W, and I bet they're faster too.

      Of course, there is also the regular LV Pentium M, which hits 1.5GHz at a TDP of 10W.
      • There are a lot more Socket 370 motherboards than Pentium-M motherboards.

        Plus, why pay Intel prices?
        • There are a lot more Socket 370 motherboards than Pentium-M motherboards.

          True, though there are still Pentium M mini ITX motherboards, and of course the Asus adapter lets you use it in any Asus board.

          Plus, why pay Intel prices?

          Because Pentium Ms have much higher performance while drawing less power. The 1GHz C3 is three or four years old; it can't compete because it is simply out of date. Plus, I have a hard time trusting Cyrix-derived cores ever since the horrible Cyrix M2.

          Yes, the Pentium M costs more, but
          • It's a file server. Why does he need a faster CPU?
            • Because software RAID-5 sucks CPU cycles. Because he might want other services on the box.

              And because the Pentium-M draws HALF the power of the C3 while it is busy being faster, and he wants a low power solution. 5W is a nice improvement over 11W, or whatever the C3 was.
              • Software RAID isn't going to be noticeable on a Pentium II, much less anything you can buy today.

                Pray tell, what other services are going to require extra CPU power?
                • Perhaps one would want to run a BitTorrent client on a machine. Some of the better clients such as Azureus are notoriously demanding on the CPU (and memory) front. Perhaps one might use it as a media server, which sometimes involves transcoding the video to a format supported by the output device (MPEG-2 or WMV).

                  You're also mistaken about software RAID's CPU usage. Some googling turns up one user reporting that a 5-disk RAID-5 array used 80% of the processing power of an Athlon 700 to do writes. The Cyrix 3 core is
                  • > Perhaps one would want to run a BitTorrent client on a
                    > machine. Some of the better clients such as Azureus are
                    > notoriously demanding on the CPU (and memory) front. Perhaps
                    > one might use it as a media server, which sometimes involves
                    > transcoding the video to a format supported by the output
                    > device (MPEG-2 or WMV).

                    The question was about a file server. File servers usually don't have a mouse, much less a bittorrent client.

                    > You're also mistaken about software RAID's CPU usage. Doing
                  • A VIA C3 1GHz on an EPIA-M board can happily run Azureus without significant CPU load. Been there, done that. As for video, it has hardware MPEG-2 acceleration and SSE. There's a reason why so many people use EPIA-Ms for MythTV.

                    I run Linux 2.6 software RAID-5 on systems with Pentium II 200MHz CPUs, and stick GB-sized databases on top, all on ReiserFS. The guy whose Athlon couldn't cope was clearly doing something wrong.

                    12W is low power in my book. Sure, maybe a Pentium-M can do it in 5W, but the price diffe
          • I'd rather buy from VIA, who provide open source drivers for their hardware, than Asus, who are disparaging about open source.

            Given that a Pentium II 200MHz can handle RAID-5 without breaking a sweat (I have some seriously obsolete servers running at work), CPU speed really isn't much of an issue for me.

            As for the C3 being out of date, well, yes, that's why there's the VIA C7, which looks like it may leapfrog the Pentium-M in CPU power per watt. http://www.viaarena.com/default.aspx?PageID=5&ArticleID=40 [viaarena.com]
  • As others have mentioned, the CPU is definitely unimportant. I bought a Dell PowerEdge server a few years back with a 2GHz P4. I set up p4-clockmod, and the thing has never come off the 300MHz minimum speed during normal operation.
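
    On a 2.6 kernel that amounts to loading the module and picking the powersave governor, something like this (cpu0 shown; repeat per CPU, and governor availability depends on the kernel config):

    modprobe p4-clockmod
    echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor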

    The most trouble I've had is keeping it all cool. Between the P4 and all the drives, a fair bit of heat is generated. Stopping the drives when not in use would help there, but then I've always been concerned that the extra starts would increase the failure rate. The way I fig
  • I've had a similar problem. I'm using a ShuttleX, but the fan noise of 4 computers in my office is about to drive me crazy.

    You may want to take a look at www.littlepc.com [littlepc.com] - these guys have some interesting low-voltage and fanless systems that could serve as the basis of a good home server system.
  • If you don't want to (or don't have time to) tinker with it yourself, you should consider one of the Infrant ReadyNAS models. It's very easy to set up, reliable, and has a lot of interesting features.

    http://www.infrant.com/products_ReadyNAS600.htm [infrant.com]
    • I've been looking for a readymade solution to this problem for a little while, the best I've found has been the Infrant products. They aren't the most attractive pieces, but check out some of the reviews on them!!
  • hmm (Score:5, Informative)

    by GigsVT ( 208848 ) on Saturday November 26, 2005 @03:49AM (#14117619) Journal
    I'm surprised no one has mentioned that RAID5 is no replacement for backups.

    I guess if it's just porn you got for free or whatever it doesn't matter, but if the data is important you still need some sort of backups.

    RAID protects against:
    Disk Failure

    Backups protect against:
    Disk Failure
    Accidental Deletion
    Malicious users
    Malicious programs
    Filesystem corruption
    Errant program causing file corruption

    RAID won't protect you from any of those other things one bit.
    • It also helps to verify that the backups you made actually work. Some people make backups but don't test whether they work until they need them, and end up finding out that their backups weren't any good.
    • Re:hmm (Score:3, Informative)

      by Homology ( 639438 )
      I'm surprised no one has mentioned that RAID5 is no replacement for backups.

      Indeed. In RAID options for OpenBSD [openbsd.org] you see the following warning:

      While a full discussion of the benefits and risks of RAID are outside the scope of this article, there are a couple points that are important to make here:

      * RAID has nothing to do with backup.
      * By itself, RAID will not eliminate down-time.

      If this is new information to you, this is not a good starting point for your exploration of RAID.

      • * By itself, RAID will not eliminate down-time.

        That's simply misleading.

        Nothing can eliminate downtime. Even if you have backups, restoring from backup means something is wrong, ergo, downtime.

        RAID, by itself, does reduce downtime. It's simply not sufficient for error-proof data integrity.
        • > That's simply misleading.

          Maybe, but some people have a gigantic hard-on for RAID and think it magically solves all reliability problems. Skim any thread here about storage, you'll see a mystical faith in the awesome power of RAID.
  • Check LinkSys NSLU2 (Score:3, Informative)

    by Frodo420024 ( 557006 ) <henrik@fa n g orn.dk> on Saturday November 26, 2005 @08:19AM (#14118233) Homepage Journal
    Given that low power and backup are the main purposes, I suggest you take a close look at the LinkSys NSLU2 [linksys.com]. It takes two external USB2 drives and provides Samba shares for them. It'll automatically pull backups from your network, then back up one of the drives to the other. If you use laptop (2½") drives with the unit, it can supply enough power to them, and the whole setup will use less than 10 watts total. Takes less space, too.

    Having the backup done by normal file copying rather than RAID is not a problem in my view - after all, backup is the purpose, and that's done by the firmware. RAID ain't always ideal: A friend of mine had a nice RAID5 setup in his computer. Then the primary drive got corrupted - and that was immediately mirrored to the second drive! He lost all his data...

    No mention of the NSLU2 is complete without noting that it's eminently hackable [nslu2-linux.org]. :)

  • Get a Buffalo (Score:3, Interesting)

    by eyepeepackets ( 33477 ) on Saturday November 26, 2005 @09:04AM (#14118318)
    Buffalo Technology makes some really nice products, including RAID storage devices.

    I recently bought a single-drive NAS unit with a 300 GB hard drive; I use it for backup/storage for both Linux and WinXP (uh oh). It also has additional tricks like built-in Gigabit Ethernet, an FTP server, a printer server, backup of itself to an attached USB 2.0 drive, and misc. other tricks. Very nice device.

    The main advantage of doing your backups onto a device such as this is the power savings -- this thing uses very little power compared to running an additional PC/server. Doesn't make much noise and generates very little heat. You can get up to 1.5 TB of storage out of one of these for a pretty price.

    Check out the handsome little Buffalos at:

    http://www.buffalotech.com/ [buffalotech.com]

  • If you investigate Network Attached Storage you will find some solutions that are truly agnostic about the computer you connect with (NAS solutions appear over TCP as a storage volume, without the need of a separate host computer) - if you get just a RAID enclosure, you will need to attach it to a host computer. If you use a separate host computer you are better off with Linux as the OS (though you could use a Mac and still get the open-source advantages of Samba, NFS, what have you) and connect the Win bo
  • See this journal [slashdot.org] for information about this topic, gathered on Slashdot.

    I've set up a fileserver in my garage: Linux Mandriva 2005, serving NFS and Samba shares. It has been running for 3-4 months.
    I use EVMS [sourceforge.net] as a professional LVM. RAID 0 or 1 is available, and bad block relocation too. SMART monitoring also runs as a daemon.

    Your main problem for spinning down drives is the filesystem:
    With a journaled FS (recommended), disks will spin up every 10 minutes or so, after some tuning. For me, too, it's still too much and I'd like

    • With a journaled FS (recommended), disks will spin up every 10 minutes or so, after some tuning.

      If a file system hasn't been written to since the last syncing of the journal to disk, why would a journaled file system bother writing to (or reading from) the disks, thus causing them to be spun up?

      • Why? I still wonder.
        But it seems to me that's what happens with the ext3 journal... a kind of periodic checkpoint, no matter the activity.

        To minimize accesses and spin-ups, my fstab entries look like

        /dev/hdd10 /mnt/src ext3 defaults,noatime,nodiratime,commit=600 1 2

        (that's 600 s = 10 min)

        I hope I'm wrong; maybe I've overlooked something that keeps the drive spinning. Of course I won't use a noflush patch for such a fileserver!

  • I run a mini-ITX setup with software RAID. RAID isn't really a backup solution, as other people have mentioned. My theory is that having two copies of a file is better than one. I have a script that copies my critical files over at midnight to a new folder on the server, in addition to storing media.

    Once every month or two, or with the completion of a project, I'll burn an incremental DVD with the data.

    Once a year or ~18 months, I swap out the drives for newer ones. I then move the old drives into storage
  • Backup (Score:2, Insightful)

    by fjf33 ( 890896 )
    I have three computers at home, plus a KuroBox running my mail server, and LDAP for centralized accounts. I was going to set it up as an NFS server for home directories, but I haven't been able to use AMANDA to back it up. For some reason it hangs xinetd. Anyway, the Kuro takes no power whatsoever and the other machines can be up and running as needed. For backups, what I did was buy the biggest IDE drive I could find and set it up on my machine. I run AMANDA and set it up to back up to that drive; it does compression an
  • Comment removed based on user account deletion
    • I have a Sun E450 (these can be gotten cheap on eBay, but you really have to find the right deal). Upped it to 2GB of RAM and 4x400MHz Ultra2 CPUs. It holds 20 SCSI disks, so getting a bunch in there and RAIDing them is not a problem at all.

      That is a pretty good solution... but it might not fit the submitter's "low-power" constraint :P

    • I used to run an E450 at a previous job with software RAID-5 in it. With UFS logging enabled, it ran pretty well with a 10-way RAID-5 with 9GB disks. That was with Solaris 7; it should be better on later versions of the OS.
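
      (UFS logging is just the "logging" mount option; a sketch of the vfstab line, with placeholder device names:)

      /dev/md/dsk/d0 /dev/md/rdsk/d0 /export ufs 2 yes logging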

      As for the other comments about power, from the handbook [sun.com]:
      Maximum Power Consumption 832 Watts

      So, fairly hefty power draw...

  • OK, RAID5 is c00l, but I'd rather work simple. Important data I store on two different disks (on different computers) and offline. For really important data, a fourth or fifth location is far enough away to protect against anything short of a direct hit by a large asteroid.

    I have an EPIA PD10000 motherboard in my server (Debian, of course). Currently there are 3 PATA disks (one old 60G from a desktop and two quite new 250G), so I have internal capacity to add more disks on the PATA channels. Disks are configured to s

    • Be careful with USB- and FireWire-attached hard drives. Find the controller chip in the product you're attempting to buy and make sure to do proper research on it. A lot of controllers, such as the Prolific PL-3507, have terrible problems, which include delayed write failure errors and eventual loss of file tables. There are firmware updates for some of these chips, but I'm still scared shitless of using them again. I've lost three hard drives on USB 2.0 and 1394 controllers, and I was never able
  • Since this is only going to be serving a few machines (and maybe doing router/gateway duty)

    You may not want your file server doing firewall duty. If it gets rooted, all your files are compromised. In addition, if it fails for any reason, you've lost all your files and your Internet connectivity. That makes Google searching to fix the problem that much harder.

    Consider having a dedicated machine serve as the firewall/gateway/router. If it gets compromised, the intruder will still have another layer to

    • For me, a low-power, quiet 486DX2/66 running OpenBSD has worked well for the past 5 years as a firewall.

      Without sounding snarky or sarcastic, why do you do this? I bought a $40 wireless router a couple years ago and use its built-in firewall with great success. The thing is about the size of 2 packs of cards, has no moving parts, and the 4 LAN ports are perfectly suited to connect my home server, desktop, media PC, and printer. Granted, I only use about half a dozen firewall rules (in addition to a few i
      • I won't pretend to answer for the parent, but I have a similar system to his. My firewall is a Pentium 90 underclocked to 60 MHz with 8 meg (probably overkill), a floppy and no hard drive. Two NICs and a copy of FreeSCO. So far it has served me well. But your question is a common one. So, why do I use it, instead of a commercially available router/firewall thingie?

        Well, I initially did it because I had the parts laying around, and those routers at the time (> 5 years ago) cost a tad more than they do now

        • So, it comes down to a mixture:

          because I had the parts laying around
          those routers ... cost a tad more than they do now
          I can add more NICs
          alter the distribution to add other extras to it


          I'm not really counting the following reasons, because they do not present any advantage or difference over a wireless broadband router:

          I am not limited in the number of ports I can use - just slap a larger switch on the inside NIC (you could plug that same switch into a LAN port on a home router)
          a wireless NIC could be set u
          • Believe me, I have thought about getting a broadband router, especially since they are now so cheap. All of your points are valid. However, every time I almost decide to do so, I take a look at my box I built and say to myself "Why?". Partly because, as I have noted, I am lazy, but also partly because what I have works - it does the job I have it do, it does it effectively and well. Once again, why fix something that isn't broke? Buying a broadband router would entail spending extra money ($30 - $40 I could
  • I currently have a setup that, while it doesn't meet your needs for low power and isn't a RAID'ed system, has been fairly reliable considering its location in my home.

    The system is a P2-300 with 256 MB and a 40 gig drive, running Debian Woody. It currently serves the files via Samba, and it also has Apache, PHP, MySQL, and PostgreSQL installed (for web app dev work). I have things set up so that cron jobs (and in the case of the one 'doze box, a task-manager-run DOS batch file) copy various files from the
