Server Redundancy for a Small Business?

Cliff posted more than 10 years ago | from the disaster-preparation dept.

Businesses 81

SadPenguin asks: "I am currently working for a small company of about 15 people, with one or two workstation/laptop machines apiece. We are looking for a new server solution, as our last one crashed, and lacking any server redundancy, we nearly lost all of our data since our last backup (it was only a few days, but an important few). What kind of server (and redundancy) solution would be appropriate for a company of my size? Most advertisements are for large-scale enterprise serving solutions, but these are costly and excessive for my situation. I'm sure that there is a simple redundant-server technology out there that is a bit less costly, but won't result in any downtime in the event of a motherboard component failing (like we faced this time, when our mysterious surface-soldered VRM failed). So what do you use? What should I use?"


A few recommendations. (5, Informative)

chuckcolby (170019) | more than 10 years ago | (#9338451)

Excellent question!

I actually run a computer consulting firm specializing in small businesses. I'll outline some of the more common recommendations - with what I think is the most important first.

From my experience, the best approach is to layer your defenses. At the very least, I'd REALLY recommend a UPS (I generally assume this is purchased with a server, but it isn't always). Your local power company is only required to provide you with something CLOSE to 120V, and they generally can't keep it consistent enough for power supplies (and electronic componentry in general). Protect your investment; UPSes are generally relatively cheap.

The fact that you've got a backup solution is good, but (as you've seen) not enough. Evaluate it, and see if it's consistent with best practices - i.e., is it a tape (or optical) backup system that is done in rotation and taken offsite by somebody in the company? If not, set that in motion first.

Next, some sort of drive redundancy is in order. At the very least, mirror your drives. I generally recommend RAID5 (or one of its variants), but in very small companies RAID5 often isn't required, affordable, or both. IMO, the jury's still out on the long-term viability of IDE RAID, but I think it looks promising.

Finally, redundant power supplies and NICs (for those of us that are REALLY paranoid ;) ). I've had a couple of servers' power supplies die on me, but the server kept right on ticking thanks to a redundant unit.

If it's affordable to your company, consider hot-swappable server components, as well. This significantly reduces downtime to your coworkers... and expense to your company.

Hope this helps. Good luck!

Oh yeah, FP ;)

Re:A few recommendations. (0, Offtopic)

advocate_one (662832) | more than 10 years ago | (#9338529)

"Oh yeah, FP ;) "

Excellent one as well... anything else that follows will largely be redundant...

Re:A few recommendations. (1)

nateb (59324) | more than 10 years ago | (#9342430)

So the 'R' is Redundant, then?

Some folks will recognize someone else's post from a day or two ago.

/bin/bat /dev/head | /dev/wall

Re:A few recommendations. (2, Interesting)

jmt9581 (554192) | more than 10 years ago | (#9338701)

These are all very good suggestions. Other ideas include high-availability networking tools such as UCARP or HA-Linux. Many of these depend on the specific details of your setup; these recommendations are definitely biased towards Unix/Linux systems. In my experience, having backup systems for your important services is crucial; employee downtime as a result of system failure is a nightmare for any company, especially a small business.

A mysterious surface-soldered VRM failure sounds a bit odd; you might think about going with a rock-solid hardware platform if/when you upgrade your current infrastructure. I've had great success with a Sun E450 for the last two years; it's a rock-solid piece of equipment that's required absolutely no hardware maintenance.

It's difficult to give you more concrete recommendations without knowing more about your setup. Things like hot database backups can be tricky, especially if your IT staff isn't experienced in database backup solutions. You might find value in discussing some ideas with a consulting company. The parent poster seems to know a lot about this, and there are certainly other companies out there. :P

Best of luck to you.

RAID 5? (1)

PatHMV (701344) | more than 10 years ago | (#9338755)

Is there any sort of RAID 5 available in the range of $1000 to $2000? I have a small law firm, and would love to be able to have the redundancy offered by RAID 5. I would think there is a market for a standalone FireWire box that I could pick up. The box could either come with 3, 4, or 5 hard drives, or allow me to pick up my choice of hard drives separately and just plug them in.

Re:RAID 5? (1)

chuckcolby (170019) | more than 10 years ago | (#9339172)

A quick Google turned up the following sites:

PC Pitstop
Adaptec (DuraStor line... a bit beyond your stated price range, though)

Hope this helps!

Re:RAID 5? (1)

ADRA (37398) | more than 10 years ago | (#9339389)

RAID 5 is supported natively in Linux, so all you'd need are the separate drives to add to the virtual array. Just make sure to ALWAYS have every disk available. Rebuilding an array isn't that fast.
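
A minimal sketch of what the Linux-native route looks like with mdadm. The disk names, filesystem, and mount point are invented for illustration; these commands need root and will wipe the disks involved:

```shell
# Build a software RAID5 array from three spare disks (example names).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put a filesystem on the array and mount it like any other block device.
mkfs.ext3 /dev/md0
mount /dev/md0 /srv/data

# After replacing a failed disk, add it back and watch the (slow) rebuild.
mdadm /dev/md0 --add /dev/sdb
cat /proc/mdstat
```

The rebuild warning in the comment above is the catch: while /proc/mdstat shows a resync in progress, the array is running degraded and a second disk failure loses everything.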

As for a standalone box, I don't know of anything sub-SCSI, but I imagine someone sells something similar. It'd be really slow, though. FireWire averages out around 14MB/s, and FireWire 2 is still missing critical mass. If in doubt, get a 4-channel FireWire card, 4 drives, and tape them to each other!

Re:RAID 5? (1)

mabhatter654 (561290) | more than 10 years ago | (#9342941)

Adaptec and Escalade (3ware) both make IDE/SATA RAID cards. They're not the cheap $100 ones like Promise makes, but they're much cheaper than the SCSI options and use standard IDE drives. As a matter of fact, most of the midrange "enterprise" data tank solutions utilize a big box of 8-10 disks set up this way, with some extra "sauce" in the mix.

Re:RAID 5? (1)

ADRA (37398) | more than 10 years ago | (#9343278)

Obviously, but the parent wanted an EXTERNAL option. He's looking for a chassis that has multiple drives inside, where the RAID is done outside the machine (or inside Linux's subsystems, as my example listed).

If you look at the external storage solutions at Adaptec, you'll find that there are NO SATA/IDE interfaces. As for internal, you can use SATA hard drives in the setup, but no matter how you slice it, a 12-drive storage array is not a viable option for:
Small businesses ($5k for the chassis, more for the fibre channel, etc.)
People who want to 'carry it away' frequently

Re:RAID 5? (2, Interesting)

pbox (146337) | more than 10 years ago | (#9339710)

I need to point out that your selection criteria should include multiple FireWire ports, with FireWire controllers on both the drive end and the server end. This should add only marginal cost to your setup.

I have a Maxtor FW single-HDD backup solution, but I definitely would not recommend that particular one for a constant-on situation (for lack of ventilation). It seems that when the drive does its thermal calibration, the FW interface hiccups and the ongoing transfer gets interrupted. All is well after disconnection and reconnection. I only notice it when I am running unison on very large directories (30+GB), but if you were serving files off of it, you might get into trouble as well.

The other issue with FW is bandwidth. I am getting about 40-50MBps, which is enough for sustained transfers, but the drive would be capable of 100MBps in bursts (short files cached on the drive). This might be a detriment to file server performance.

I do have to say that I quite like the idea of having 4-6 external FW (or better, FW800) disks hooked up and running a virtual RAID5. That way a failed disk could be very easily hot-swapped. It might even be much cheaper than having a hot-swap backplane/chassis/server.

Re:A few recommendations. (3, Informative)

llefler (184847) | more than 10 years ago | (#9339806)

While your suggestions are good, some of them might be a little expensive for a company this size, depending on what kind of business they are.

The first red flag I saw was that although they had backups, they were three days old. If the data is worth saving, it's worth doing it right. Full backups on the weekends and incrementals nightly.
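
That weekend-full/nightly-incremental schedule is easy to script with GNU tar's incremental mode. A rough local sketch (directory names are made up; a real job would write to tape or removable media and start a fresh snapshot file each weekend):

```shell
BACKUP_DIR=$(mktemp -d)   # stand-in for a tape or backup drive
DATA_DIR=$(mktemp -d)     # stand-in for the file server's data
echo "original contract" > "$DATA_DIR/contract.doc"

# Weekend: full backup. Removing the snapshot file first makes tar
# record every file from scratch.
rm -f "$BACKUP_DIR/state.snar"
tar -cf "$BACKUP_DIR/full.tar" \
    --listed-incremental="$BACKUP_DIR/state.snar" -C "$DATA_DIR" .

# Weeknight: only files changed since the snapshot get archived.
echo "new draft" > "$DATA_DIR/draft.doc"
tar -cf "$BACKUP_DIR/mon-incr.tar" \
    --listed-incremental="$BACKUP_DIR/state.snar" -C "$DATA_DIR" .
```

The nightly archives stay small because tar's state.snar file remembers what it has already seen; restoring means extracting the full backup, then each incremental in order.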

Ok, the redundant stuff... power supplies, hot swap drives, RAID5. You're approaching a $10k configuration. That, BTW, would have still gone down because they had a motherboard failure. And since they needed backups, their drives were corrupt, so the RAID probably would have been too.

Really, though, this whole question is about designing their new server without any idea of the load required. Based on the info that is available, I think I would lean towards purchasing two servers. Make them a little smaller than what you would purchase if you only had one, and divide the load between them. If one fails, you can temporarily transfer to the remaining one until you can get it fixed. You could even go so far as to move drives and RAM temporarily if necessary. Just make sure the equipment is server rated. E.g.: my Dell 400SC PowerEdge servers are rebadged desktop machines; my Compaq ProLiant 800s are definitely not. Even good equipment is getting pretty cheap if you have reasonable requirements.

Above that: daily backups, and the UPS equipment like you suggested (just keep in mind that UPSes are consumables). And possibly IDE RAID-1. Drives are cheap, and 15 users shouldn't need the performance of SCSI.

Re:A few recommendations. (2, Insightful)

afidel (530433) | more than 10 years ago | (#9345720)

Forget incrementals; if the data is worth backing up, it's worth backing up correctly. Your main expenses will be in the drive/changer and manpower. Tapes are kind of expensive, but not as expensive as losing your data: 90% of businesses that suffer a catastrophic loss of data go out of business within 5 years. As for server solutions, a pair of 2U Dells with RAID5 and redundant PSUs can be had for under $10K; unless this is an unprofitable company, that is cheap. I work with quite a few companies of 35-50 employees with a lot more servers than just 2.

Re:A few recommendations. (2, Insightful)

llefler (184847) | more than 10 years ago | (#9348078)

If I remember correctly, the survey that I read was 90% of small businesses....

And $10k is a huge investment for a company of 15 employees if they aren't technology based. Most would start to squeal long before you hit $5000. Sometimes you just have to be happy that the 'server' isn't the owner's PC.

Re:A few recommendations. (0)

Anonymous Coward | more than 10 years ago | (#9353247)

99% of unproven statistics are made up.

Re:A few recommendations. (1)

phorm (591458) | more than 10 years ago | (#9356546)

We have a server here with dual-200GB drives in RAID-1. It's primarily used to backup several offsite servers on a nightly schedule. Assuming that there was space elsewhere in the building, putting another server in there with RAID-1 drives and doing networked backup should be fine.

With 'nix, even software RAID-1 works well; RAID-5 is also a choice. It doesn't take an insanely fast CPU (or a monitor, for that matter), so you can manage a multidrive backup machine for under $2000, or even under $1000.

Re:A few recommendations. (1)

llefler (184847) | more than 10 years ago | (#9360857)

We have a server here with dual-200GB drives in RAID-1. It's primarily used to backup several offsite servers on a nightly schedule. Assuming that there was space elsewhere in the building, putting another server in there with RAID-1 drives and doing networked backup should be fine.

Let's be clear about what you have here... You've taken a box, stuck a pair of IDE drives in it, and called it a server. While not necessarily a bad solution, it's not in line with the post I replied to, which suggested redundant power supplies, NICs, and RAID (not IDE RAID). I know you have to be talking about IDE, because the largest SCSI drive on Pricewatch is 180 gig and they run $600 each. Kind of hard to build a $1000 server when the drives would be $1200.

You don't need 'nix (sic) to have software RAID. NT4 handled it just fine. But rather than getting real cheap I'd recommend getting something like a Promise IDE controller and letting it handle the IDE RAID in hardware. That frees up the CPU to do real work.

If all you were doing is building a backup box like you suggested: Dell 400SC - $400, Promise TX4000 - $125, and two 200GB IDEs - $120 ea. Easily under $1000, but not a solution I personally would recommend as a critical server. At the very least, there has to be some kind of offline backup capability. Tape, removable drive, something...

Re:A few recommendations. (1)

phorm (591458) | more than 10 years ago | (#9362680)

Actually, the server is running a PROMISE raid controller, and dual 2200 CPUs, etc. The point was that one can just as easily make a cheap "backup machine" that will handle offsite backups and/or be able to swap in the event of an emergency. Other servers still do the primary work, but this one makes sure that if one of them goes down - the data is still around.

Re:A few recommendations. (0)

Anonymous Coward | more than 10 years ago | (#9362216)

Great suggestions, but server availability and data backup are two different things, and anyone on a budget should consider them separately. In general, protecting existing data from getting lost should be a higher priority than keeping the server running for new data entry. A small business can normally do things on paper if the server is down for a PSU, but no amount of server availability can recover lost data that was never backed up. Basically, don't worry about two NICs and dual power supplies until you have a data backup solution going. There are internet solutions for backup (provided you have the bandwidth), and even cheap external USB HDDs if need be.

If truly on a limited budget, concentrate on hardening your storage and backup solution first, and if money allows, start expanding to provide redundancy in availability.

It depends (1)

SealTit (606480) | more than 10 years ago | (#9338489)

It all really depends on how much money you want to spend. You could roll your own dual-Opteron server, throw in a bunch of small (20-40 GB) hard drives, and RAID 5 'em. That would be my solution. It would cost you like 2 grand if you do your homework and get a good hardware RAID card. 3ware makes good stuff that's compatible with Linux.

Daily backups (4, Insightful) (24157) | more than 10 years ago | (#9338503)

>> we nearly lost all of our data since our last backup (it was only a few days, but an important few)

Daily backups !

general recommendations:

quality server (Dell/HP/etc)
NO ide drives!
SCSI in software raid5
minimum software install (e.g. no compilers)

get second 'devel' server to test/compile software before using on production server
If it is not broken, don't fix it; as in, do your screwing around on the devel server.

Re:Daily backups (1)

itwerx (165526) | more than 10 years ago | (#9338637)

Forget software raid. The extra money you spend on hardware raid will be immediately recovered the very first time you have a drive fail.

Re:Daily backups (1)

Drakon (414580) | more than 10 years ago | (#9339478)

Forget hardware raid. The extra time you spend on software raid will be immediately recovered the very first time you have a hardware controller fail and corrupt all the drives at once.

Re:Daily backups (2, Interesting)

itwerx (165526) | more than 10 years ago | (#9345171)

Heh, touché! :)
But I've been in the industry for over fifteen years with thousands of clients, and the last time I had a hardware RAID do that was almost six years ago.
Software RAID, on the other hand, inevitably takes more time/effort/energy to recover from failures (especially if you're so foolish as to use what's provided by Win2K!).
Hardware hot-swap RAID is easy: just change drives and nobody knows anything happened.
Software RAID usually requires at least a reboot, if not fiddling with system files.
Ergo, the labor cost of software RAID usually ends up being more than the component cost of hardware RAID.
Not to mention the performance difference between software and hardware RAID is like night and day! (Just remember to get the little battery pack option if you decide to use write cache on the RAID card :).

Re:Daily backups (1)

mnemoth_54 (723420) | more than 10 years ago | (#9347864)

My basic rule of thumb is this:

If they can afford to get a top end RAID5 setup from a quality vendor, it's the better choice. You can be relatively assured that when that raid controller dies in a few years, you can get another card that can import your config and recover your data.

If you are trying to do things on the cheap and cannot get a top-of-the-line RAID card, software RAID provides the hardware independence to upgrade cards and drives as needed, with whatever is cheapest at the time they fail. There is certainly a speed penalty for software, but that's the price you pay.

The hardware raid setups in between that can't hotswap and auto rebuild, just aren't worth the cost/flexibility tradeoff IMHO.

Re:Daily backups (1)

DrZaius (6588) | more than 10 years ago | (#9340716)

SATA is the only way to go.

SCSI is stupidly expensive -- I can build a 750GB raid 5 (4x250GB) for the same price as a 140GB SCSI drive. Both of these solutions are roughly $1000CAD. Even throwing in a 3ware SATA controller is still cheaper than doing a software Raid 1.

I also think that everyone is going a little overboard -- I'm pretty sure the original poster does not need redundant servers running Linux HA, plus development and production servers. They can probably afford downtime over hot-swappable memory DIMMs and CPUs. I'm also pretty sure that they don't need E450s, as one poster suggested.

My guess is that at a 15-person shop that needs to ask such questions, they probably have a file server. They probably don't have a domain controller. I bet they outsource their email to their ISP and colocate their website for cheap somewhere.

My recommendation is to poke around the community and find a small grey box company that offers IT services for small businesses. Make sure you talk to some of their clients and make sure they offer similar services to what you are looking for.

They can then build you servers with quality parts (not the crap they put in HPs or Dells). They can also do it a lot cheaper. They should show you how to back up your servers using something like Backup Exec. Plus, they should offer some sort of warranty: if it breaks, they should be able to come on site to fix it for you.

That's all.

Re:Daily backups (1)

Alex (342) | more than 10 years ago | (#9343814)

>> we nearly lost all of our data since our last backup (it was only a few days, but an important few)

Daily backups !

Why not hourly backups? Hourly incrementals + daily full backups?


My small company solution (4, Interesting)

Bistronaut (267467) | more than 10 years ago | (#9338523)

I work for a small company that only has three full-time employees (including me). I use two Debian boxes (cheap-o machines that are just retired desktops with some big cheap IDE hard drives in them) running Samba, with the rsync mirroring technique described by Mike Rubel.

One box is the "live" server and the other mirrors the live server every night. If the main server dies (which happened once - power supply failure), I can "promote" the backup server by changing one line in its Samba configuration. As a bonus, the backup server keeps "snapshots" back a week or two.

Re:My small company solution (2, Interesting)

questforme (542772) | more than 10 years ago | (#9351954)

I use the rsync solution for my home computer (although to an external hard drive, not a second computer) and love it. As a matter of fact, I had an unplanned test last night and it rebooted without a hitch.

If you really wanted to save some more money, you could use an external drive to rsync to, although you would have to get your server fixed before you could copy the rsync'd files back over.

Re:My small company solution (0)

Anonymous Coward | more than 10 years ago | (#9356024)

rsync is the way to go.
From the server (e-smith) we've made a one-to-one copy on another hard disk, in a removable bracket. Put this in another machine, mount it at /mnt/rsync-backup-machinename. Run the rsync (excluding all kinds of stuff you don't want synced anyway), and in case of a disaster, just swap the bracket, reboot, and off we go.

Re:My small company solution (1)

redux (82428) | more than 10 years ago | (#9390859)

For an incredible implementation of the aforementioned Mike Rubel rsync trick, check out rsnapshot. It is Perl-based, quite robust in the error-checking arena, and very configurable. Plus, the creator is very open to suggestions and is quick with the updates. There is even a deb repository:

deb binary/


Simple, cheap answers for redundancy... (3, Interesting)

stienman (51024) | more than 10 years ago | (#9338524)

I do three types of redundancy/backup at my sites:

* Mirrored Raid in all servers
* A regular workstation with a good, large hard drive that copies the server data to itself nightly
* A DVD-RW backup made nightly on yet another workstation, with at least one disc off site - 5 discs, one each weeknight, replaced a few times a year.

In most cases the server RAID (cheap ATA Promise controllers) takes care of 90% of the problems - only one HD goes bad at a time, lightning strikes rarely take out the hard drives at all, never mind both hard drives, etc. Even if the server dies, it's unlikely that the problem affected the HD backup on the other workstation, and it definitely didn't affect the DVD-RW.

However, whenever you get a catastrophic failure in any component in the server, replace the entire thing. If the MB or power supply fails, copy the data to new hard drives, and use the old ones in less critical applications, etc.

Much cheaper than an 'enterprise' solution, and it should be because your application doesn't require such a solution. Use large tape drives in place of the dvd-rw if you must back up a huge amount of data on a nightly basis.

This sort of solution is very tolerant of cheap hardware, so replacing the server later may not be such a major cost.


applications (3, Insightful)

perlchild (582235) | more than 10 years ago | (#9338591)

Depending on the applications you need to have redundant, you might be able to just use a CompactPCI server with redundant hardware in it (this technology, developed for telecom carriers and rather expensive, even allows removal of failed CPUs while the machine is running). That would protect you from component failures, but not from power outages without redundant power, nor from OS failures.
This is a hard problem (NP-hard perhaps, I'm not sure), and you need to have:

List of applications you want to protect

Budgeted amount

What threats you are trying to protect from

What kind of failures you will tolerate (do you need 99.9% uptime? or better? worse?)
For simple applications, like web service, you could bump up a pair of Linux machines, gimmick some replication between the two, and hope nothing goes wrong, if you have a very low budget; you'd probably then spend a fair amount of work debugging "synchronisation problems" later on. For redundant storage, the OpenSSI project is working on highly-available single-image clusters for Linux, in an open source model; they might be your first place to look. It's not, however, something for the unprepared, nor is it something I'd recommend if you also do other tasks for this company: maintaining such a beast will require a significant implementation investment. The good news is that once everything works to your satisfaction, you can probably take a 4-week vacation somewhere with golden beaches and much sun, and let it take care of itself. I can't stress this enough: this is a hard problem. If you really want to do it right, you'll want to surround yourself with qualified people with experience in this field; it's non-trivial, and mistakes can lead to severe data loss.

Our backup system (4, Informative)

fava (513118) | more than 10 years ago | (#9338608)

At my place of work (18 people) I have set up a spare low-end machine (P233) with an 80GB drive as a backup file server. During the day, everything that has changed is copied to the backup server every 15 minutes. The backup file server is configured as read-only, so a user cannot accidentally change anything.

If the main file server goes down, I simply change the configuration to read/write and change the file mapping on the users' machines, and they continue to work. The whole process takes about 10 minutes to reconfigure the server and a couple of minutes per user machine.

As a bonus, I don't delete the intermediate versions of changed files as I update the server. Instead I compress them with unique filenames, so I can recover a fairly complete history of any given file. I have yet to fill up the 80GB drive, so I haven't needed to delete any backups. When the backup drive is full, I will start deleting some of the older versions; I should have room for about 6 to 9 months of backups at 15-minute intervals.

Re:Our backup system (1)

larien (5608) | more than 10 years ago | (#9340251)

Yes, this is exactly what I was thinking. The advantage is obvious: you can get by with very cheap hardware, and failover time is on the order of minutes (although manual intervention is required). The intermediate versions are also useful, I'd imagine; it might be worth reading up on the rsync-with-hard-links backup schemes, which do a similar job.

One thing to add; I really hope you're not relying on the backup machine as your sole source of backups; if you lose the site (fire/flood), you lose all your data. At a minimum, take weekly backups offsite (taking tapes home is acceptable, although make sure tapes are secured), ideally daily.

Finally, you shouldn't need to reconfigure clients; just make the backup server appear to be the main server by changing its IP address and netbios name (if using Samba).

Re:Our backup system (1)

fava (513118) | more than 10 years ago | (#9340665)

I also rsync to an offsite server at my home twice a day; the company pays for my broadband connection. Rsync has been configured to delete any remote files that are no longer on the server, but will only delete 50 files per session.

As well, I copy all files that have changed during the day to a dated directory, and periodically burn those to CD. I end up with about 1 CD a week, going back over 2 years.

All together I have 5 levels of backup.
1) Onsite mirror updated every 15 minutes, with incremental versions saved.
2) Daily incremental backups over the last 2 years, kept on site.
3) Offsite mirror updated twice a day.
4) Old projects taken off the server and burned to 2 sets of CDs, with one set kept off site.
5) Occasionally I burn a snapshot of the file server onto CDs and keep it off site.

I think I am well covered regarding backups.

Re:Our backup system (1)

Tony-A (29931) | more than 10 years ago | (#9346686)

If you intend to survive, that's the way to do it.
You need multiple backups, and they need to be cheap.

First rule of backups is that when you need them, something is not the way it should be and any scheme that assumes everything is normal is quite likely to fail. This means you want any failures of the backup systems to be as independent as possible of failures in the main systems.

Second rule of backups is that every backup except the one that matters was a complete waste of time. Backups need to be cheap and easy.

Multiple bad backups are survivable. A single good backup, if it shares a flaw with the main system, can look extremely reliable except when it is needed.

Yours looks quite good.
The only thing I'd be tempted to add: take an old clunker and a large IDE drive, mirror everything to it, write the date on it, and bury it in the bottom of a closet.

Re:Our backup system (1)

ptudor (22537) | more than 10 years ago | (#9341489)

Instead I compress them with unique filenames.

What's your method for this? I recall something along those lines from the last time I read the rsync man page/docs, but I'm wondering how you go about it.

Re:Our backup system (1)

fava (513118) | more than 10 years ago | (#9341811)

I don't use rsync for the backups over the local network. I maintain a list of files on the server and compare file modify times using Perl. If a file does not exist on the backup server, it is simply copied. If there is an existing file, then the existing file is renamed before the new file is copied. The unique filename is simply the file renamed with a Unix time value (i.e., seconds since Jan 1, 1970) while maintaining the file extension. The actual compression is a cron job that runs overnight.

For example:
M1530-2.1.dwg becomes M1530-2.1.(1086400701).dwg
which then becomes M1530-2.1.(1086400701).dwg.gz
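
The rename-before-overwrite step can be sketched in shell just as well as Perl (the .dwg filename and paths here are invented for illustration):

```shell
WORK=$(mktemp -d)
f="$WORK/M1530-2.1.dwg"
echo "rev 1" > "$f"

# A changed copy of the same file is about to arrive: archive the old
# version first by inserting the epoch time before the extension.
stamp=$(date +%s)
ext="${f##*.}"                  # "dwg"
base="${f%.*}"                  # path without the extension
mv "$f" "$base.($stamp).$ext"   # e.g. M1530-2.1.(1086400701).dwg
echo "rev 2" > "$f"             # the new version takes the original name

# The overnight cron job then compresses the archived versions:
gzip "$base.($stamp).$ext"
```

Because the timestamp is seconds since the epoch, plain lexical sorting of the archived names also sorts them by age within a day.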

don't fall into the RAID trap (4, Insightful)

zaqattack911 (532040) | more than 10 years ago | (#9338736)

I've been a system admin for a production webserver for a few years now, and I can tell you this.

99.9% of the time when I've had to retrieve data from backup, it was because of human error: i.e., someone deleted something they shouldn't have, or moved the wrong directory to the wrong place, or an error was made during a software upgrade, etc.

The rest is due to random hardware failure, which would be a reason for using RAID. But pouring thousands into redundant servers and disks is overkill for a biz your size.

If someone accidentally wipes out a folder or data, your RAID disks won't be any help.


Re:don't fall into the RAID trap (4, Insightful)

MarkGriz (520778) | more than 10 years ago | (#9339098)

"But pouring thousands into redundant servers and disks, is overkill for a biz your size."

I think it's a mistake to make a blanket statement that a RAID array is overkill for a small business. My company is similar in size (18 employees) and a RAID is absolutely essential for us from a downtime perspective. We simply can't afford to be down because a drive crashed.

Sure, backups are essential for the lost/deleted file, but a RAID (or at least a mirrored drive) keeps your server up and running. Not everyone needs that type of reliability, but if you figure the cost of recovering from a failed hard drive (even in a small company), the additional cost of a RAID upfront is well worth the investment.

Re:don't fall into the RAID trap (1)

llefler (184847) | more than 10 years ago | (#9339952)

I think it's a mistake to make a blanket statement that a RAID array is overkill for a small business. My company is similar in size (18 employees) and a RAID is absolutely essential for us from a downtime perspective. We simply can't afford to be down because a drive crashed.

Exactly. RAID is all about buying time when a hard drive fails. My personal server ate its OS drive, and from a user's perspective, you would never know it. Being lazy, I waited several months before I replaced it. OTOH, at work, I have remote servers mirrored because if they have a drive failure, I have to call a 3rd party to service them. I'm much more comfortable sending a drive and having them rebuild the mirror than having someone rebuild the server. Not to mention the ability to shift the repair to non-production hours of the day.

Re:don't fall into the RAID trap (1)

Fweeky (41046) | more than 10 years ago | (#9341727)

FreeBSD and a few other OSes support filesystem snapshots, which effectively let you keep multiple versions of a filesystem on the same disk. They're actually used for background fsck in FreeBSD (create snapshot => fsck the snapshot, leaving the rest of the filesystem live), but could also be used to keep checkpoints around for recovering from screwups like this.

With WinXP SP2, it also seems you can use WebDAV shares like normal file shares -- an interesting project would be interfacing this with Subversion to provide a fully versioning filesystem :o

Re:don't fall into the RAID trap (1)

afidel (530433) | more than 10 years ago | (#9345778)

Windows Server 2003 supports versioning, although their solution isn't nearly as flexible as say NetApp's.

WHAT are you serving? (1)

vasqzr (619165) | more than 10 years ago | (#9338782)

Daily backups, #1

What kind of server though?
Mail? SQL? Files?

Server Clusters and Raid 5 (1, Insightful)

haplo21112 (184264) | more than 10 years ago | (#9338881)

If you can do it, the best way to handle this is a cluster of two (or more) servers sharing an external hardware RAID 5 device.
There is only one active node in the cluster at a time; if that one fails, the second one assumes its identity. Works great, never fails!
We are a bit larger so we use EMC Symmetrix, however a smaller shop could probably do a low end EMC Clariion CX200 or the like.

Backup Server (1)

mrgrey (319015) | more than 10 years ago | (#9338903)

I'm a sysadmin for a tier 2 automotive company in Michigan with about 35 client machines.

The two main servers are Xeons with RAID 5, redundant PSUs, etc. One server runs the domain and acts as a file server, while the other runs the manufacturing software suite (heavy database workload). All the data is very important, but I rarely have a problem with lost data, unless some schmuck overwrites a file or something stupid like that.

The backup solution I implemented was a Debian box that runs rsync every night, backing up important data to the hard drive of the backup server. This machine doesn't need to be anything special. Now if someone looses something I have ready access to it. On Fridays I do tape backups of the data on the backup server and then run rsync with the delete option so only the data currently on the servers resides on the backup server. This solution is quite simple and straightforward, utilizing cron and bash scripts.

So far it has been working quite well...

Re:Backup Server (1)

DRue (152413) | more than 10 years ago | (#9339868)

if someone looses something

Karma be damned, you gotta learn the difference between loose and lose. Look, you lose your virginity to a loose MILF. Get it? DAMN, DOOD, every day. Idiots!

5000$ or less and you are in business (3, Interesting)

nenam (613985) | more than 10 years ago | (#9338920)

We just finished building a 2.5 TB (terabyte) server for less than $5000. You could probably spend even less than that, since we spent about $1000 on two fiber-optic cards. We have two 6-channel 3ware RAID cards and twelve 250GB ATA-133 Maxtors hooked up to a 520-watt power supply, plus another 520-watt power supply acting as redundant power (we did that mod in-house). 2.5 TB is probably more than you guys will need unless you are doing some advertising or something like that... so you could probably go for 1 TB, which will cut your costs down even more. So all in all you could probably get it done for about $3000; not too shabby for 16 ppl. Our server backs up my whole college.

An option (2, Insightful)

Halvard (102061) | more than 10 years ago | (#9339064)

I too have long experience doing small business consulting, among other areas. One thing you could do is use RAID-1 with a spare drive; that way if you lose one, you aren't screwed. You could also have a couple of spare drives in hot-swap carriers. Pull a drive every night and you have a duplicate of your server. If your server fails, fire up the duplicate server, pop in your known-good pulled drive, and boot.

Depending on the OS, you don't even have to have exactly the same hardware: if you use a more generic kernel build, you can list a different NIC for the spare server in the modules conf file, assuming you aren't compiling the drivers into the kernel.

Continue with good backups made to another machine, to tape/CD/hard drive, or off-site. This way, even if your good pulled drive is a little out of date, you can bring the data current in short order.

You don't mention the server's OS or your budget, but I'll assume that since you've got two machines per desk times 15, you can afford a spare server. The OS affects cost, but still: if you are doubling up on hardware on desktops, you can afford to do this or most any of the other solutions offered.

Of course, you get what you pay for and if the experience is lacking in house, hire a knowledgeable consultant or company you trust to do it for you.

Cheap Redundancy (5, Informative)

Zambarra (696249) | more than 10 years ago | (#9339077)

A relatively cheap setup for data/service redundancy for a small business:

* two identical servers, running linux (of course).
* heartbeat
* drbd
* two UPS

Notes, Ins, Outs and What Have You's

service redundancy

heartbeat is used to make the 2 servers look as if they were one. If one of the servers dies, heartbeat makes sure the other assumes the IP address and has all the relevant services started.

data redundancy

drbd is a network block device. Again, it looks like one device, but when data is written to it, it's actually being written to 2 separate locations. If one box goes down, heartbeat makes drbd promote the other box to primary.


These two call for a dedicated network and serial connection, so 2 NICs and a serial port per box.
Definitely a RAID array of some sort as well.

See the DRBD and Heartbeat project sites for more details.

This is not a 100% bulletproof setup, but it's cheap and covers most of the bases.
of course, it requires a linux dude to get it all to work.
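For reference, the heartbeat side of such a setup is mostly two small config files plus an authkeys file (this is only a sketch -- the node names, IP address, device, and mount point below are invented, and `drbddisk` is the resource script shipped with drbd 0.7):

```
# /etc/ha.d/ha.cf (same on both boxes)
node   alpha beta        # uname -n of each server
serial /dev/ttyS0        # the dedicated serial link
bcast  eth1              # the dedicated crossover NIC

# /etc/ha.d/haresources (identical on both boxes)
# on failover: take over the IP, promote drbd to primary, mount, start samba
alpha 192.168.1.10 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 smb
```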

Re:Cheap Redundancy (1)

sydb (176695) | more than 10 years ago | (#9341166)

of course, it requires a linux dude to get it all to work.

I'm a linux dude, I'll do it (I already have for a company's mail servers). But I'm in the UK...

Re:Cheap Redundancy (1)

kinema (630983) | more than 10 years ago | (#9342301)

If you're going to go the redundant server route, then you should also make sure that each of your servers/UPSes is on a different electrical circuit.

Re:Cheap Redundancy (1)

faster (21765) | more than 10 years ago | (#9344922)

I'm setting up drbd and heartbeat on a couple servers that will go into my colo cabinet this month. Everything the parent mentioned is positive (and true), so I'll provide a little balance by mentioning a few of the negatives.

First, it's not trivial to set up. If configuring one server is 1 unit, configuring redundancy is (1+n)^2 units of work where 'n' is the number of services that need to fail over. Maybe that's a little high; ((1+n)^2)-n might be closer.

If the machine is internal-only (no public IP), you don't need to worry about upgrades too much, but if it's on the public internet, consider how an apt-get (or equivalent) will affect your drbd devices, remembering that only one server at a time can mount the shared space.

In general, admin overhead is about double because you have to think a LOT more before you do anything that affects the configuration, especially of services that use the drbd device.

And last on my list: the documentation isn't the final answer for very many of your questions, once you really get into it. You'll have to hang out on the mailing lists and search the list archives. drbd/heartbeat is not as mature as e.g. Samba.

Re:Cheap Redundancy (1)

jelle (14827) | more than 10 years ago | (#9353311)

Good suggestions, except for "two identical servers", which is a very bad idea. Identical servers will fail at nearly identical points in time.

You'll know what I'm talking about when you have a backup server die the day after you switched over to it when the primary failed.

Diversify at least the mainboards, power supplies, and hard disks.

A little addition for the UPSes: plug them into different power outlets, preferably on different circuit breakers (if unknown, try opposite ends of the room). No need to let a single breaker bring both your servers down.

And even with a setup like that, a backup should still be kept, because that file-delete command will replicate to the backup server in the blink of an eye.

If you're using Linux (3, Informative)

sydb (176695) | more than 10 years ago | (#9339116)

you may benefit from a combination of heartbeat and DRBD, which respectively provide IP address/service failover and a network data replication solution (no special hardware required).

If you have appropriate hardware you might also appreciate Stonith, which provides forced shutdown of a failed node (in case the failed node won't release the IP address, and you would otherwise have problems switching the service over).

If you're in the UK then give me a shout and I'll set it up for you (for a reasonable fee)! My contact details are available on my web site.

Simple things first (1)

DaveJay (133437) | more than 10 years ago | (#9339518)

If you're already making regular daily backups, and are only worried about the in-between-backups window, run RAID on your server -- specifically RAID 1, the level that mirrors your data across two disks (not RAID 0, which splits your data between disks to speed up access).

as a refreshing alternative... (3, Interesting)

BigGerman (541312) | more than 10 years ago | (#9339705)

.. get people into the habit of running a CVS or Subversion client on their documents folders. Tortoise integrates right into Windows Explorer. Advantages: file versioning, the ability to work offline and still sync with the server later, etc.
If people actually work with plain-text docs, they will love how CVS etc. merge multiple users' changes.
Of course you would back up your CVS server, but in case of a crash, chances are that very important file can be found on the desktop of the user who edited it last. Much better than relying on a network drive and then finding the file just isn't there.

Rsync (3, Informative)

peterdaly (123554) | more than 10 years ago | (#9340198)

It's already been mentioned a little, but a second server kept up to date with rsync may be a cheap way to go, depending on how big your server is. While I don't know how much data you are talking about, I would expect rsync could easily sync a few times a day via a cron job.

I would suggest springing an extra $90 to get two extra gigabit ethernet cards and a crossover cable for a dedicated connection for rsync which doesn't compete with office traffic.

Using rsync as a basis, the solution could be made as low tech and simple or automated complex as you feel is needed.
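At its lowest-tech, the whole schedule is a couple of crontab entries on the standby box (hostname and paths here are invented; "primary-gig" stands for the primary server's address on the dedicated gigabit link):

```
# pull from the primary a few times a day over the dedicated link
0 7,18 * * *   rsync -a primary-gig:/export/ /export/
30 12 * * *    rsync -a primary-gig:/export/ /export/
```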


Re:Rsync (1)

LordMyren (15499) | more than 10 years ago | (#9349030)

I'm probably off my rocker, but I thought gigabit has auto-sensing (Auto MDI-X), which means you don't need crossover cables: you can use any old cable between two cards and they will auto-detect and cross over themselves.

My ideas.... (1)

the eric conspiracy (20178) | more than 10 years ago | (#9340290)

One of the things that I think people underestimate is the importance of version control. Far too often data loss is due to somebody accidentally deleting a file they spent a week working on. With version control you should be able to revert and get back most of the work.

The other is redundant hardware. As people point out, RAID etc. only provides protection for the redundant components. If the controller, motherboard or such which is not redundant goes bad, you are screwed. The best solution is two servers with some sort of mirroring.

Another factor to consider is location. One of those servers should be off site somewhere so that if the sprinkler system goes off (or thieves get in) or a disgruntled employee gets access you don't have all your eggs in one basket. It might be as simple as having some employee with a company paid cable modem stick it in a closet.

Finally, you need to make sure that restores really work. Many times I've seen data lost because the restore process failed due to lack of testing, even though all the hardware and software was in place.

A couple of solutions (4, Informative)

Peartree (199737) | more than 10 years ago | (#9341100)

If you are using Windows 2000/2003, an easy redundant file-serving solution is to set up DFS (Distributed File System). A tip: don't set up a domain-wide share for a file server that gets a lot of updates; used like that, DFS can create an administrative nightmare (a last-writer-wins situation) and you would be restoring files from tape a lot. A domain-wide share is the right fit if you have a lot of read-only files (installation files, PDF image archives, etc.) and need a high-availability solution. Anyhoo, if your first server crashes, temporarily redirect your users to the second server, either via DNS or just by renaming the servers. DFS doesn't replicate printers, so you would have to install a new printer twice: once on the first server and again on the second. Shouldn't be too much of a problem if you only have 15 users.

If you are using Linux/UNIX/*BSD, you could use Rsync. There was a great article explaining Rsync usage in the June '04 print edition of SysAdmin magazine.

Re:A couple of solutions (0)

Anonymous Coward | more than 10 years ago | (#9423370)

AD allows multiple servers to provide redundant printers through Printer Pooling.

If you dont see (1)

smurf975 (632127) | more than 10 years ago | (#9341158)

If you can't see the business opportunity for a small, cheap business-server distro solution, then you must be blind.

1. Do the things mentioned in other posts.
2. Distributed OS.
3. Offer offsite backups
4. Profit!

Re:If you dont see (1)

mabhatter654 (561290) | more than 10 years ago | (#9342976)

Really, you're right, this screams for a Knoppix- or Mepis-type treatment! Start with a live CD with all your common apps preconfigured (Apache, PHP, Perl, Samba, etc.) as well as several options for hardware configuration, and then boot and go.

Obviously, you'd have to limit your hardware configurations somewhat due to constraints, but that would be a good learning experience for why you needed the hardware and what each redundancy was buying you.

Re:If you dont see (2, Informative)

hirschma (187820) | more than 10 years ago | (#9344528)

While this _should_ be a great business opportunity, I think you'd find that small businesses pose some interesting challenges:

* Small business owners are CHEAP. They don't want to spend a nickel on something that isn't an immediate problem.

* They don't see the value in disaster recovery until they experience the disaster.

* They are hard to sell and market to.

* They often use horrible niche-market server based solutions that are Windows only.

I spent a few weeks talking to various business owners about a solution that would offer the following:

* Redundancy, in many of the same ways discussed here,

* Security: firewall, antivirus, antispam

* Offsite backup and admin

* Four hour replacement

* Other stuff, potentially, like ad blocking, web whitelists/blacklists, fax server, email server, etc.

The price to do this for a small business would have to be at least $250/month. They won't spend it on something that they see as intangible. This is the reaction, even considering that at least $200 a month is spent by them in man-hours to have someone, often the owner, wrestling with the cheapo Windows server that they're using. Keep in mind that the $250 would include DSL connectivity AND the hardware for the box.


Re:If you dont see (1)

smurf975 (632127) | more than 10 years ago | (#9355283)

That was a nice price you offered. However, many people who aren't into computers wouldn't understand you, so you would need to use some FUD, like many companies did with the Y2K bug.

Re:If you dont see (1)

hirschma (187820) | more than 10 years ago | (#9356270)

Tried the FUD angle. This was shortly after 9/11, and the question was: what would you do if all of your data vanished? If your office was destroyed?

Of course, the FUD angle is: what would you do if your server was eaten by worms/viruses?

Again, it is a great idea, but one that would be very problematic to actually sell.


Re:If you dont see (1)

smurf975 (632127) | more than 10 years ago | (#9356617)

Well, as long as they have your business card in their Rolodex, and don't go calling some very expensive Oracle guy after the shit hits the fan, paying Oracle consultants many times your rate for mere advice.

This reluctance reminds me of my car insurance. I've had it, and my driving license, for almost 10 years and have never had an accident, but I still pay about $725 a year on average (it's more expensive in the beginning).

The chance of me getting into an accident now is almost zero.

pugservers (0)

Anonymous Coward | more than 10 years ago | (#9398750)

Not to toot my own horn, but I have a company that is trying to tackle this problem. We have a small, low-cost RAID server at the moment.

I'd love to hear your thoughts...

Re:pugservers (1)

smurf975 (632127) | more than 10 years ago | (#9405756)

The funny thing is that about a year ago I had the same idea. Well, almost the same idea: using a mini-ITX board, but with one HD, and you buy a second server for backups. Here is what I wrote a year ago:

Network Server:
Ability to serve web pages, mail, files, printers and other traditional network resources. Also act as firewall and router.

Key feature:
You use one computer for multiple roles, each role running in its own space like a virtual machine. This is for enhanced security and stability. For example, if you're using a computer as both web server and file server and the web server goes down due to an error or hacking, it will only affect that particular virtual machine.

This uses User Mode Linux. Yes, it has a speed penalty, but this is not noticeable in low-bandwidth situations.

If a virtual machine fails, an email is sent to the admin, who can check what made it fail and restart it. It can also be restarted automatically.

If possible, each virtual machine will be backed up each night, either totally or only its configuration and data. This happens on a separate server or backup medium, and can of course happen automatically.
If a server goes down and data is lost, a backup will be restored automatically without user input.
Full backups will be kept for a defined number of days; the second backup type (configuration and data changes) for longer.

Key feature 2:
Each OS is a stripped-down version, needing very little in the way of extra resources beyond the task it is built for.

So a virtual file server will have only the bare minimum to run a file server, and a web server only the bare minimum to run a web server. The host OS will only be able to host the guest OSes.

Key feature 3:
Extensive remote administration capabilities for each virtual server and the host. This can be done via SSH or other command-line utilities, or a web interface like Webmin.

This allows you to outsource the administration of the servers.

Key feature 4:
Zero maintenance and configuration.

Basically once installed everything should be running for years.

Everything will work from any client OS using several mechanisms like FTP, DHCP, DNS and universal email clients. So using universal protocols any OS can be used as client to any service provided.

Motherboard: VIA EPIA 800, 800MHz (onboard graphics, audio, 10/100 Ethernet, USB and TV-out) -- 100
RAM: Major 3rd 256MB PC2100 DDR DIMM -- 39
Hard drive: 20 GB 2.5" -- 59
Power supply: Morex 55W PSU and DC-DC converter kit -- 50
Case: DIY -- 30
Other (cables & stuff): 15
Total price: 297

However, this server is about the size of a thick book and generates almost no heat, so it can be placed anywhere. As it's cheap, the idea is to add servers rather than to upgrade this one.
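The retention scheme described under Key feature 1 (keep full backups for a defined number of days) is essentially a one-liner with GNU find. A runnable sketch, with invented paths and simulated backup files:

```shell
#!/bin/sh
# Expire full backups older than KEEP_DAYS (a sketch; the layout and
# file names are invented -- config/data diffs would be kept longer).
KEEP_DAYS=7
BACKUPS=/tmp/demo_expire/full

mkdir -p "$BACKUPS"
touch "$BACKUPS/recent.tar.gz"                    # last night's backup
touch -d "30 days ago" "$BACKUPS/ancient.tar.gz"  # simulate an old one

# -mtime +7 matches files last modified more than 7 days ago
find "$BACKUPS" -name '*.tar.gz' -mtime +"$KEEP_DAYS" -delete

ls "$BACKUPS"
```

Run nightly from cron after the backup itself; `touch -d` and `find -delete` as used here are GNU extensions.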

rdiff-backup (1)

Muzungu (785457) | more than 10 years ago | (#9344849)

I'm a sysadmin at a small mission in Uganda. We landed in some hot soup after blowing up the server a couple of times. We now use 2 junk computers running SuSE 8.0 with Samba, and do a differential backup every night. Works wonderfully! It also allows you to repair a blunder from some days back. Check it out.

My solution.. (1)

pavera (320634) | more than 10 years ago | (#9347361)

Similar company size, about 20 employees,
we have a nice server with 5 36GB drives running RAID 5, and another old system with 2 120GB IDE drives running RAID 1 in software (Red Hat); this machine rsyncs every hour with the main server... It's been fine for 2.5 years now. We lost a drive once in the RAID 5, replaced it, and everything came back up fine...

One Button Disaster Recovery (0)

Anonymous Coward | more than 10 years ago | (#9348519)

Look into this: ackup/obdr_print.html It really works. I have used it in the past to save my ass. In theory, you could set up two identical, very low-cost servers in terms of hardware. Have one in production and the other in a different location in case of a true disaster. Then do full backups. If one server fails, just bring the other one in and restore from tape. Bang! You are up and running in very little time. It will make you look like a hero. It can also be used to cover your tracks in case of a screw-up: for instance, if you load the latest patch of whatever and it blows up the OS, just restore from tape. Basically, coupled with the proper backup solution, this bad boy will do open files, Exchange servers, SQL DBs, all while the system is live. This solution is beautiful and I love it. Enough said.

Google Style Redundancy. (1)

megabeck42 (45659) | more than 10 years ago | (#9348722)

A lot of posts seem to surround getting a large, professional server machine with redundant everything. Those are expensive and still have points of failure.

I would suggest buying a number of the inexpensive Wal-Mart PCs and clustering them redundantly. Keep spares around for emergencies -- switches, NICs, drives, etc.

This is a more technically complicated environment, because you have to worry about data consistency between computers, but, these walmart PCs are disposable and can work independent of each other.

Automated nightly remote encrypted backups (1)

mi (197448) | more than 10 years ago | (#9356068)

Be sure to establish the nightly backups:
  • Automated -- no one needs to press the button
  • Nightly -- no more than a day's work lost. Done at night (when the business is closed, rather), they are most likely to be self-consistent.
  • Remote (!) -- no tapes to shuffle, nor lose to the same fire/flood, that gets your server. Find a similar office and exchange each other's backups, or pay one of the many commercial providers in the area.
  • Encrypted, so you don't worry about the other guys poking through your data.

Unless you cannot afford any downtime, you don't need RAID.
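A sketch of the encryption step (paths and the inline passphrase are for illustration only; a real script would read the passphrase from a root-only file and ship the result offsite with scp or similar). It also exercises the restore path, since an untested backup is not a backup:

```shell
#!/bin/sh
# Tar up the data and symmetrically encrypt it in one pipe,
# then decrypt and unpack elsewhere to prove the restore works.
SRC=/tmp/demo_office/books
OUT=/tmp/demo_office/nightly.tar.gz.enc

mkdir -p "$SRC"
echo "payroll 2004" > "$SRC/ledger.txt"

# archive + encrypt in one pass, nothing unencrypted touches disk
tar -C /tmp/demo_office -czf - books \
  | openssl enc -aes-256-cbc -salt -pass pass:s3cret -out "$OUT"

# restore test: decrypt, unpack into a scratch dir, compare
mkdir -p /tmp/demo_restore
openssl enc -d -aes-256-cbc -pass pass:s3cret -in "$OUT" \
  | tar -C /tmp/demo_restore -xzf -
diff "$SRC/ledger.txt" /tmp/demo_restore/books/ledger.txt && echo "restore OK"
```

Only the `.enc` file ever leaves the building, so the offsite party can't poke through your data.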

Think things through first... (1)

AdamMB (303318) | more than 10 years ago | (#9360166)

One thing that everyone seems to be missing is the question of how important the data is to you. IF the loss of a server (for an hour/day/etc.) is going to cost you $10,000 (purely an example figure) then you could probably justify putting around $10,000 or so into a nice top of the line server (you'd still have to skimp on things at that price, but still, it's to give you an idea). IF, on the other hand, having the server down for a day or the data loss you experienced costs your company only a couple hundred and happens very infrequently, then I wouldn't think you should spend $1000s on a beefed up server, but instead find a nice UPS, or just pump some money into a Service Level Agreement for the server, or even for offsite backups. It's all about your bottom line...don't just spend money on a server because of this one time. Fix the problem, but keep in mind how large of a problem it is to start with.

Instead of using hardware that'll breakdown... (1)

coryrc (786649) | more than 10 years ago | (#9376063)

why not use something designed to run "forever," like a nice old Ultra 10 or any other Sun machine? Unlike the majority of choices in x86 land, these computers are actually built to be servers that can't afford to stop functioning.

Double-Take (0)

Anonymous Coward | more than 10 years ago | (#9380886)

Redundant NICs & Power Supplies, RAID, etc. are all great (and very important), but I have yet to see a server with redundant motherboards. ;)

Sunbelt software makes a product called Double-Take that I like the looks of. It's a bit pricey, but it allows you to mirror your expensive live server with a cheap whitebox PC (assuming it has the processing power to *be* a server for a few hours or days while you fix your broken production box). Plus it's real-time mirroring across the network with automatic failover.

And no, I don't work for these guys, or resell the software, and I don't even actually use it myself. I just think it's a nifty product that could be a good fit, if you can scrape up ~$5,000 for a hardware-independent backup solution.

Almost forgot the URL:

Cheap, reliable, efficient offsite backup? (1)

berendes (649537) | more than 10 years ago | (#9388990)

We've (er, I've) struggled with how best to handle offsite disaster recovery (e.g. the building goes up in smoke, or "bad guys" break in and steal everything). Overall storage of about 40 gigs in a four-person business, with me as the CIO/CEO/etc.

Initially, we mirrored a Snap drive to a remote site via rsync, but dropped that when we downsized. We've used Backup Exec to a 30-gig tape, but that's finicky -- tapes seem to go south for no discernible reason. Currently experimenting with DVD, but it takes lots of discs to do a full backup, and I'm flagging.

How do you do offsite?

Re:Cheap, reliable, efficient offsite backup? (0)

Anonymous Coward | more than 10 years ago | (#9396786)

The iBackup service or similar is probably the way to go; I remember seeing a favourable review of these types of services.

pug servers (1)

scm1999 (170551) | more than 10 years ago | (#9397992)

I actually found a company that specializes in a Linux box that would be ideal for this problem.
