
Backup Solutions for Small Tech Companies?

Cliff posted more than 8 years ago | from the too-much-data dept.

Data Storage 34

Brushfireb asks: "This has been hit on before, but given the cheap cost of hard drives, larger capacities, and speed increases in possible storage (USB2, FIOS, etc), we thought that an update would be really helpful. Here's the scenario: We are a small tech company and we have an assortment of workstations (Macs, Linux and Windows desktops), and servers (Web, Database, File, e-Mail, DNS, etc) that run on different Linux distributions. What advice or recommendations do Slashdot readers have for our needs that: won't break the bank; won't force us to take our servers down for an extended period of time (our servers must run 24/7); are reliable; and are easy to maintain? What are some typical mistakes that small tech companies make when it comes to backing up? What software and hardware do Slashdot readers use to accomplish these tasks in similar situations?"


Did someone backup and restore this story? (1, Funny)

Anonymous Coward | more than 8 years ago | (#13973592)

Because I could swear it wasn't here last night, when it's timestamped.

Large raid system with periodic offsite backups (1)

QuesarVII (904243) | more than 8 years ago | (#13973796)

I am the admin for a small company myself. Here, there is only one thing we really need backed up regularly, and that is our customer data. It is backed up nightly onto a large RAID system, and then once a week it is backed up to DVD from the RAID. Larger volumes of data might be better handled with backups to tape or external hard drives. Those backups should then be stored off site for disaster recovery. Our regular employee data (current projects, etc) is backed up just to the RAID itself. If anyone ever has anything particularly valuable, it too can be backed up offsite.

The RAID is just a bunch of IDE disks in a software RAID 5 with a hot spare. The RAID is monitored by mdadm, which warns me if a drive fails. If one fails, I let it rebuild onto the spare, and then cold swap the failed drive (5 mins downtime max).
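The setup described above can be sketched with mdadm roughly like this (device names, disk count and mail address are illustrative, and these commands need root and real disks, so treat them as a command outline rather than a script):

```
# Build a 4-disk software RAID 5 with one hot spare (device names illustrative)
mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Monitor mode mails a warning when a drive fails or a rebuild starts;
# the address can also live in /etc/mdadm.conf as a MAILADDR line
mdadm --monitor --scan --daemonise --mail=admin@example.com
```

With the spare in place, md starts rebuilding onto it automatically; the cold swap of the dead drive can then wait for a convenient maintenance window.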

Re:Large raid system with periodic offsite backups (2, Interesting)

Nutria (679911) | more than 8 years ago | (#13975329)

It is backed up nightly onto a large raid system, and then once a week it is backed up to dvd from the raid.

And if on the 6th day the RAID controller goes haywire and screws up your data, you've lost 6 days of inserts/updates/deletes.

Maybe your data is pretty static, but for 99.9% of businesses, daily backups are essential.

Re:Large raid system with periodic offsite backups (1)

FCKGW (664530) | more than 8 years ago | (#13976280)

No, they don't lose the data, just its backup. The original data is still intact wherever it was backed up from.

I agree with the phrase "RAID is not backup" if that's what you're trying to say, but in this case the OP isn't pretending that a single RAID array counts as backup. That's why it's backed up onto this other RAID array (so there are now two copies on two separate machines) and then onto DVD.

Re:Large raid system with periodic offsite backups (1)

Nutria (679911) | more than 8 years ago | (#13977436)

but in this case the OP isn't pretending that a single RAID array counts as backup.

Ah, yes, you're right. My eyes passed right over the "It is backed up nightly onto a large raid system."

rsnapshot and/or backuppc (3, Interesting)

DRue (152413) | more than 8 years ago | (#13973856)

I use and recommend rsnapshot [] for taking disk-to-disk backups of Unix-based servers and PCs. It has a *really* slick directory structure where each daily/weekly/monthly backup directory is a *full* snapshot -- but by using hard links, it avoids storing unchanged files multiple times. Also, because it uses rsync, it only copies changed files across the network, and can use ssh no problem.

Its downsides: it's basically just a wrapper for rsync. It requires a lot of babysitting (if your backups fail for some reason, it'll try to do full backups the next day, possibly with disastrous consequences as it tries to jam hundreds of gigs down your T1). Also, it has to log in as root on all of your boxes, so there are some security considerations to think through very carefully.

But put a box with a bunch of disks in it off site, and whammo -- you have a complete backup solution.
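For reference, an rsnapshot config along these lines is roughly all it takes. This is an illustrative fragment -- the hosts, paths and retention counts are made up, and rsnapshot requires TAB-separated fields:

```
# Illustrative rsnapshot.conf fragment -- hosts, paths and retention
# counts are examples; rsnapshot requires TABs between fields.
snapshot_root   /backup/snapshots/
cmd_ssh         /usr/bin/ssh

# keep 7 nightly and 4 weekly hard-linked "full" snapshot directories
interval        daily   7
interval        weekly  4

# runs as root on each target box -- hence the security caveat above
backup          root@webserver:/etc/        webserver/
backup          root@webserver:/var/www/    webserver/
```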

For the Windows users, I like backuppc [] . I have never actually used it, but it allows Windows users to choose when their backups are taken, and allows them to recover files themselves through a web interface. Its big downside is the cryptic way it stores files internally, making it really hard to extract files without using the web interface.

Backuppc (3, Interesting)

sr180 (700526) | more than 8 years ago | (#13975818)

We have used BackupPC for around 9 months now. A Linux-based server (Fedora Core 2) with six 200GB USB2 external drives. The drives are encrypted using a pre-generated key that is stored locally (also encrypted) and off site on CD in multiple secure locations. The drives are rotated often, storing both full and incremental backups: 5 off site, one onsite. This stores around 6 months of backups for us. Every 6 months we archive the important information to DVD. Drives are monitored for errors and replaced at planned 18-month intervals.

We have around 15 desktops and 10 servers being backed up by this solution. It was trivial to set up. Drives are secure while in transport and storage. It's automated. Recoveries can be made very easily from the website on the Linux server.

It's much easier for us than the tape backup system (Veritas) that it replaced.

The only issue is that with Windows servers, it can't access open files. Our SQL servers simply make a backup copy of their databases, which it grabs, but Exchange will cause you issues.

Re:rsnapshot and/or backuppc (2, Interesting)

giverson (532542) | more than 8 years ago | (#13976320)

rsync + hard links is awesome. And since rsync runs on most any platform, I back up Netware/Windows Server/Windows XP and Linux to our Debian backup server. I use Apache for single-file restore and a Samba share for multiple files. (Only admins can restore ATM.)

I didn't use rsnapshot (didn't know about it); instead, I wrote a script to take care of the linking.

It's easy, it's fast, it's reliable and I don't miss Backup Exec at all.

Re:rsnapshot and/or backuppc (1)

Not Invented Here (30411) | more than 8 years ago | (#13979099)

Also consider Box Backup [] . It encrypts the backups so that the server storing the backup doesn't have the necessary key to read the backed-up data. It's working well for me.

Netbackup vs. scripts (2, Informative)

anonymo (878718) | more than 8 years ago | (#13974028)

If you have money enough, then NetBackup is a very nice solution: version handling, user restore, fine-grained authentication of systems and users. User-friendly interface, scales excellently. There are modules for databases and different OSes.
Not cheap at all. When the database dies, it's very difficult to revive it. I suggest having a stand-in copy just in case.

I tested about 10 more or less freeware programs like Amanda, afbackup, Arkeia etc. -- and didn't like any of them.
Basically I use 2 scripts on systems not on NetBackup:
1) 2-week cycle: full backup on Friday, incrementals on weekdays, none on Saturdays and Sundays. Local tape streamer, or scp to another machine with large disk capacity and tape it from that one.
No user interaction, only sysadmin access. Two tapes or tape sets for week one and week two, or disk directories A and B.
90% of restores are from within 2 weeks. Perfect for smaller, developer systems.
I use ufsdump, dump and/or GNU tar depending on OS and tasks. Written in shell script.

2) Monthly cycle: full backup once a month; weekly incrementals back up everything that has changed since the last full backup, and daily incrementals back up changes since the nearest backup.
I use GNU tar with labeling, and plain-text listing files to search for versions via grep. Several shell scripts, run from crontab and on demand. Wheel users and sysadmin access only.
Data is stored for several months. Typical use: smaller servers.
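The full-plus-incremental cycle described above can be sketched with GNU tar's listed-incremental mode. This is an illustration with demo paths under mktemp, not the poster's actual script:

```shell
#!/bin/sh
# Sketch of a full + incremental cycle using GNU tar's listed-incremental
# snapshots. Demo paths live under mktemp; illustrative only.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo one > "$WORK/data/a.txt"

# Full ("total") backup: state.snar records what was dumped.
tar -C "$WORK" --listed-incremental="$WORK/state.snar" -cf "$WORK/full.tar" data

# A day later, a new file appears...
echo two > "$WORK/data/b.txt"

# Incremental: only changes since the last run land in the archive.
tar -C "$WORK" --listed-incremental="$WORK/state.snar" -cf "$WORK/incr.tar" data

# Restore: unpack the full archive, then replay incrementals in order.
mkdir "$WORK/restore"
tar -C "$WORK/restore" --listed-incremental=/dev/null -xf "$WORK/full.tar"
tar -C "$WORK/restore" --listed-incremental=/dev/null -xf "$WORK/incr.tar"
```

Grepping `tar -tvf` output against plain-text listing files, as the poster describes, works the same way with these archives.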

My most important advice: nobody gets fired for having a working backup!
So evaluate your backups from time to time!
You must be sure that you can do a Bare Metal Recovery at any time!
Live CDs are nice to have at hand!

Again: nobody will be fired for having a working backup! If your boss thinks that you spend too much time on restore practice, ask him to put it in writing that the risk of not being able to do a BMR is on his account...

Re:Netbackup vs. scripts (1)

anonymo (878718) | more than 8 years ago | (#13992202)

About the hardware:
I suggest buying an SDLT2 tape system: the mechanics of the tapes and tape units are very reliable, and the tape is highly resistant to EMP, better than any other tape media.
DAT and 8mm and their derivatives are mechanically less stable, a bit slower too, and tolerate EMP less well.

Floppy disks (0)

Anonymous Coward | more than 8 years ago | (#13974121)

Since you are a "small business", you probably use a "version" of "WordPerfect" such as "4.2", "5.0", or even (bestest) "5.1".
I recommend that you save files on "floppy disks", and I'll place my bets on the new "3.5-inch" "floppy disks" instead of the older technology "5.25-inch".
In any case, you should use magnetic technology of some kind.
Something to do with "ElectroMagnetic" waves is fine.
Hmmm... I guess technically that includes "photons" also.

If only there was some kind of Compact Disk technology that was read-write. Since computers are generally "digital", it would be nice to have some sort of "Digital Versatile Disk" (DVD).

Amanda (2, Informative)

Noksagt (69097) | more than 8 years ago | (#13974449)

If you have knowledgeable IT, Amanda [] is nice -- it will let you spend money on a nice tape changer and media, rather than expensive backup software that is often flakier than Amanda. If you don't have knowledgeable IT, I'd actually say the next-best option would be to out-source the backups.

Re:Amanda (0)

Anonymous Coward | more than 8 years ago | (#13980793)

I second that. I use Amanda with an AIT-2 6-tape autochanger (Spectra Logic) for doing full nightly backups, and it works flawlessly.
300 gigs backed up every night is more than enough for us (small dev shop).

Don't have enough information to adequately respond (1)

Nutria (679911) | more than 8 years ago | (#13975384)

  • How many servers?
  • How much data?
  • How much "churn" is in your data? I.e., lots of new/modified/deleted stuff every day, or is it relatively static?

My first thought would be external firewire disks that "you" bring home.

More factors: (1)

oneiros27 (46144) | more than 8 years ago | (#13978213)

  • Overall budget ('cheap' to one person may not be cheap to you)
  • Any existing backup system(s) in place?
  • Cost of downtime (e.g., if you make up to $5k/hr in sales profit, an hour's downtime may cost $5k ... possibly more, due to loss of reputation)
  • Amount of administrator time allocated per day/week (if you're overworked as it is, you don't want an extra hr each morning verifying backup tapes)

I like the firewire disk approach -- we use it here for systems where we have a lot of data, but it doesn't change that often, and we need to be able to get something back up as quickly as possible.

Of course, we don't take them off-site, as the data is mirrored in other locations (and it's many terabytes, so we'd be taking home many spindles each night ... and then mess up the time to recover, as the copies would be off-site)

And back to the original poster:

What are some typical mistakes that small tech companies make when it comes to backing up?
  • Not verifying the backups. (just because the process finished, doesn't mean you can get the data back out)
  • Not taking a recent enough cold backup of the database. (takes too long to rebuild database)
  • Keeping all of the backups offsite (may take too long to get them back to the office for a recovery)
  • Keeping all of the backups onsite (may be ruined in a physical disaster)
  • Not backing up everything of importance (might 'forget' about a developer's system, etc.)
  • Not using proper retention times (some data needs to be kept for years, some for weeks, etc.)
  • Not keeping a spare tape drive of the same model (some tape drives are 'quirky' and write tapes that can't be read from other drives)
  • Not re-archiving long-term storage. (media has a limited lifespan -- if you have stuff that needs to be kept for multiple years, you might want to cycle the data to a new medium)

There are plenty of others, but then I'd be typing forever.
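The first mistake on that list -- not verifying the backups -- is cheap to avoid: restore a sample and compare checksums against the originals. A minimal sketch, using a demo tree under mktemp; your real cycle would restore from actual tape or disk:

```shell
#!/bin/sh
# Verifying a backup the cheap way: restore a sample and compare checksums
# against the live copies. Demo tree under mktemp; paths illustrative.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/live" "$WORK/restore"
echo "payroll" > "$WORK/live/important.txt"

# These two tar calls stand in for your real backup/restore cycle.
tar -C "$WORK/live" -cf "$WORK/backup.tar" .
tar -C "$WORK/restore" -xf "$WORK/backup.tar"

# The actual verification step: checksums must match, or the backup is bad.
( cd "$WORK/live" && sha256sum important.txt )    > "$WORK/orig.sum"
( cd "$WORK/restore" && sha256sum important.txt ) > "$WORK/rest.sum"
cmp "$WORK/orig.sum" "$WORK/rest.sum" && echo "backup verified"
```

Running the restore on a different machine (or at least a different drive) also catches the "quirky tape drive" failure mode from the list.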

Re:More factors: (1)

Nutria (679911) | more than 8 years ago | (#13986344)

it's many terabytes

For multi-TB systems, why aren't you all using multiple high-end (LTO 6 or SDLT-320) tape drives? With autoloaders, you kick them off at night, come in the next morning, take them off site and you're done.

Why not tape drives (1)

oneiros27 (46144) | more than 8 years ago | (#13988260)

First off -- tape backups are not as fast as you claim, as you're missing the critical part of backups -- verifying that they were good (restore a few random files from each tape, preferably on a different mechanism, and compare them to the original files). I've seen too many people get stung by that in the past. (Little-known fact: SIMS [Sun Internet Mail System] could run on Solaris 6 or 7 ... Solaris 6 had a 2GB file limit ... so the SIMS backup software would stop the backup at 2GB, with no notice, even on Solaris 7. Our shop ran for over a year before we had to do a full restore for one of the upper management ... the incrementals were fine, but the fulls weren't, so we couldn't get shit back.)

Second, the time to recover isn't worth it with tapes, for the situation I'm in now. We can bring up a warm server that we have in another building, plug the firewire disks into it, and have an image up within 30 minutes. It would take us hours if we had to start a restore from tape. Time to recovery is quite possibly the biggest factor in designing a backup strategy.

Re:Why not tape drives (1)

Nutria (679911) | more than 8 years ago | (#13995977)

Our shop ran for over a year before we had to do a full restore for one of the upper management ... the incrementals were fine, but the fulls weren't, so we couldn't get shit back.

Shame on your shop for not testing the backups. But you know that now...

plug the firewire disks into it, and have an image up within 30 minutes.

How long does it take to back up "many terabytes" to disk?

And how often do you back it up?

We need to back up 3TB every other night, and keep those backups for a month. Even with the new 500GB drives, that would mean a minimum of (3000/500)*30 = 180 drives. Because you won't be completely filling each disk, it would be more like 200+ drives. Managing that many drives would be a royal hassle. (Compression, you say, would cut the number of disks needed. But that compression is done on the host, which would cut into the amount of CPU available to the nightly batch processing.)

That's why robot silos and bar-coded tapes are so useful. A dozen tape drives (with built-in h/w compression), a silo with 140 tapes and a cron job that backs up the data in the middle of the night. Come in the next morning, remove last night's tapes, and box them up for Iron Mountain, which swings a delivery van around, returning the "today - 1 month" tapes while picking up today's tapes.

one big massively redundant device (1)

i.r.id10t (595143) | more than 8 years ago | (#13975418)

You need one massively redundant device that can offload whatever is appropriate to DVD, tape, whatever. Whipping up a big RAID 5 machine shouldn't be too hard, and just either run everything from it (think SAN) or sync everything to it every so often via rsync or whatever. You can even scale this to enterprise level by calling up your favorite IBM rep and saying "If you take me to a nice lunch with alcohol and strippers, I'll recommend to the PHB that we order a SHARK storage unit from you". Sure, it'll cost you *lots* but 1) it will work and keep working and 2) it will be the best freakin business lunch you'll ever have.

go LTO3 tape drives... (0)

Anonymous Coward | more than 8 years ago | (#13976412)

whatever software you choose. I have been using (IBM) LTO2 for 2 1/2 years now and it has been very reliable. Keep the firmware updated. I back up 1 TB a day to two LTO drives (SAP/Oracle DBs/Windows servers). Over the weekend I do full backups. These two tape drives run continuously from 18:00 Fri to 5:00 (AM) Mon. The last tape error was in February. I am suggesting LTO3 because it is the current generation of LTOs. These tape drives are part of an ADIC Scalar 100 tape library. Hint: tech support and the ADIC software are not good, but they are improving; the hardware is good.

It depends (1)

CXI (46706) | more than 8 years ago | (#13976450)

Some of it also depends on how critical the backups are. Our users' data is important, but anything that's really, really important should be on the RAID array of the file server, which is backed up two separate ways. Files on the PCs are OK if we can only get them from last week, but that will only happen if our backup drive failed at the same time and we didn't swap in a spare and do an immediate full backup fast enough.

For that reason we're not blowing a lot of needless money on things we don't need. We are moving from a tape only backup system on really slow, low capacity DDS3 & DDS4 tapes to a rotating set of 400Gig external USB drives plus archiving to DDS4 tape for 3 months of offsite storage. Each week we'll back up to one USB drive while the other gets dumped to tape then swap them (logically, not physically) for the next week.

IDE/USB is cheap, a lot cheaper than SATA and firewire, and plenty fast enough for our needs; plus, external enclosures keep the drives cool to improve their life. If we lose a drive, at most we've lost a week of backups, which is acceptable given we've had to restore data from backup all of one time in over a year. Right now the tape full backups start Friday at 5:30pm and run until about 3pm on Monday. Yes, around 72 hours, ouch. Did I mention tape drives are slow? The new system should manage it in just a few hours, and we can take our time all week swapping tapes to archive the previous week's data. The next step is to see about replacing Veritas with a solution that doesn't require us to hand over bags of cash on a yearly basis.

Re:It depends (1)

hmallett (531047) | more than 8 years ago | (#13977539)

Did I mention tape drives are slow?
You mean your tape drives are slow. LTO3 can stream at 160 MB/s.

Another vote for amanda (1)

martin (1336) | more than 8 years ago | (#13977070)

Can do *nix, MacOS X (via hfstar), Windows (via smb shares).

It can also back up to disks (via virtual tapes) or traditional tape drives.

Used it at work for over 4 years with no problems, even old SunOS 4 hosts are still OK.

An alternative is Bacula, but you need an RDBMS (MySQL or PostgreSQL, I think) to store the indexes, whereas Amanda uses files. Also, I think Bacula uses its own backup client, whereas Amanda is merely a front end to dump, tar, etc., so it's more platform/filesystem independent.

Don't forget to test restores every so often!

DIY? (1)

klokwise (610755) | more than 8 years ago | (#13977392)

Here's an easily modifiable script that uses hard links and rsync [] . I used this as our office's starting point and now have a system that:

  • creates a local snapshot every night and stores it on a separate drive;
  • archives a copy from the night before for only the storage cost of the changes;
  • writes off to an external drive every weekend.

The nightly back-ups mainly account for users accidentally deleting files or saving changes they wish they hadn't, rather than hardware failure. Since it's all just stored as a copy, I can mount it over the network if necessary or archive the snapshot to external media anytime I want. It doesn't require any downtime to back up or restore on our setup, but if you were dealing with some more complex services you might need to make some allowances.

Re:DIY? (1)

DRue (152413) | more than 8 years ago | (#13980756)

For what it's worth, rsnapshot [] is based on Mike Rubel's scripts that you refer to.

One distro first (1)

ziggyboy (232080) | more than 8 years ago | (#13978065)

servers (Web, Database, File, e-Mail, DNS, etc) that run on different Linux distributions. What advice or recommendations do Slashdot readers have for our needs that: won't break the bank; won't force us to take our servers down for an extended period of time (our servers must run 24/7); are reliable;

If you intend to use some form of backup software or customized scripts to perform these tasks, you have to think about OS compatibility and the like. Why would you run various Linux distributions if you are a small company? That certainly makes things more complicated, especially for backing up and upgrading. I know because part of my part-time job includes administering our Linux servers. If I were managing Red Hat, Slackware, Debian and SuSE servers at the same time, I would not only go crazy keeping up with each distro's updates, but I would also waste time working out which directories to back up. Remember that each distro keeps data files in different locations!

Sorry if this has become more of a Linux lecture than a backup one. Going back to your original problem: rsync the servers that have little data to back up to a central server. These would be servers like DNS, web (if you're not a web company, that is), etc. For database and file servers you might consider running mirrored RAID and doing once-a-week tape backups. It also depends on the type of business you have and the likelihood of you wanting to get older versions of files back. Doing daily tape backups also protects you from stupid human errors. Your accountant may accidentally screw up your records, and having RAID won't be much help -- it just mirrors his stupidity. :P In that case you'd want daily tapes.

the biggest mistake (1)

PermanentMarker (916408) | more than 8 years ago | (#13978669)

Well, the biggest mistake companies make is that they buy backup software and never test a worst-case lights-out scenario in which they try to rebuild from their backups. So they don't know how to use their software, and as a result they don't know how long it takes to bring everything up again, or what extra configs or pre-configurations would be required. They just don't know. Just buy an old PC and try to restore server X on it... Some also don't catalog their tapes, or accidentally delete tapes belonging to tape sets...

Arkeia (1)

aspicht (929517) | more than 8 years ago | (#13980563)

Arkeia has a very good solution for small businesses. It has native agents for Linux, Unix, Windows and MacOS. It can back up to tape, managing tape libraries and autoloaders. It can also back up to disk. It has plugins to back up databases online, so there is no need to stop the database.

There are 2 products:

  • Arkeia Network Backup, which is an enterprise-class product.
  • Arkeia Smart Backup, which is designed for small businesses.

Arkeia Smart Backup is free for 50GB of backed-up data, and then it is $99 per 100GB.

Connected / LiveVault (0)

Anonymous Coward | more than 8 years ago | (#13982575)

Disclaimer: I work for Connected and Live Vault's parent company.

Software? (1)

chivo243 (808298) | more than 8 years ago | (#13989461)

Anything but Veritas Backup Exec 9.x on W2K3. A stone tablet and chisel would be preferable to that! And hardware? Stay away from anything that is SCSI!!! I have been battling these, and haven't had time to really, truly learn something else. Sorry, moderators!

tape is back! (1)

foobari (227908) | more than 8 years ago | (#13992626)

I just picked up an Ultrium 3 tape drive (mine is HP, but I think they are all basically the same) and although the drive was costly at nearly $6k, we are reaping savings in time. Our data mix allows us to get about 700-750 gigs on each tape, and it takes under an hour and a half to write each tape.

This might mean that you can find Ultrium 2 drives for cheap now, and in my opinion the technology is worth a glance. The tapes are not cheap at around $125-140 per 400/800GB cartridge, and they do have restrictive humidity requirements, same as the DLT family we are all used to and have mostly forgiven, but at least you can drop them on concrete and usually not break your bits.

Our software solution wouldn't be interesting to a small company (enterprise CA stuff, mostly built around TNG and ARCserve 11.5), but if you matched Amanda up with an Ultrium 3 you would not be in terrible shape. I am assuming Amanda can send at least 4 streams to its tape drives -- if not, don't get an Ultrium 3, since you won't get the 580GB/hour writes that I am most excited about.

Data Recovery (1)

Phoenix junkie (931844) | more than 8 years ago | (#14055039)

Many businesses fail to do regular backups of their desktop machines, and forget about the mobile computers coming in and out of the office. Those are harder to manage than most backup admins can deal with. Though having a network backup solution is a necessity, businesses eventually feel the pain of a mobile user or desktop that crashes or becomes infected: the machine must be re-imaged, and everything on it since the last regular backup is lost in the process.

For a complete security solution for your data, you had better have a backup solution in place that allows for on-site data and the ability to have off-site data as well, be it tapes, DVDs, or external drives. But as I said, this is only the beginning of the equation. If desktops and notebooks are in the normal backup cycle, they may be collected once or twice a week. That leaves a lot of time and data on the table, and I for one believe in Murphy's law.

What do you do if your system crashes? The NT boot loader gets deleted? I hear you now: "Re-image and start from scratch." Hope you have enough data on the server from the last backup to get back to last week and move on. This is a HUGE gap in data protection. And let me reiterate: mobile computers... that may be a week or more of data loss. Who has mobile computers? The CEO, CIO? How are they going to feel when a week of data is lost?

For extra data security, you should look at local machine recovery. Take a look at Recover Pro. Everyone in the IT world knows the Phoenix Technologies name; it's on 80% of the computers in the world. Their applications are the most secure out there, and if your system crashes, Phoenix can bring the data back from the ashes. (If it's pre-installed, of course.)