Ask Slashdot: Temporary Backup Pouch? 153
An anonymous reader writes "It looks simple. I've got a laptop and a USB HDD for backups. With rsync, I only move changes to the USB HDD for subsequent backups. I'd like to move these changes to a more portable USB stick when I'm away, then sync again to the USB HDD when I get home. I figured with the normality of the pieces and the situation, there'd be an app for that, but no luck yet. I'm guessing one could make a hardlink parallel-backup on the laptop at the same time as the USB HDD backup. Then use find to detect changes between it and the actual filesystem when it's time to back up to the USB stick. But there would need to be a way to preserve paths, and a way to communicate deletions. So how about it? I'm joe-user with Ubuntu. I even use grsync for rsync. After several evenings of trying to figure this out, all I've got is a much better understanding of what hardlinks are and are not. What do the smart kids do? Three common pieces of hardware, and a simple-looking task."
The smart kids (Score:1, Funny)
What do the smart kids do?
The smart kids don't use linux.
Re: (Score:3, Informative)
You Sir are completely correct.
The smart kids use Linux.
Re: (Score:2)
Hardlinks (Score:3, Informative)
Oh dear.
Hardlinks don't span storage devices. They are directory entries that share the same inode on a single storage device. Soft links do span devices, but they are just pointers to a path, so "back up" using softlinks and you have a bunch of pointers to data that is still on the original system. NOT on the thumb drive!
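For the record, a two-line shell session shows what a hard link actually is (the filenames here are throwaway examples):

```shell
# Two directory entries, one inode: a hard link is just a second name
# for the same on-disk data, which is why it cannot leave its filesystem.
echo "hello" > original.txt
ln original.txt hardlink.txt

stat -c %i original.txt hardlink.txt   # prints the same inode number twice

# Trying to hard-link onto a different filesystem fails with
# "Invalid cross-device link" (EXDEV), e.g.:
#   ln original.txt /media/usbstick/hardlink.txt
```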
Use one of the backup packages out there, you are not at the point of rolling your own.
Not even close.
Re: (Score:2)
Hardlinks don't span storage devices.
Except they do, on Windows at least. And you can even mount a drive to a folder like c:/otherdrive/
Re: (Score:2)
mmmmm no.
mmmmm no.
Mounting a drive to a folder has nothing to do with hardlinks (or inodes); it happens at a much higher level. For hardlinks to span different drives, you would need one filesystem covering two drives, which is not possible here.
Re: (Score:1)
Ok, Let me clarify this.
It is possible to span a filesystem on multiple drives (see various LVMs, RAID etc) but is offtopic to the problem at hand (you cannot have 1 filesystem on your hard drive and your USB stick).
Given that hardlinks exist on the filesystem level, anything that is on a lower layer and transparent to this level (consider a driver for example that can handle hard disks, network paths, RAM etc. as 1 block device) can be on 1 filesystem and have hardlinks span across. But this purely theoret
Re: (Score:2)
Not correct. You can make symlinks on Windows, but the data doesn't get stored in both locations. Ditto with hardlinks on Windows.
Re:Hardlinks (Score:4, Funny)
Nobody claimed this was the case. Not even the article's author. Please read it again - and use a ghostwriter; you are not at the point of rolling your own comments. Not even close.
Re: (Score:2)
Don't normally respond to cowards, but the author did say he was trying to do backups using "hardlink parallel-backup", which is so much gobbledygook as to make it clear they don't understand UNIX past the "I am dangerous" level.
Again, keep searching, find a package that can do 90% of what you want.
Use that.
unison-gtk (Score:4, Informative)
Very nice, simple front-end, and will do what I think you need.
Re: (Score:2)
Re:unison-gtk (Score:4, Informative)
I think people (including you) don't understand what he needs. He has a complete backup at home. When he's on the move, he wants to backup only modifications that are not already backed up at home, so that the backup fits on a USB stick. To know what has and hasn't changed, he can't access the backup at home, like rsync would need to do. His idea was to have a space-saving complete copy of the backup on his laptop via hard links. You might think that file modification times could be used, but both solutions leave the problem of communicating file deletion. Suppose he needs to recover. He would copy his home backup to the new drive and then he would have to integrate the incremental backup. How would the incremental backup keep the information about deleted files without access to the base backup? I suppose one could keep a recursive directory listing with the incremental backup, but that's the question: Is there a ready-made solution for this?
Re: (Score:2)
It would seem to make more sense to approach the problem from a different direction: limit your USB backups to the home directory (or a limited set of directories), and do incremental backups there. In the recovery mode, you first recover to the last known good state from the hard drive, and then you apply the changes from the USB stick to selected directories. It could be automated with a script that even I could write, or you just follow a simple step-by-step procedure.
A fully packaged solution as the O
Re:unison-gtk (Score:4, Informative)
To know what has and hasn't changed, he can't access the backup at home, like rsync would need to do.
If I understand correctly BackupPC [sourceforge.net] caches the checksums rsync generates to enable exactly that. It would be nice if that was possible with vanilla rsync.
Re: (Score:2)
Unison? (Score:3, Informative)
I hesitate to offer this, because I've not experimented with it in the precise scenario you describe. However, being another Joe User with Ubuntu, I took a look at rsync as a way to implement backups between my home PC and an Apple Time Capsule that I was using as a secondary backup device.
After some tinkering I settled on Unison, which is available in the Ubuntu repositories. It's essentially a sophisticated rsync front end, with a few bells and whistles. You get 2-way directory replication between your 'local' and 'remote' file systems [though they could both be local or both remote if you choose] and you can essentially script multiple different backups into the single interface. For example, I have "Office" for documents, spreadsheets and the like, "Photos", for camera images, "Music", and so on.
Like most tools, Unison is imperfect, but it's simple to use once set up. The key point with it, as with any product you put in this space, will be knowing and keeping track of your definitive data source. If you have a document that exists on both your local and backup systems, and you edit that file separately at each location, then run Unison, only the most chronologically recent copy will be preserved. To go beyond this level of functionality and get to something that can intelligently merge changes, I think you're going to need something more like a CVS tool... There are hugely expensive proprietary solutions (like Livelink), but I've not come across anyone using a good FOSS alternative. HTH...
Re: (Score:2)
Like the GP, I haven't used Unison in this context, but (a) Unison is easy to configure, and (b) there's plenty of configuration which can be done. I use it for keeping my machines in sync, which, here, would just mean replacing a remote path with a local path. I would definitely invest some time in seeing if this does the trick.
Re: (Score:1)
Unison is NOT essentially an rsync frontend. It uses librsync, sure, but it is also a heavily researched, heavily modified way of backing up and syncing files.
Here is their homepage: http://www.cis.upenn.edu/~bcpierce/unison/index.html
Unison can also work in two directions, in case you put files on the backup drive that you want to sync back into the main repository. This is an option though, but
I suspect MANY of the file-backup services like Dropbox base their code on Unison.
I have always been satisfied with
Re: (Score:1)
After some tinkering I settled on Unison, which is available in the ubuntu repositories. It's essentially a sophisticated rsync front end, with a few bells and whistles.
It is, in fact, a bit more than that. rsync doesn't handle deletions, so your backup will keep growing in size even though you're not really making any additions (say, you're renaming a big file - now that file is copied twice). unison does, however. This is essential for this use case, especially if one of the backup devices has somewhat limited space on it.
Re:Unison? (Score:5, Informative)
rsync doesn't handle deletions
rsync handles deletions just fine - that's why it has a --delete option...
Re: (Score:2)
Re: (Score:3)
Yes.
Re: (Score:1)
+1 for Unison. It will do everything you need it to, and is easy to use. You can setup your ~/.unison/*.prf files to have multiple roots on the same machine (one per removable drive in your case). Just pick the one you want to use when you sync. It does a better job of intelligently syncing and handling any resulting conflicts than anything else out there, bar none. It handles deletions fine (as does, btw, rsync). Here's a sample default.prf for your scenario:
root = /home/yourusername
root = /media/usb
DirSyncPro (Score:3)
I use DirSyncPro to automate my backup tasks. Not sure how to set it up for your particular task, or whether you can, but it might be worth looking into. A lot of options while still being easy to use.
Re: (Score:1)
Thank god it's got 'pro' in the name; there's just one thing stopping me getting it now. I'll wait for 'deluxe' to be added to the name, that'll do it.
Re: (Score:2)
haha...it's another FOSS multiplatform thing
Duplicity, perhaps (Score:4, Informative)
Duplicity uses librsync to generate the changeset that rsync would use, then stores the change set. If you stored the change set to the USB drive, this could then be "restored" to the destination drive, perhaps? I don't know if there's any way to do this out of the box, or with a bit of scripting, or if this would need to be a whole new toolchain.
Re: (Score:3)
I suspect duplicity and git-annex are the only correct answers in this thread, because the underlying problem is that your rsync-like tool must detect the changes using a listing of file hashes, not access to the files themselves. It's the same problem as doing incremental backups to a host not running specialized incremental backup software. Duplicity does this, but rsync does not afaik.
not rsync (Score:2, Interesting)
there are better solutions than rsync.
rdiff-backup
duplicity
for example.
i probably don't understand what you are trying to accomplish.
dump(8) (Score:2, Interesting)
this is what dump(8) does
Re: (Score:1)
Unfortunately, dump requires a supported filesystem. But most people forget about the incremental backup features of regular old tar(1).
Finally! An interesting question. (Score:5, Insightful)
First, ignore the people who encourage you not to try, and who point you in other directions. Sure, there are much better ways of doing this, but who cares? The whole point is that you should be able to do whatever you want -- and actually doing this is going to leave you _so_ much smarter, trust me.
Some douche criticized you for not knowing beforehand why hard links wouldn't work. . . . because, you know, you should have been born knowing everything about filesystems. To hell with him, sally forth on your journey of discovery, this can be hella fun and you'll get an awesome feeling of accomplishment.
First off, you're going to have trouble using rsync with the flash drive, because I assume your constraint is that you can't fit everything on the flash drive, it's only big enough to hold the differences.
Next, come to terms with the fact that you'll need to do some shell scripting. Maybe more than just some, maybe a lot, but you can do it.
I'd recommend cutting your hard drive in two -- through partitions or whatever -- to make sure that "system" is fully segmented from "data." No sense wasting all your time and effort getting backups of /proc/ and /dev/, or, hell, even /bin/ and /usr/. Those things aren't supposed to change all that much, so get your backups of /home/ and /var/ and /etc/ working first. Running system updates on the road is rarely worth it, and will be the least of your concerns if you end up needing to recover.
Next, remind yourself how rsync was originally intended to work at a high level. It takes checksums of chunks of files to see which chunks have changed, and only transfers the changed chunks over the wire in order to minimize network use. Only over time did it evolve to take on more tasks -- but you're not using it for its intended purpose to begin with, since you're not using any network here. So rsync might not have to be your solution while travelling unless you start rsyncing to a personal cloud or something -- but its first principles are definitely a help as you come up with your own design.
The premise is that, while travelling, you need to know exactly what files have changed since your last full backup, and you need to store those changes on the flash drive so that you can apply the changes to a system restored from the full backup you left at home. You won't be able to do a full restore while in the field, and you won't be able to roll back mistakes made without going home, but I don't think either of those constraints would surprise you too much, you likely came to terms with them already.
So, when doing the full backup at home, also store a full path/file listing with file timestamps and MD5 or CRC or TLA checksums either on your laptop or on the flash disk, preferably both.
Then, when running a "backup" in the field, have your shell script generate that same report again, and compare it against the report you made with the last full backup. If the script detects a new file, it should copy that file to the flash disk. If the script detects a changed timestamp, or a changed checksum, it should also copy over the file. When storing files on the flash disk, the script should create directories as necessary to preserve paths of changed/new files.
For bonus points, if the script detects a deleted file, it should add it to a list of files to be deleted. For extra bonus points, it should store file permissions and ownerships in its logfiles as replayable commands.
The script would do a terrible job at being "efficient" for renamed files, but same is true for rsync, so whatevs.
I built a very similar set of scripts for managing VMWare master disk images and diff files about ten years ago, and it took me two 7hr days of scripting/testing/documenting -- this should be a similar effort for a 10-yr-younger me. I learned *so* much in doing that back then that I'm jealous of the fun that you'll have in doing this.
Of course, document the hell out of your work. Post it on sourceforge or something, GPL it, put it on your resume.
Re:Finally! An interesting question. (Score:5, Insightful)
Forgot to mention:
To accomplish this, you'll need to read up on:
- bash
- find
- grep
- awk
- sed
- md5sum
- chmod/chown
- mkdir -p
- diff/patch (for general reference, and also look up binary diffing tools)
Extra extra extra bonus points if you compress the changed files when storing them on the flash drive.
Re: (Score:2)
Avoiding having to learn all the intricacies of these tools was one of the main purposes of Perl.
Re: (Score:1)
Back in 1993 when I was new to *NIX, I asked a seasoned sysadmin which scripting language I should learn. He started listing all those same tools .... then said - "or you could just learn perl."
I did and you sir are 100% correct. I know grep, find, ksh/sh, sed very well - my awk is extremely weak. Perl hasn't let me down all these years. It is still my go-to scripting language whether I'm hacking some crap code together OR performing a system design and pushing out a beautiful website with XML or json se
Re: (Score:2)
Or you can learn all the utilities if you want to learn right and be a better sysadmin and work with other sysadmins.
Perl is for programmers that like to pretend they know how to be sysadmins and don't need to share their tools with other sysadmins.
Re: (Score:2)
Perl is for anyone, sure, but it's certainly not for everyone.
If the author and the user are the same singular individual, and always will be, then sure, perl can be a fun toy and a timesaver.
But if you're mentoring junior sysadmins as part of your succession plan in a collaborative and evolving ecosystem, Perl is pretty much the worst choice available.
Re: (Score:2)
Extra extra extra bonus points if you compress the changed files when storing them on the flash drive.
Hint for the bonus question: gzip ;)
Re: (Score:2)
bzip2 is better than gzip if space is at a premium. There are even multi-core versions of bzip2 that are very efficient. You could also look at p7zip. If you want a really efficient compressor, try nanozip as well, although its page says it is still experimental, but it seems to be at the top of several compression benchmarks.
And when you're done reinventing the wheel... (Score:1, Offtopic)
You can get started on fire.
Re: (Score:3)
Re: (Score:2)
Interesting, since I used rdiff-backup in the past and found it a pain. If files are stored as diffs of diffs of diffs of diffs of a full copy, it is rather easy to corrupt the backup. These days, I make backups using rsync, with
For the first backup, omit the --link-dest argument. Only modified files are stored. As long as you don
Re: (Score:2)
Replying to myself: of course, I realize that the OP cannot use a hard-link backup if the usb drive cannot hold all his important data. It's too long ago that I used rdiff-backup; can you reliably split the master backup and the differential backups to different filesystems (say the drive at home and the usb stick)? Preferably without risking corrupted backups if it involves manually merging diff trees.
Re: (Score:2)
Re: (Score:2)
If you want to "copy" junk with its metajunk from an ext3 filesystem on to a FAT32 filesystem, remember that you can always create a 4GB file with dd from /dev/zero (FAT32 tops out just under 4GB per file), run mkfs.ext3 against that file, and then mount that file as an ext3 filesystem thanks to the loop device. You won't be able to read that junk from a Windows machine, but you probably won't care, and if you create a 4GB file on a 16GB FAT32 flash disk, you'll still have 12GB of space available for use in Windows -- and Windows will be ab
The right tool for the task (Score:1)
For the former, use archiving tools to perform full and incremental backup. How is it done? You could use find to list files with certain criteria, e.g. last modified timestamps. Pass that list to tar using the -T flag, where you also use -X to exclude files and directories like "*/.thumbnails" and "*/.[cC]ache*". Once the tar is done, use your favourite checksum tool (md5sum, shasum) to store a checksum of the archive in a separate file.
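Assembled, that pipeline might look like this (GNU tar assumed; every path is a placeholder):

```shell
# Archive everything modified in the last day, honouring exclude
# patterns, then store a checksum of the archive next to it.
cd /home/user
find . -type f -mtime -1 > /tmp/filelist.txt

tar -czf "/media/usbstick/inc-$(date +%Y%m%d).tar.gz" \
    -T /tmp/filelist.txt \
    --exclude='*/.thumbnails' --exclude='*/.[cC]ache*'

md5sum "/media/usbstick/inc-$(date +%Y%m%d).tar.gz" \
    > "/media/usbstick/inc-$(date +%Y%m%d).tar.gz.md5"
```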
dar? (Score:3, Informative)
Re: (Score:1)
I agree, dar is definitely the way to go. You need to learn how it works but once you do it's incredible all the things you can do. What safetyinnumbers is referring to is called an isolated catalogue. See also: dar_manager.
support.marsupials.info? (Score:2)
They'd have the skinny on pouches for sure.
Comment removed (Score:3)
LuckyBackup or BackInTime + an online sync service (Score:1)
I don't know if I'm one of the "smart kids" or not, but I'm a standard non-technical user and have found LuckyBackup [wikipedia.org] or BackInTime [le-web.org] run along with an online sync/backup service like DropBox or SpiderOak the most handy options.
Both LuckyBackup & BackInTime are GUI tools that set up rsync rules (even complicated ones) with an easy point-and-click interface, then schedule them in cron. They can do anything rsync can: synchronize the drives so the backup matches the current, or make a backup of everything p
does this need to be so complicated? (Score:2)
From what I have parsed the OP wants to have a full back-up on a USB-HDD and the diffs on the USB-Flash, because the Flash is limited in size.
Just write two rsync (or grsync) scenarios: one for HDD and the other for the Flash. On the HDD you will have a directory that is a mirror copy of your laptop. On the Flash you will keep the diffs for the time between syncs to the HDD.
When at home
1. rsync your laptop to the HDD (mirror).
2. copy the incremental stuff from the Flash to a separate directory (e.g. diff-20
Re: (Score:1)
OK, I got it, so the diff is the problem.
I see two solutions
1. Write the date of the latest mirror-sync to a file on the laptop and copy all newer or modified files to the Flash with a "find".
2. Have a second copy (mirror) on a separate partition of the laptop HDD and diff against it. Actually helpful in case you really need to restore a (deleted) file while on the road.
I believe a complicated script and setup is more "expensive" at the end of the day than just having a second (or a third) copy.
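The first option can be sketched with nothing more than a marker file (paths invented, GNU coreutils assumed):

```shell
# After the mirror sync at home, drop a timestamp marker:
touch /home/user/.last-mirror-sync

# On the road, copy everything modified since that marker to the
# stick, preserving directory structure (GNU cp's --parents):
cd /home/user/data
find . -type f -newer /home/user/.last-mirror-sync \
    -exec cp --parents -a {} /media/usbstick/diff/ \;
```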
unison is bi-directional (Score:2)
unison has already been suggested multiple times.
I used unison. It's perfect to sync from A to B (it only syncs the diffs) then modify B and later sync B to A
You also can modify A and B at the same time as long as it's not the same file, then sync and then A and B are identical.
You can even sync in cycles: A->B->C->A with modifications on all three directory trees and it still works
Unison also handles deletions on both sides fine.
Hint: use the -group -owner -times flags
dump, snapshots, rsync batch mode (Score:1)
If your
rsync (Score:2)
For years I have used rsync scripts.
My problem was syncing a desktop and a laptop. So I made upload_to and download_from scripts to sync as needed.
I also try to keep a third master backup copy on a different server so all three are synced.
One problem comes when trying to work on both desktop and laptop simultaneously. Just map a drive and modify files on one side.
git (Score:1)
ZFS (Score:1)
In the grand internet tradition of answering a loosely related question which is no use at all to the asker, I will say that the "smart kids" might use something like ZFS, which almost handles this for you. (Take snapshots, save delta streams on your USB stick. Requires the backup to be a ZFS copy, not just the same files.)
Useless right now at least. But I've been pretty happy with switching my storage to ZFS, even if the Linux version sucks. (I mostly don't use the Linux version.) I'd recommend it to anyon
Re: (Score:2)
That's a terrific idea, and it would be a much cleaner and more reliable solution than some kind of cobbled script. Maybe he could make his working directory on Linux a ZFS volume using FUSE. Snapshot it before you go on the road, export the snapshot to the external HDD, and then periodically export the snapshot delta to the USB as a backup. When you get home, re-snapshot to the external HDD and delete the delta on the USB. The snapshot creation/destruction and export could (and probably should) be automate
tar (Score:2)
Easiest solution is probably to use tar's incremental backups. The -g argument creates a relatively small file listing the files already backed up, so future incremental runs can skip them. If you keep the incremental files on the laptop then you can put each day's actual tar backup on whatever devices you have handy.
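A minimal form of that workflow with GNU tar's --listed-incremental (the long spelling of -g; filenames are illustrative):

```shell
# First run: the snapshot file does not exist yet, so everything is
# archived and the state of each file is recorded in backup.snar.
tar --create --gzip \
    --listed-incremental=/home/user/backup.snar \
    --file=/media/usbhdd/full.tar.gz \
    -C /home/user data

# Later runs reuse the same snapshot file, so only files added or
# changed since the previous run end up in the archive.
tar --create --gzip \
    --listed-incremental=/home/user/backup.snar \
    --file="/media/usbstick/inc-$(date +%Y%m%d).tar.gz" \
    -C /home/user data
```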
I might be crazy (Score:1)
and I know this is not what he asked for, but wouldn't the simplest solution be to purchase a second external drive (maybe an SSD for durability) and actually have a complete backup on the road... Or even just take his current external with him - he has it backed up in the cloud anyway...
I ask because he never stated why that external drive was stuck at home.
If that won't do, another possible solution.
1) I don't see a need to sync the USB stick when he returns - just perform your usual backup when you ret
What about Git? (Score:1)
Use a watch program (Score:2)
There are multiple "watch" apps out there in various languages that will run a script every time a directory changes.
Google "watching files with ruby"
Substitute python or perl or... for ruby.
Git and Super Flexible File Synchronizer (Score:1)
unionfs (Score:2)
Since you can't use dump(8) as others have pointed out, maybe you can do something with UnionFS. After you do your full backup to USB HDD and are about to leave on a trip, mount a unionfs over top of your critical filesystem(s)... Then every day, copy the union layer off to thumb-drive.
Autorun (Score:2)
I know people hate it, but you can set up an auto-running batch script that backs up on plug-in. I've had no issues doing this from Windows to Linux.
Bitpocket? Unison? (Score:2)
Easy script to grab changed files into an archive (Score:2)
find . -newer last_backup_timestamp | cpio -o > snapshot$(date +%Y%m%d) && touch last_backup_timestamp
You're looking for dump, not rsync (Score:2)
Make full "level 0" dumps at home to the big disk. Make delta "level 1" (or incrementing-level) dumps to the flash drive. Each level backs up everything that changed since the previous level.
Now, you want the backups at home to be a file copy - there's no reason you can't do that and then do level 1 dump backups on the road - i.e., never actually make a level 0. Just do your rsync and then update /etc/dumpdates to reflect your rsync as level 0.
This is exactly what dump was designed to do, and it's going to
Re: (Score:2)
Did you read the summary? Obviously SkyDrive is of no use, but there are several other alternatives that would be better suited to this purpose; although if, as he says, it is for use while travelling, an internet-based system is useless. I would suggest reading up more on what you can do with dd and writing a couple of scripts to suit your needs. The problems are more likely to be around using USB, but you can always write a script that puts a compressed file on your desktop that you manually copy to your U
Re: (Score:2)
rsync and other low level solutions are much more work and on top of that you n
Re:SkyDrive (Score:5, Insightful)
As you say, the internet is really crap at times when you are travelling so why make life difficult? It is also fair to say that you obviously think of travelling as a bit of wandering around in the US. Once you broaden your horizons you will find that the internet is often not even an option.
SkyDrive is not going to integrate with Ubuntu (have you read the summary yet?), so it is a stupid option, whereas there is a Dropbox client. It is still flaky and not going to be easy to use as required, so he is still better off doing something that will work well and therefore get done regularly. If he is using some client for a service that sometimes works and sometimes doesn't, you can guarantee that the time when he needs that backup will be one of the times that it did not work.
Re:SkyDrive (Score:5, Insightful)
Yet, I've found Dropbox to be the best backup solution. Files will get there eventually, and I don't need to do anything. There's also revision history of files, so if you upload corrupted files or something like that you can reverse it. You can access them from other computers in case your laptop goes poof (happened to me). And the most important thing - if you get robbed or lose your luggage, you will still have access to your files (and of course, I keep my laptop encrypted).
The good sides of online cloud backup far outweigh the negative ones and any worries about bandwidth. Especially since most of the time the files that need backup aren't large. No one in their right mind would try to sync their media files.
Re: (Score:2, Insightful)
The good sides of online cloud backup far outweigh the negative ones
Until your cloud backup provider goes out of business or stops offering the service. You think rsync is a lot of work? Try keeping current on the status of Dropbox and SkyDrive services so you can pull your data before they disappear. I guarantee you that a properly stored external drive will outlive either of them.
Oh, and if you were trolling with that first post, kudos on playing it out so long.
Re: (Score:3, Insightful)
Try keeping current on the status of Dropbox and SkyDrive services so you can pull your data before they disappear.
Email? Twitter? Facebook? All kind of "push notification" technologies where you don't really need to do anything if you use them.
Besides, we are talking about Microsoft here. A company that has ridiculously long phase outs for their products as a standard practice so businesses feel safe using them (seriously, they announced that a version 4.0 of SilverLight will see end of support in two years from now). If there is any tech company in the world that you can trust not just going to end support suddenly,
Re: (Score:2)
Re: (Score:2)
Except their hosting of Windows 7 Widgets. Already gone except for the top 10.
Re: (Score:2)
Besides, we are talking about Microsoft here. A company that has ridiculously long phase outs for their products as a standard practice so businesses feel safe using them (snip) If there is any tech company in the world that you can trust not just going to end support suddenly, it's Microsoft.
"End support suddenly" is pretty much what they did with Office Live Small Business. A couple of emails of warning, then in about two weeks (maybe a month) the plug was pulled. No migration plan to move to Office 365. Oh, MSFT suggested moving to Office 365, but one would just have to start over. That's just one, personal example of a Microsoft business product that I have experience with (and I'm not too broken-hearted, other than the hassle, as their offering kind of sucked). Consumer-level products? Exam
Re: (Score:1)
Not to rain on your "old paranoid guy" parade, but Dropbox (and I assume most others too) IS local storage (synced to the cloud). If they go out of business, just uninstall and install the next one. Probably works exactly the same.
Re: (Score:2)
Re:SkyDrive (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
The chances that something would happen right then in the time-frame that the cloud provider fails and you make another copy with another provider are incredibly low.
Noticing that your backups have failed isn't the time to go searching for another backup solution.
Re: (Score:3)
Try keeping current on the status of Dropbox and SkyDrive services so you can pull your data before they disappear.
You clearly have never used DropBox. It's just a shared folder that populates on every computer you install it on. If DropBox were to die right this instant, you would still have all of your data on every one of your computers - it would just stop syncing.
If you are worried about the revision history, you could pick one computer to run a rsync job between the DropBox folder and another folder of your choice, or if you are on a Mac just use TimeMachine, or if you are on Windows run something like Areca backu
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
His account is obviously to troll the real kdawson, whose UID is 3715.
Re: (Score:1, Informative)
Did you read the summary?
Did you read his history.
Partofme is another of the Bonch/Sharklaser/Tech* etc etc sockpuppet team.
They're a group of Burson-Marsteller reputation managers working for Microsoft. They always get early postings so they can divert discussion to their pro-MS agenda.
Re: (Score:1)
Re: (Score:2)
I'll take this one: Apple is just a puppet through which the will of Google is manifested. Even... though... Google is... much... smaller than... Apple... or something.
Re:SkyDrive (Score:5, Informative)
Obviously SkyDrive is of no use, but there are several other alternatives that would be better suited to this purpose; although if, as he says, it is for use while travelling, an internet-based system is useless.
That's why I liked Crashplan when I first saw it. This may sound like a sales pitch but I'm just a happy customer.
With Crashplan you can have multiple destinations for your backup set. I usually have three:
- same HD in case I accidentally deleted some files.
- USB HD for faster recovery in case my primary HD breaks.
- Online "in the cloud", in case my house burns down etc.
Crashplan detects when I plug in the USB HD and automatically starts updating the backup on it. If there's no internet the first two destinations will still keep me pretty safe. Once the internet is back it catches up on the cloud destination.
It works just fine on my Linux Mint laptop as well as my Windows desktop pc.
Re: (Score:2)
Re: (Score:2)
I would also like to second Crashplan. Mozy broke my cloud backup cherry, and Crashplan has been completely problem-free for me. I run it on every computer that I own and back up to the "cloud" as well as to my basement server. They have a great feature where you can send them a hard drive to seed your initial backup so that it doesn't take a month to do your initial backup.
Another thing that I do is install it on family and friend's computers whenever they ask me to fix them. I just point it to my basement
Re: (Score:1)
Re: (Score:2)
Or rsnapshot
Re: (Score:2)
or shootsnap [typinganimal.net]
Re: (Score:2)
Re: (Score:2)
Yes, git-annex could be coaxed into doing this, but it's not completely trivial.
Re: (Score:2)