
NetBSD - Live Network Backup

Zonk posted more than 9 years ago | from the de-backup-mouse dept.

Software 156

dvl writes "It is possible, but inconvenient, to manually clone a hard disk drive remotely using dd and netcat. der Mouse, a Montreal-based NetBSD developer, has developed tools that allow remote partition-level cloning to happen automatically, on an opportunistic basis. A high-level description of the system has been posted at KernelTrap. This facility can be used to maintain complete duplicates of remote client laptop drives on a server system. This network mirroring facility will be presented at BSDCan 2005 in Ottawa, ON on May 13-15."
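
For reference, the manual approach the summary mentions looks roughly like this; the device name /dev/wd0, the host backupserver, and port 9000 are illustrative assumptions (netcat option syntax also varies between implementations):

# On the server (receiving end): listen on TCP port 9000 and write the
# incoming stream, decompressed, to an image file.
nc -l 9000 | gunzip > wd0.img

# On the client being cloned: read the raw disk and stream it across,
# compressing to save bandwidth on a slow link.
dd if=/dev/wd0 bs=64k | gzip -c | nc backupserver 9000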


156 comments


Rsync (0)

Anonymous Coward | more than 9 years ago | (#12383568)

Why not just use rsync and ssh ?

Re:Rsync (0)

Anonymous Coward | more than 9 years ago | (#12384100)

Why not just use rsync and ssh ?

FTA: partition-level cloning

Re:Rsync (-1, Flamebait)

alexhohio (871747) | more than 9 years ago | (#12384647)

This seems like a good thing, but any time I see something that saves time, I think that it also saves labor, and at some point will require a smaller staff. Good for corporate, but I don't want to be out of work because programs are doing my job!!! Just my two cents.

fp (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#12383569)

fp

Mac OS X (1, Interesting)

ytsejam-ppc (134620) | more than 9 years ago | (#12383582)

I'm not up on my xBSD's, so can someone explain how hard this would be to port to the Mac? This would be perfect for cloning my son's Mac Mini.

Re:Mac OS X (3, Informative)

Anonymous Coward | more than 9 years ago | (#12383805)

If you want something for OS X, I'd suggest one of:
  • CCC (Carbon Copy Cloner)
  • ASR (Apple Software Restore)
  • rsync
  • radmind

Have fun on VersionTracker....

BSD is 10 years too old (-1, Troll)

Anonymous Coward | more than 9 years ago | (#12383584)

Sad but true.

Linux and OS X have supplanted BSDead.

use rsync (1, Informative)

dtfinch (661405) | more than 9 years ago | (#12383586)

It's much less network- and hardware-intensive, and with the right parameters it will keep past revisions of every changed file. Your hard disks will live longer.
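
A minimal sketch of the kind of invocation "the right parameters" alludes to, assuming a hypothetical backup host and source path; --backup-dir keeps a dated copy of anything that changed or was deleted instead of silently overwriting it:

# Push /home to the backup host over ssh; files changed or deleted since
# the last run are preserved under a per-day directory on the receiver.
rsync -a --delete \
      --backup --backup-dir=/backups/changed/$(date +%Y-%m-%d) \
      -e ssh /home/ backuphost:/backups/home/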

Re:use rsync (1)

liquidpele (663430) | more than 9 years ago | (#12383606)

I was about to say the same thing...
Are there really places where rsync is not enough and this bit by bit backup would be needed?

Re:use rsync (4, Informative)

FreeLinux (555387) | more than 9 years ago | (#12383656)

This is a block-level operation, whereas rsync is file-level. With this system you can restore the disk image including partitions. Restoring from rsync would require you to create the partition, format the partition, and then restore the files. Also, if you need the MBR...

As the article says, this is drive imaging whereas rsync is file copying.
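
For illustration, those extra restore steps might look roughly like the following on a BSD-style system; the device names, the newfs invocation, and the backup host are assumptions, and the exact partitioning tools differ per OS:

# At backup time, save the MBR (boot code + partition table) separately:
dd if=/dev/wd0 of=mbr.bin bs=512 count=1

# To restore onto a replacement disk: write the MBR back, recreate and
# mount the filesystem, then pull the files down again over ssh.
dd if=mbr.bin of=/dev/wd0 bs=512 count=1
newfs /dev/wd0a
mount /dev/wd0a /mnt
rsync -a -e ssh backuphost:/backups/laptop/ /mnt/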

Re:use rsync (3, Insightful)

Skapare (16644) | more than 9 years ago | (#12384206)

In most cases, file backups are better. Imaging a drive that is currently mounted writable and actively updated can produce a corrupt image on the backup. This is worse than what can happen when a machine is powered off and restarted: because the sectors are read from the partition over a span of time, things can be extremely inconsistent. Drive imaging is safest only when the partition being copied is unmounted.

The way I make backups is to run duplicate servers. Then I let rsync keep the data files in sync on the backups. If the primary machine has any problems, the secondary can take over. There are other things that need to be done for this, like separate IP addresses for administrative access and for the network services being provided (so that the service addresses can be moved between machines as needed while the administrator can still SSH in to each one individually).

Re:use rsync (2, Interesting)

spun (1352) | more than 9 years ago | (#12385066)

From the article, it sounds like they are using a custom kernel module to intercept all output to the drive. This would keep things from getting corrupted, yes?

Re:use rsync (1)

dougmc (70836) | more than 9 years ago | (#12384576)

Restoring from rsync would require you to create the partition, format the partition, and then restore the files.
Sure, but that's not difficult. Systemimager [systemimager.org] for Linux keeps images of disks of remote systems via rsync, and has scripts that take care of partition tables and such.

Yes, it's written for Linux, but it wouldn't be difficult to update it to work with NetBSD or any other OS. The reason it's Linux specific is that it makes some efforts to customize the image to match the destination machines.

Also, if you need the MBR...
It's not like you can't just handle the MBR separately. It's not difficult.

As the article says, this is drive imaging whereas rsync is file copying.
Whatever you want to call it. In any event, file `copying' is more flexible than merely keeping dd'd images of disks -- you can update systems on the fly (without even rebooting), you can use normal *nix commands on the contents of the images themselves, you can do incremental backups on the images themselves (and only get the changes) and the list goes on.

The big advantage to making images with dd or a similar tool and using that is that 1) it can deal with raw partition formats, where you can't just mount them -- I guess this would be useful for a Tivo, or maybe for an Oracle database (but in that case, you'd be better off using the Oracle backup utilities) or 2) If you had an application that required that files not move around on the disk (pretty much unheard of in *nix, but somewhat common as a copy protection on Windows) dd'ing images would be better.

Overall, I'd think that rsync would be a lot better, and while Systemimager isn't perfect, its architecture is pretty sound and I'd start there.

Re:use rsync (2, Insightful)

x8 (879751) | more than 9 years ago | (#12383764)

What's the fastest way to get a server running again after a disk crash? With rsync, if I backup /home and /etc, I still have to install and configure the OS and other software. That could take a significant amount of time (possibly days). Not to mention the time spent answering the phone (is the server down? when will it be back up?)

But if I have a drive image, I could just put it on a spare server and be back up and running almost immediately. That would require an identical spare server though.

What do the big enterprises who can't afford downtime do to handle this?

Re:use rsync (2, Informative)

dtfinch (661405) | more than 9 years ago | (#12384171)

Just make sure the backup server is properly configured (or very nearly so) I guess.

Our nightly rsync backups have saved us many times from user mistakes (oops, I deleted this 3 months ago and I need it now), but we haven't had a chance to test our backup server in the event of losing one of our main servers. We figure we could have it up and running in a couple hours or less, since it's configured very closely to our other servers, but we won't know until we need it.

Re:use rsync (0)

Anonymous Coward | more than 9 years ago | (#12385461)

Absolutely test it and find out.

Trust me, the last opportunity you want to have to test it is when you need it to work flawlessly.

Untested backups are worse than no backups at all!

Re:use rsync (1)

rainman_bc (735332) | more than 9 years ago | (#12385895)

I recall the last place I was a developer at, we tested our IT department like that a few times, haha... We'd "simulate" a hardware failure, usually by pulling the power, but sometimes we'd get a little more scientific with it... Or we'd simulate a database crash and ask for a backup from our IT department.

We were developers plagued with an IT department that wanted to take control of the application and add red tape to our deployment cycle. While we understood there was a place for it, we worked for a company bleeding red ink, and making it harder to adapt to site changes quickly added unnecessary costs to a cash-strapped org. We believed it to be a typical IT practice: bitch and moan to get control over the servers, and then bitch and moan when there's a lack of resources.

FWIW, I now work in a company that has an IT department blocking developer access. We need it here and it makes sense, but the response time is way slower than if I could do things myself. But we are profitable and I don't care. Much different situation than my last employer.

Anyway, we handed over control of the backups, and made four restore-from-backup requests to IT. Three were duds. So IT couldn't handle backing up a SQL server successfully. A backup is useless if you cannot restore from it. IT should have done a restore from backup to validate them, and they failed to do so.

Then, with our "simulated" hardware crashes, IT was unable to get the sites up and running without our help. We had to walk them through all the steps to restore it, even though it was documented. Their excuse? It was IIS instead of Apache, and IIS is a piece of shit, so no one wanted to bother learning it because it was a piece of shit.

Validating a disaster recovery plan before you implement it is crucial. Sooner or later you'll have to do it...

Re:use rsync (0)

Anonymous Coward | more than 9 years ago | (#12385155)

Or dd and netcat.

Pros and Cons (4, Insightful)

teiresias (101481) | more than 9 years ago | (#12383593)

This would be an extremely sensitive server system. With everyone's hard drive image just waiting to be blasted to a blank hard drive, the potential for misdeeds is staggering. Even in an official capacity, I'd really feel uneasy if my boss were able to take a copy of my hard drive image and see what I've been working on. Admittedly, yes, it should all be work, but here we are allowed a certain amount of freedom with our laptops and I wouldn't want to have that data at my boss's fingertips.

On the flipside, this would be a boon to company network admins especially with employees at remote sites who have a hard crash.

Another reason to build a high-speed backbone. Getting my 80GB hard drive image from Seattle while I'm in Norfolk would be a lot of downtime.

Re:Pros and Cons (0)

Anonymous Coward | more than 9 years ago | (#12383680)

I really feel uneasy if my boss was able to take a copy of my harddrive image and see what I've been working on

your fault for not encrypting.

If you have the right tools it's easy to keep all your important things encrypted, so when you get up, that smartcard, USB dongle, or iButton with the encryption keys goes with you. They can image it all they want; it's worthless random letters and numbers until they get the keys from you.

It's easy to do under windows and even easier under linux or bsd.

The only thing holding you back is laziness.

Re:Pros and Cons (1)

pintpusher (854001) | more than 9 years ago | (#12383752)

The duplication is done right away the modification occured in the main disk.
(from the comments [kerneltrap.org] below article)

Another reason to build a high speed backbone. Getting my 80GB harddrive image from Seattle, while I'm in Norfolk would be a lot of downtime. (parent)

Seems that this thing will sync up every time you call home. So when you're on the road downloading that just-updated massive PPT presentation for your conference... you'll be downloading one copy from the server while the server is desperately trying to update its image of your disk back the other way. Let's just arbitrarily double our bandwidth requirements!

Re:Pros and Cons (1)

xxavierg (538582) | more than 9 years ago | (#12383921)

Even in an offical capacity, I really feel uneasy if my boss was able to take a copy of my harddrive image and see what I've been working on.


Your boss has the right and the ability (at least at my company) to do that. Plus, I leave my personal and secret stuff on my box at home, not at work, which is where it belongs. If I were a boss, I would want the ability to see what my employees are working on; that's why I'd be paying them.

Re:Pros and Cons (0)

Anonymous Coward | more than 9 years ago | (#12385085)

that just means you are a typical PHB that is out of touch with reality.

Re:Pros and Cons (1)

nharmon (97591) | more than 9 years ago | (#12385288)

Off-topic. But do you pay your employees for thinking about work while not at work?

Re:Pros and Cons (0)

Anonymous Coward | more than 9 years ago | (#12385853)

My point was I do not keep my personal budget and my emails to my wife on my computer at work, because that computer is the property of my company and is to be used for work purposes only, and anything on it is their property (e.g. code I am working on). Sure, I surf the web and such, but if they tell me to stop or snoop on me, that is within their rights.

See what I've been working on... (1)

glrotate (300695) | more than 9 years ago | (#12384827)

Sorry. As an IT guy I routinely peruse people's hard drives looking for interesting material. I use Windows Scripting Host to search everyone's drives for MP3s, WMAs, AVIs, and MPGs.

It isn't your laptop. You have no freedom to do anything with it.

Re:Pros and Cons (1)

Matt Clare (692178) | more than 9 years ago | (#12385341)

This isn't a magic wand your boss waves over your box. If s/he has access to your box s/he has access to your box - regardless of how perfect the copy of your stuff will be.

Montreal? (-1, Troll)

Anonymous Coward | more than 9 years ago | (#12383596)

hahaha

Re:Montreal? (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12383633)

You've obviously never been to Montreal, or much better yet, lived there. It's one of the coolest cosmopolitan cities in the world.

Re:Montreal? (0)

Anonymous Coward | more than 9 years ago | (#12383661)

lived there?
muahahahahahahahahahaha
stop that's too much

Re:Montreal? (0)

Anonymous Coward | more than 9 years ago | (#12383679)

It's so cool, Rush is almost from there!

Perfect for those moments... (3, Interesting)

LegendOfLink (574790) | more than 9 years ago | (#12383599)

...when you get that idiot (and EVERY company has at least 1 of these guys) who calls you up asking if it's OK to defrag their hard-drive after downloading a virus or installing spyware. Then, when you tell them "NO", they just tell you that they did it anyways.

Now we can just hit a button and restore everything, a few thousand miles away.

The only thing left is to write code to block stupid people from reproducing.

Re:Perfect for those moments... (1)

Andrewkov (140579) | more than 9 years ago | (#12383699)

Can defragging really cause the spread of a virus? I always assumed defraggers worked at the sector level.

Re:Perfect for those moments... (1)

LegendOfLink (574790) | more than 9 years ago | (#12383817)

The biggest problem usually is that the virus and/or spyware will corrupt files. Inept Windows users for some reason think defragging a hard drive is the answer to every computer problem in the universe. They defrag, and the next thing you know, you can't boot the machine up.

Theoretically, a drive defrag should have no effect on how an operating system runs, since it only re-sorts the physical drive to make file access faster. But for some reason, it messes things up.

Re:Perfect for those moments... (0)

Anonymous Coward | more than 9 years ago | (#12384355)

I think the parent was saying this because when you defrag a drive you can no longer recover lost data.

Basically, the "sorting" that defragging does effectively destroys any data that is in the unused portions of a disk.

So, when you defrag, anything the virus could have deleted, is gone forever.. not even the best recovery tools can get it.

Re:Perfect for those moments... (1)

bcmm (768152) | more than 9 years ago | (#12384787)

not even the best recovery tools can get it
There are forms of forensic data recovery which can sometimes work out the bit that was written before the current bit at a certain disk location. I've forgotten the details, but it involves dismantling the drive and working on the platters with very expensive equipment.

Re:Perfect for those moments... (0)

Anonymous Coward | more than 9 years ago | (#12384874)

that seems a little off-base to me, but I guess it's possible... I just have no clue how it would work.

It would have to be VERY expensive equipment.

If what you're saying is accurate, I'm impressed.

[OT] Sig (0)

Anonymous Coward | more than 9 years ago | (#12384972)

UUOC [catb.org]

Re:Perfect for those moments... (1)

tacarat (696339) | more than 9 years ago | (#12385164)

Defragging won't spread a virus unless the virus attached itself to the defragger. I haven't heard of any viruses that do this. If the virus is the sort that will delete files, then defragging is the worst thing to do. After removing the virus, it's easiest to reclaim lost data with the correct tools when nothing new is written to disk. The files are still "there", but if the file is written over where it physically occupies drive space, then salvage becomes much harder, or even impossible, for most.

Re:Perfect for those moments... (3, Funny)

SecurityGuy (217807) | more than 9 years ago | (#12384358)

The only thing left is to write code to block stupid people from reproducing.


Unfortunately the user interface for the relevant hardware has a very intuitive point and shoot interface.

Re:Perfect for those moments... (1)

rob_squared (821479) | more than 9 years ago | (#12384424)

Just read bash. Give people crack, apparently it's an off button idiots.

We were going to try something like this... (0)

Uptown Joe (819388) | more than 9 years ago | (#12383600)

Using a product by Riverbed called a Steelhead. Basically a caching device; the company promised that we would get almost-LAN speeds over our WAN. It's currently on my counter in my office waiting to be boxed up and sent back. In a pure MS environment it probably would have worked great and delivered the speed increases. We, however, are running an MS/Novell environment... with only NetWare servers at the remote locations.

DOS of the backup server (0)

Anonymous Coward | more than 9 years ago | (#12383602)

...512 byte blocks as a lowest common denominator unit of exchange between client and server. At each client to server connection, the application identifies and maps changes to disk block states. Changed blocks are then encrypted and sent to the server. This indicates that a user could open his or her laptop in an airport, establish a WiFi link to an open access point, and remotely update their laptop backup without effort, knowledge or even good intentions.

What happens if you try to update while running heavy disk writes? Try to back up your swap?

How long before this becomes a hack? (4, Insightful)

Bret Tobey (844402) | more than 9 years ago | (#12383603)

Assuming you can get around bandwidth monitoring, how long before this becomes incorporated into hacking tools. Add this to a little spyware and a zombie network and things get very interesting for poorly secured networks & computers.

Re:How long before this becomes a hack? (0)

Anonymous Coward | more than 9 years ago | (#12383913)

Because spyware and Zombies are such a problem with netBSD...

How long before rsync becomes a hack? (0)

Anonymous Coward | more than 9 years ago | (#12383946)

Assuming you can get around bandwidth monitoring, how long before rsync becomes incorporated into hacking tools. Add it to a little spyware and a zombie network and things get very interesting for poorly secured networks & computers.

Re:How long before rsync becomes a hack? (1)

Bret Tobey (844402) | more than 9 years ago | (#12385885)

RTFA..."The code is being released into the public domain free of license restrictions in any form. The initial proof of concept code has been written to NetBSD, but der Mouse expects the code to be easily portable to systems that allow hooks to be inserted into disk driver code. The code can be accessed via anonymous FTP at ftp.rodents.montreal.qc.ca:/mouse/livebackup/." So once again, how long before this becomes a hack, since it isn't a problem for netBSD but it will be for Windows.

SEE! OUR PEOPLE IN THE FIELD ARE.. (-1, Offtopic)

MrAnnoyanceToYou (654053) | more than 9 years ago | (#12383618)

Browsing porn on those expensive hotel internet lines!

Done this for years (5, Funny)

OutOfMemory (879817) | more than 9 years ago | (#12383625)

I've been using der Mouse to copy files for years. First I use der Mouse to click on the file, then I use der Mouse to drag it to a new location!

Re:Done this for years (1)

mat catastrophe (105256) | more than 9 years ago | (#12383689)

I've been using der Mouse to copy files for years. First I use der Mouse to click on the file, then I use der Mouse to drag it to a new location!

Best. Comment. Ever. Wish I still had the mod points from yesterday.

What is the origin of "der" in "der Mouse" (2, Interesting)

benhocking (724439) | more than 9 years ago | (#12383755)

I, too, immediately thought of German when I saw "der Mouse" (although in German it would be "die Maus", since Maus is feminine). Since they're located in Montreal, however, it seems unlikely that they'd be inclined to use German, and would be more likely to go for a French reference. So I ask, where does the "der" come from?

Should be obvious. (1)

jcuervo (715139) | more than 9 years ago | (#12384927)

From der Swedish Chef.

Bork, bork, bork.

Dump? (1)

wirelessbuzzers (552513) | more than 9 years ago | (#12383629)

Doesn't NetBSD support dump -L the way FreeBSD does? This strikes me as a much more powerful and general solution than this custom tool...

Maybe setup is inconvenient. (2, Informative)

hal2814 (725639) | more than 9 years ago | (#12383631)

Maybe setup is inconvenient. Remote backups using dd and ssh (our method) were a bit of a bear to set up initially, but thanks to shell scripts, cron, and key agents, they haven't given us any problems. I've seen a few guides with pretty straightforward and mostly universal instructions for this type of thing. That being said, I do hope this software will at least get people to start looking seriously at this type of backup, since it lets you store a copy off-site.
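
The kind of setup described might boil down to a single line on the backup server (an /etc/crontab-style entry with a user field); the host and device names here are made up, and key-based ssh authentication is assumed to be in place already:

# Every night at 02:00, pull a compressed raw image of the client's disk.
0 2 * * * root ssh backup@client "dd if=/dev/wd0 bs=64k | gzip -c" > /backups/client-wd0.img.gz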

NetBSD, distributed rigor mortis? (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12383632)

Synchronization and secure writes (0)

Anonymous Coward | more than 9 years ago | (#12383634)

NFS will eventually bite you in the ass if the client assumes writes succeeded. Without digging through the code, can someone address how the 'stuff' referenced in the article handles synchronization and secure writes?

How does this handle active filesystems? (1)

G4from128k (686170) | more than 9 years ago | (#12383700)

If one tries to clone an FS that is active, can this cloning tool handle open/changing files (often the most important, most recently used files on the system)? I remember an odd bug in a Mac OS X cloning tool that would create massive/expanding copies of large files that were mid-download during a cloning.

Automatic Backup for Paranoids? (1)

Cinquero (174242) | more than 9 years ago | (#12383720)

Isn't there an automated network disk backup tool for paranoids like me?

Well, I'm not really paranoid, but I had some cases where faulty file system drivers or bad RAM modules changed the content of some of my files, and I then overwrote my backup with these bad files.

Isn't there any automatic backup solution that avoids such a thing? What I have in mind: there should be several autonomous instances of backup servers (which may actually reside on desktop PCs linked via LAN) that control each other on a regular basis. They should also keep back old versions of files as far as disk space allows.

Then, there should be a KDE tray applet showing me the state of the backup server network. It would indicate if servers haven't been cross-checked for some time or if CRC errors or general malfunction problems have occurred.

Wouldn't that be nice? Never ever care again for your backups. It's all done in the background and in a total paranoid manner.

Re:Automatic Backup for Paranoids? (0)

Anonymous Coward | more than 9 years ago | (#12383800)

Of course there is, but only for MS Windows of course.

You Linux guys can write your own - go and start a project.

Re:Automatic Backup for Paranoids? (2, Interesting)

cloudmaster (10662) | more than 9 years ago | (#12384789)

Use rsync and hardlinked snapshots. There are lots of examples out there. I rolled my own a while back, but if you want something relatively nicely polished and based on that idea, check out dirvish [dirvish.org] (I didn't find that until after I already had my system set up).

I really like having several months worth of nightly snapshots, all conveniently accessible just like any other filesystem, and just taking up slightly more than the space of the changed files.
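
For anyone who hasn't rolled their own, a minimal sketch of the hardlinked-snapshot idea; the paths and the "latest" symlink convention are assumptions, and dirvish packages up the same trick more nicely:

#!/bin/sh
# Nightly snapshot: unchanged files are hardlinked against the previous
# snapshot, so each snapshot looks complete but only changed files use space.
SRC=/home/
DST=/backups/snapshots
TODAY=$(date +%Y-%m-%d)

rsync -a --delete --link-dest="$DST/latest" "$SRC" "$DST/$TODAY/"

# Point "latest" at the snapshot we just made.
rm -f "$DST/latest"
ln -s "$TODAY" "$DST/latest"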

Meh. You can use DRBD on Linux anyway. (1, Informative)

Anonymous Coward | more than 9 years ago | (#12383728)

Well, not a solution for BSD people (unless you're running a BSD under Xen and the top-level Linux kernel is doing the DRBD).

Right solution, wrong problem (2, Interesting)

RealProgrammer (723725) | more than 9 years ago | (#12383846)

While this is cool, as I thought when I saw it on KernelTrap, disk mirroring is useful in situations where the hardware is less reliable than the transaction. If you have e.g., an application-level way to back out of a write (an "undo" feature), then disk mirroring is your huckleberry.

Most (all) of my quick restore needs result from users deleting or overwriting files - the hardware is more reliable than the transaction. I do have on-disk backups of the most important stuff, but sometimes they surprise me.

I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash, /.Recycler, whatever). Obviously the underlying routine would have to do its own garbage collection, deleting trash files by some FIFO or largest-older-first algorithm.

Just a thought.

Re:Right solution, wrong problem (1)

justins (80659) | more than 9 years ago | (#12384117)

disk mirroring is your huckleberry

WTF?

Re:Right solution, wrong problem (1)

RealProgrammer (723725) | more than 9 years ago | (#12384187)

>>huckleberry
>WTF?

Right tool for the right job. See this [phrases.org.uk] .

Re:Right solution, wrong problem (1)

Wiwi Jumbo (105640) | more than 9 years ago | (#12385952)

Thank you, I've never heard of that before...

Re:Right solution, wrong problem (5, Informative)

gordon_schumway (154192) | more than 9 years ago | (#12384303)

I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash, /.Recycler, whatever). Obviously the underlying routine would have to do its own garbage collection, deleting trash files by some FIFO or largest-older-first algorithm.

Done. [netcabo.pt]

Re:Right solution, wrong problem (1)

quamaretto (666270) | more than 9 years ago | (#12384564)

I'd like a system library that would modify the rename(2), truncate(2), unlink(2), and write(2) calls to move the deleted stuff to some private directory (/.Trash, /.Recycler, whatever). Obviously the underlying routine would have to do its own garbage collection, deleting trash files by some FIFO or largest-older-first algorithm.

Why modify the system calls? Keep the system calls simple and orthogonal, so the kernel codebase stays small(er). Write this functionality in userland, starting wherever you are most likely to use it; if that is in programming tasks, write wrappers to the C calls to do this. If it is at the prompt, write a shell script. (Or an alias... [cmu.edu] ) If multiple places, write it in the way that keeps it the most centralized. IMHO, this should have been standard 30 years ago, but there's no reason not to do it now. :)
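
A userland-only sketch of the prompt-level case (the trash location and naming scheme here are arbitrary, and this obviously does nothing for unlink() calls made by programs):

# Put this in your shell profile. "del" moves files into a per-user trash
# directory with a timestamp instead of unlinking them; rm itself is left
# alone so scripts keep working.
del() {
    trash=${HOME}/.Trash
    mkdir -p "$trash"
    for f in "$@"; do
        mv -- "$f" "$trash/$(basename -- "$f").$(date +%s)"
    done
}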

As for the block-level mirroring matter, clearly if you need this sort of mirroring it should be done wherever block-level disk access is done. Still, I would object much less if the driver could live in userland. And I agree that my data-loss problems are minimally related to hard drive failure; the drive is far less likely to fail than my home DSL connection.

Re:Right solution, wrong problem (1)

JacobKreutzfeld (614589) | more than 9 years ago | (#12385949)

This seems very similar to Network Appliance's Filer "SnapMirror" product. It copies changed disk blocks across the net to another system, mainly for disaster recovery purposes, but it could also be used for read-only use (e.g., publishing). NetApp's license fees for this feature are huge, like $40K per side I think.

I'd really like to use this for backup and disaster recovery. Couple it with FreeBSD's snapshot and you have a large part of the NetApp functionality.

nothing new (2, Interesting)

Afroplex (243562) | more than 9 years ago | (#12383859)

Novell ZENworks has had this capability for some time in production environments. It also integrates with their management tools, so it is easy to use on an entire network. To say this technology is newly discovered is a far cry from the truth. They also use Linux on the back end of the client to move the data to the server.

It is nice, though, to have something like this in the open source world. Competition is good.

Re:nothing new (0)

Anonymous Coward | more than 9 years ago | (#12384042)

You are right. Such products have been available commercially for a long time.

Another example is the Softek Replicator [softek.com] This too works at the block level and supports many Unix variants.

Re:nothing new (0)

Anonymous Coward | more than 9 years ago | (#12384297)

zenworks doesn't do this at all
zfd has pretty good remote imaging from pxe/cd etc
but that's from a dead machine, this is a live clone of a running system
for fast, reliable multicasting of images to multiple machines
nothing touches frisbee (emulab.net)

Re:nothing new (0)

Anonymous Coward | more than 9 years ago | (#12384604)

(xfsdump || xfsrestore) seems to work okay for me.
When did NetBSD start supporting XFS? (Oops, me bad!)

How Soon (1)

defore (691193) | more than 9 years ago | (#12383901)

How soon do you think this will be available in the major Linux distros? I would love to have this for my Debian machine. Perhaps I wouldn't have had to spend all last Saturday rebuilding my machine and restoring individual files.

SIGS!!!We don't need no stinkin sigs

lol...linux/bsd has such a long way to go (-1, Troll)

Anonymous Coward | more than 9 years ago | (#12383955)

Sweet! Welcome to 10 year old Windows technology.

Re:lol...linux/bsd has such a long way to go (0)

Anonymous Coward | more than 9 years ago | (#12384729)

Reading comprehension > You

I'm sure you were referring to Ghost, which is great stuff, however, I would hardly consider that "Windows" technology, considering that you can clone Linux systems as well.

You also fail to realise this can be done *live*, while the system still runs, where Ghost can not.

Wacky idea (2, Insightful)

JediTrainer (314273) | more than 9 years ago | (#12384080)

Maybe I should patent this. Ah well, I figure if I mention it now it should prevent someone else from doing so...

I was thinking - I know how Ghost supports multicasting and such. I was thinking about how to take that to the next level. Something like Ghost meets BitTorrent.

Wouldn't it be great to be able to image a drive, use multicast to get the data to as many machines as possible, but then use BitTorrent to get pieces to any machines that weren't able to listen to the multicast (ie it's on another subnet or something) and to pick up any pieces that were missed in the broadcast, or get the rest of the disk image if that particular machine joined in the session a little late and missed the first part?

I think that would really rock if someone wanted to image hundreds of machines quickly and reliably.

I'm thinking it'd be pretty cool to have that server set up, and find a way to cram the client onto a floppy or some sort of custom Knoppix. Find server, choose image, and now you're part of both the multicast AND the torrent. That should take care of error checking too, I guess.

Anybody care to take this further and/or shoot down the idea? :)

Re:Wacky idea (1)

jmcneill (256391) | more than 9 years ago | (#12384214)

Wouldn't it be great to be able to image a drive, use multicast to get the data to as many machines as possible, but then use BitTorrent to get pieces to any machines that weren't able to listen to the multicast (ie it's on another subnet or something) and to pick up any pieces that were missed in the broadcast, or get the rest of the disk image if that particular machine joined in the session a little late and missed the first part?

Multicast will work across subnets (you just need to set the TTL > 1). Typically what you would do is to enable multicast through your organization's network, and allow everybody to join this group. BitTorrent would not be required, as you probably don't want to be distributing your custom OS images to the outside world.

Re:Wacky idea (1)

squallbsr (826163) | more than 9 years ago | (#12384365)

BitTorrent would not be required, as you probably don't want to be distributing your custom OS images to the outside world.

With BitTorrent you could set up your server as the tracker and multicaster for your images. BitTorrent doesn't HAVE to make it out onto the internet, you just keep the BT traffic inside your corporate network. The BT would be extremely helpful to distribute the load across multiple computers instead of just hitting one machine.

Another thing I was thinking (usually a bad thing): shouldn't one pick between BT or multicasting? The multicast server is just spitting out the same bits to everybody on the network (every other machine has to be on the same page at the same time), so the server shouldn't be the bottleneck. However, if one were to choose BT, the bottleneck would be the network (not the server), because the file download would be distributed but not synchronized. It would be interesting to see how the network would respond running multicasted BitTorrent.

Don't mind me, just talking out my a$$. Its not like I'm a network guru or anything (even though my job title is Network Applications Developer, it just means that I write windoze software that people will use on their networked computer)

Re:Wacky idea (1)

jmcneill (256391) | more than 9 years ago | (#12385165)

Multicasted BitTorrent is a complete waste. The idea with multicast is that there is no real "load" on the sender -- you can run an open-loop multicaster with your image, and people can join the group to download it. Alternately, you can use a protocol like MTFTP to make it a bit more "on-demand".

Either way, bittorrent is completely useless in an environment where multicast is available.

Re:Wacky idea (1, Informative)

Anonymous Coward | more than 9 years ago | (#12384323)

Check frisbee (emulab.net) for fast, reliable multi/unicasting of system images.

How does this compare to md over a network block (1, Insightful)

Anonymous Coward | more than 9 years ago | (#12384279)

I've used Linux for years to do this using md running RAID1 over a network block device. It works very well unless you have to do a resync. Is this better than that?

I'm asking because I'm backing-up about a dozen servers in real-time using this method, and if this method is more efficient, then I might be able to drop my bandwidth usage and save money.
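
For readers who haven't seen this setup, a rough sketch of the Linux side; the device names, port, and host are assumptions, and nbd-server/nbd-client invocation details vary between versions:

# On the backup machine: export a spare partition as a network block device.
nbd-server 2000 /dev/sdb1

# On the primary server: attach the exported device locally...
nbd-client backuphost 2000 /dev/nbd0

# ...then mirror the local data partition onto it with md RAID1.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/nbd0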

should be better (0)

Anonymous Coward | more than 9 years ago | (#12385469)

This tool should be better in the case where you are more interested in backups. RAID1 also ensures data integrity when doing reads.

dd over a LAN (1)

ndverdo (799508) | more than 9 years ago | (#12384306)

I did that 12 years ago on AIX with no problems, as long as (a) the drives you dd from and to are sound and (b) there are no transmission failures beyond what rsh (at that time) would retry and mask.

Enterprise solution of the year (-1, Troll)

penguin phil (880033) | more than 9 years ago | (#12384444)

"This facility can be used to maintain complete duplicates of remote client laptop drives to a server system" What great news! Now I can totally ditch any sensible file-backup scripts I was using & just cram every last byte, relevant or not, down some tiny little hotel connection (on the way making sure as much of my data is captured by dodgy proxies as possible). Then I'll force all of my lusers to save their images in bitmap format! Another BSD innovation then.

This is great (1)

raddan (519638) | more than 9 years ago | (#12384534)

I just took one of our mailservers offline a minute ago to do a block-level copy, so this would be fantastic. I develop images for our machines, e.g., mailserver, etc., and then dd them onto other drives. When I update one machine, I then go around and update the others with the new image. This saves me tons of time, and we do a similar thing with desktops and Norton Ghost (although, if I'm not mistaken, that is actually a file-level copy).

And since we're running OpenBSD on those machines, porting this should be fairly straightforward... although now that I look at it, he adds some patches for sockets... eugh...

Scalability Forking? (1)

Doc Ruby (173196) | more than 9 years ago | (#12384690)

How about disk cloning across servers, for on-demand scalability? As a single server reaches some operating limit, like monthly bandwidth quota, disk capacity, CPU load, etc, a watchdog process clones its disks to a fresh new server. The accumulating data partition may be omitted. A final script downs the old server's TCP/IP interface, and ups the new one with the old IP# (/etc/hostname has already been cloned over). It's like forking the whole server. A little more hacking could clone servers to handle load spikes (not just filling total capacity), running simultaneously under DNS load balancing scheme, like simple round-robin host/IP resolution. And cloning across a WAN could offer geographical distribution for disaster preemption. Is this stuff close to being a .deb package yet?

Itanium imaging (0)

Anonymous Coward | more than 9 years ago | (#12384880)

Does anyone know if this, or any other product for that matter, can be used for making images on itanium machines?

WTF (4, Informative)

multipartmixed (163409) | more than 9 years ago | (#12384963)

Why on earth are people always so insistent on doing raw-level dupes of disks?

First of all, it means backing up a 40GB disk with 2GB of data may actually take 40GB of bandwidth.

Second of all, it means the disk geometries have to be compatible.

Then, I have to wonder if there will be any wackiness with things like journals if you're only restoring a data drive and the kernel versions are different...

I have been using ufsdump / ufsrestore on UNIX for... decades! It works great, and it's trivial to pump over ssh:

# ssh user@machine ufsdump 0f - /dev/rdsk/c0t0d0s0 | (cd /newdisk && ufsrestore f -)

or


# ufsdump 0f - /dev/rdsk/c0t0d0s0 | ssh user@machine 'cd /newdisk && ufsrestore f -'

It even supports incremental dumps (see: "dump level"), which is the main reason to use it over tar (tar can do incrementals with find . -newer X | tar -cf filename -T -, but it won't handle deletes).

So -- WHY are you people so keen on bit-level dumps? Forensics? That doesn't seem to be what the folks above are commenting on.

Is it just that open source UNIX derivatives and clones don't have dump/restore utilities?

Re:WTF (1)

Devi0s (759123) | more than 9 years ago | (#12385125)

WHY are you people so keen on bit-level dumps? Forensics?

Yes!

EnCase Enterprise Edition costs $10,000 per license. This software basically mimics EnCase's functionality for free.

If der Mouse were to port this to the Windoze world, and get CFTT (http://www.cftt.nist.gov/ [nist.gov]) to validate its forensic soundness, he could make a fortune undercutting Guidance Software.

Re:WTF (0)

Anonymous Coward | more than 9 years ago | (#12385670)

Linux has dump/restore utilities, that is if you're using SGI's XFS filesystem.

Check out xfsdump/xfsrestore: http://oss.sgi.com/projects/xfs/

mo3 0p (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#12384989)

trouble. It

The Dark Side of Image Backups (4, Informative)

RonBurk (543988) | more than 9 years ago | (#12385069)

Image backups have great attraction. Restoring is done in one big whack, without having to deal with individual applications. Absolutely everything is backed up, so no worries about missing an individual file. etc. So why haven't image backups replaced all other forms of backup? The reason is the long list of drawbacks.

  • All your eggs are in one basket. If a single bit of your backup is wrong, then the restore could be screwed -- perhaps in subtle ways that you won't notice until it's too late to undo the damage.
  • Absolutely everything is backed up. If you've been root kitted, then that's backed up too. If you just destroyed a crucial file prior to the image backup, then that will be missing in the restore.
  • You really need the partition to be "dead" (unmounted) while it's being backed up. Beware solutions that claim to do "hot" image backups! It is not possible, in the general case, for a backup utility to handle the problem of data consistency. E.g., your application stores some configuration information on disk in a way that happens to require two disk writes. The "hot" image backup software happens to back up the state of the disk after the first write, but before the second. If you then do a restore, the disk is corrupted as far as that application is concerned. How many of your applications are paranoid enough to survive arbitrary disk corruption gracefully?
  • Size versus speed. Look at the curve of how fast disks are getting bigger. Then look at the curve of how fast disk transfer speeds are getting faster. As Jim Gray [microsoft.com] says, disks are starting to behave more like serial devices. If you've got a 200GB disk to image and you want to keep your backup window down to an hour, you're out of luck.
  • Lack of versioning. Most disk image backups don't offer versioning, certainly not at the file level. Yet that is perhaps the most common need for a backup -- I just messed up this file and would like to get yesterday's version back, preferably in a few seconds by just pointing and clicking.
  • Decreased testing. If you're using a versioned form of file backup, you probably get to test it on a fairly regular basis, as people restore accidental file deletions and the like. How often will you get to test your image backup this month? Then how much confidence can you have that the restore process will work when you really need it?

Image backups certainly have their place for people who can understand their limitations. However, a good, automatic, versioning file backup is almost certainly a higher priority for most computer users. And under some circumstances, they might also want to go with RAID for home computers [backupcritic.com] .

Re:The Dark Side of Image Backups (2, Interesting)

adolf (21054) | more than 9 years ago | (#12385618)

Image backups certainly have their place for people who can understand their limitations. However, a good, automatic, versioning file backup is almost certainly a higher priority for most computer users.

Great. Now, could you please enlighten us as to what a good, automatic, versioning file-based backup system might consist of?

AFAICT, this doesn't seem to exist. It doesn't matter how much sense it makes, or how perfect the idea is. It is simply unavailable.

In fact, the glaring lack of such a capable system almost seems to indicate that it is a victim of the "pick any two" rule.

So where is it?

(And, no. A few programs tied together with a ream of Perl or shell script that needs to be modified in order to function does not constitute a working system, nor does a HOWTO with instructions on coding one.

Non-programmers, believe it or not, often have important data to back up, too, and being able to code should not be a prerequisite for keeping important stuff backed up. That is, unless you programmers really do think that it'd be no big deal if your loan officer lost your mortgage just hours before closing, or when the accountant's machine trashes your financials.)

It is official; Netcraft confirms: BSD is dying (-1, Troll)

Anonymous Coward | more than 9 years ago | (#12385452)

It is official; Netcraft confirms: BSD is dying

One more crippling bombshell hit the already beleaguered BSD community when IDC confirmed that BSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming on the heels of a recent Netcraft survey which plainly states that BSD has lost more market share, this news serves to reinforce what we've known all along. BSD is collapsing in complete disarray, as fittingly exemplified by failing dead last in the recent Sys Admin comprehensive networking test.

You don't need to be a Kreskin to predict BSD's future. The hand writing is on the wall: BSD faces a bleak future. In fact there won't be any future at all for BSD because BSD is dying. Things are looking very bad for BSD. As many of us are already aware, BSD continues to lose market share. Red ink flows like a river of blood.

BSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time Corel developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: BSD is dying.

Let's keep to the facts and look at the numbers.

BSD Admin leader Theo states that there are 7000 users of BSD Admin. How many users of ConsoleOne are there? Let's see. The number of BSD Admin versus ConsoleOne posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 ConsoleOne users. BSD posts on Usenet are about half of the volume of ConsoleOne posts. Therefore there are about 700 users of BSD. A recent article put BSD at about 80 percent of the BSD market. Therefore there are (7000+1400+700)*4 = 36400 BSD users. This is consistent with the number of BSD Usenet posts.

Due to the troubles of Word Perfect, abysmal sales and so on, Corel is going out of business and will probably be taken over by BSD who sell another troubled OS. Now BSD is also dead, its corpse turned over to yet another charnel house.

All major surveys show that BSD has steadily declined in market share. BSD is very sick and its long term survival prospects are very dim. If BSD is to survive at all it will be among OS dilettante dabblers. BSD continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, BSD is dead.

Fact: BSD is dying

the shared secret (1)

digitaldc (879047) | more than 9 years ago | (#12385474)

The facility today supports symmetric cryptography, based on a shared secret. The secret is established out-of-band of the network mirror facility today. User identification, authentication and session encryption are all based on leveraging the pre-established shared secret.
----------- Confucious say: "The shared secret is no longer a secret."

Requiem for the FUD (0)

Anonymous Coward | more than 9 years ago | (#12385497)

// Please *don't* mod this up. It has [slashdot.org] already [slashdot.org] been [slashdot.org] done! [slashdot.org] Thx

... facts are facts. ;)

FreeBSD:
FreeBSD, Stealth-Growth Open Source Project (Jun 2004) [internetnews.com]
"FreeBSD has dramatically increased its market penetration over the last year."
Nearly 2.5 Million Active Sites running FreeBSD (Jun 2004) [netcraft.com]
"[FreeBSD] has secured a strong foothold with the hosting community and continues to grow, gaining over a million hostnames and half a million active sites since July 2003."
What's New in the FreeBSD Network Stack (Sep 2004) [slashdot.org]
"FreeBSD can now route 1Mpps on a 2.8GHz Xeon whilst Linux can't do much more than 100kpps."

NetBSD:
NetBSD, for When Portability and Stability Matter (Oct 2004) [serverwatch.com]
NetBSD sets Internet2 Land Speed World Record (May 2004) [slashdot.org]
NetBSD again sets Internet2 Land Speed World Record (Sep 2004) [netbsd.org]

OpenBSD:
OpenBSD Widens Its Scope (Nov 2004) [eweek.com]
Review: OpenBSD 3.6 shows steady improvement (Nov 2004) [newsforge.com]
OpenSSH (OpenBSD subproject) has become a de facto Internet standard. [openssh.org]

*BSD in general:
Deep study: The world's safest computing environment (Nov 2004) [mi2g.com]
"The world's safest and most secure 24/7 online computing environment - operating system plus applications - is proving to be the Open Source platform of BSD (Berkeley Software Distribution) and the Mac OS X based on Darwin."
BSD Success Stories (O'Reilly, 2004) (pdf) [oreilly.com] ~ from Onlamp BSD DevCenter [onlamp.com]
"The BSDs - FreeBSD, OpenBSD, NetBSD, Darwin, and others - have earned a reputation for stability, security, performance, and ease of administration."
..and last but not least, we have the cutest mascot as well - undisputedly. ;) [keltia.net]

--
Being able to read *other people's* source code is a nice thing, not a 'fundamental freedom'.
