Ext3cow Versioning File System Released For 2.6

Zachary Peterson writes "Ext3cow, an open-source versioning file system based on ext3, has been released for the 2.6 Linux kernel. Ext3cow allows users to view their file system as it appeared at any point in time through a natural, time-shifting interface. This can be very useful for revision control, intrusion detection, preventing data loss, and meeting the requirements of data retention legislation. See the link for kernel patches and details."
  • So which is it? (Score:3, Interesting)

    by EveryNickIsTaken ( 1054794 ) on Wednesday May 02, 2007 @08:06AM (#18954749)

    Ext3cow, an open-source versioning file system based on ext3, has been released for the 2.6 Linux kernel. Ext3cow allows users to view their file system...
    Well, is it the file system, or the file system manager?
    • Re:So which is it? (Score:5, Informative)

      by Bob54321 ( 911744 ) on Wednesday May 02, 2007 @08:10AM (#18954789)
      From the example screenshot [www.ext3cow.com] it appears it is a file system. You take a snapshot of your system at some point in time and it stores this data even when files change. Of course, with any file system it is important to have functionality that allows you to view the files as well...
    • Well, is it the file system, or the file system manager?

      I can't tell, the site is experiencing the /. effect.

Mirror of the patch (I grabbed it when I saw this in the firehose) is available here [echoreply.us] until my server gets sluggish too.

in /usr/src type: patch -p1 < linux-2.6.20.3-ext3cow.patch

The site said it's not been tested with other kernel versions, but if you feel brave just s/linux-2\.6\.20\.3/your-version/g. I haven't tried it, but it should work.
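
      A minimal sketch of the full sequence, assuming the patch file sits in /usr/src next to a linux-2.6.20.3 source tree (the directory names, the -p level, and the config option name are guesses, not taken from the ext3cow docs):

        cd /usr/src/linux-2.6.20.3
        patch -p1 < ../linux-2.6.20.3-ext3cow.patch    # apply the copy-on-write patch to the tree
        make menuconfig                                # enable the ext3cow filesystem option
        make && make modules_install && make install   # rebuild and install as usual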

It went dark just around the time I was getting the docs and uti

  • What a name (Score:3, Funny)

    by Anonymous Coward on Wednesday May 02, 2007 @08:07AM (#18954761)
    So is it EXT or is it just a FAT cow?
Well, they were originally going to call it "Rosie O'Donnell Versioning File System" but the name was too long and the acronym ROVFS just conjured images of that awful rap [youtube.com] by "MC Rove" at the awards dinner.

  • Overhead? (Score:3, Interesting)

    by HateBreeder ( 656491 ) on Wednesday May 02, 2007 @08:07AM (#18954765)
    Couldn't find real-world information about space and performance overhead.

    Does it store many copies of each file? or only the differences between the old and the new version?
    • Re:Overhead? (Score:4, Informative)

      by JoeD ( 12073 ) on Wednesday May 02, 2007 @08:31AM (#18954963) Homepage
      Check the "Publications" link. The first one is an article in "ACM Transactions on Storage".

      It's a bit dry, but there is an explanation of how it stores the versions, plus some performance benchmarks.
    • Re: (Score:3, Informative)

      by DaveCar ( 189300 )

      Couldn't read TFA (slashdotted), but I would *imagine* that 'cow' is copy on write and that it just uses new blocks for the changes - so only the differences, but not minimal differences.
      • Re: (Score:3, Informative)

        by anilg ( 961244 )
COW has been present for a long time in ZFS [opensolaris.org] since Solaris 10. The overhead there is negligible and it's quite stable. Last I heard, it was being ported to FUSE on Linux. It's upcoming in the next releases of FreeBSD and OS X. Wiki has a pretty good article [wikipedia.org].
  • by BuR4N ( 512430 ) on Wednesday May 02, 2007 @08:08AM (#18954769) Journal
This might be far-fetched, but how far off is it to use these filesystems as a revision control system replacement?

Never tinkered with any of these filesystems, but wouldn't it be very comfortable for at least us developers to have a filesystem that worked something like Subversion? Just hook up something on the network and use it as the central code repository.
    • The C in CVS. (Score:5, Informative)

      by SharpFang ( 651121 ) on Wednesday May 02, 2007 @08:28AM (#18954929) Homepage Journal
      Concurrent...

      Sure you can "go back in time", but two users working on the same file at the same time would be a pain. Networking would require additional layers - even plain SAMBA/NFS, but still. Plus a bunch of userspace utilities as UI to access it easily.

      It's not bad as a backend for such a system, just like MySQL is good as a backend for a website, but by itself it's pretty much worthless.
    • It's a bad idea to use this kind of thing for version control, IMX. The documentation through TFA is very... sparse.

      Q: What happens to old snapshots when the disk begins to fill up?
      Q: How do I manage snapshots?
      Q: Are snapshots atomic?
      Q: What happens when a snapshot fails? What can cause a snapshot to fail?

      Windows Server 2003's Shadow Copies works in much the same way, AFAICT, and MS goes out of their way to caution against using Shadow Copies as a replacement for backup or version control. I expect this
    • by hey! ( 33014 )
      Version control is to this thing as keeping your vehicle under control while you drive is to having an airbag.

      The point of version control is embodied in the name -- it gives you control. Not only does it give you the power to time travel to specific dates, it gives you the ability to find specific versions, to branch and merge, to mediate cooperation between developers.

      This sort of thing would be useful in certain version control scenarios, e.g. the guy who checked out the software and has been modifying i
    • by cduffy ( 652 )
      Subversion's backend is a transactional filesystem (though it sits on top of a BDB interface or a separate FS), and many of the tools it provides work by describing a set of changes as filesystem operations (go down this directory, now go down that directory, now open this file, now seek to this position, now write this text...)

      That said, revision control is about much, much more than just storing snapshots that can be retrieved later. Think about branching and merging -- particularly intelligent merge algo
    • The first thing I thought when I saw the headline was this: Don't we already have GIT?

      Take a look at this: http://kerneltrap.org/node/4982 [kerneltrap.org] Note particularly the bit where Linus says

      In many ways you can just see git as a filesystem - it's content-addressable, and it has a notion of versioning, but I really really designed it coming at the problem from the viewpoint of a _filesystem_ person (hey, kernels is what I do), and I actually have absolutely _zero_ interest in creating a traditional SCM system.

This might be far-fetched, but how far off is it to use these filesystems as a revision control system replacement?

      We should probably ask some VMS users about that. They had a versioned filesystem 20 years ago.
      • Re: (Score:3, Interesting)

        by scottv67 ( 731709 )
        We should probably ask some VMS users about that. They had a versioned filesystem 20 years ago.

It's actually closer to 30 years ago. I can't believe VMS is celebrating its thirtieth birthday this year.

        http://h71000.www7.hp.com/openvms/25th/index.html [hp.com]

Having multiple versions of a file is *extremely* handy. That feature saved my bacon many a time. For those of you who have never been fortunate enough to log in to a VMS system, the file versioning looks like this to the user: scott_file.txt;5 s
  • True undelete (Score:5, Insightful)

    by ex-geek ( 847495 ) on Wednesday May 02, 2007 @08:13AM (#18954827)
Undelete, as opposed to half-assed, desktop-based trash can implementations, is something I've always been missing on Linux. And yes, I generally know what I'm doing, but I'm also human and do make mistakes.
I've always wondered about this. Aren't files always eventually deleted with an unlink() call? What reason is there that unlink() can't be modified to instead move the link to a .Trash/ which is then scrounged when more space is needed? You could either auto-delete the oldest files, or, if you wanted to not affect FS fragmentation, delete a file whenever you needed to clobber one of its sectors. Sure, performance will drop when you get a drive full of deleted files that have to be cleared every time you write

      • Re:True undelete (Score:4, Informative)

        by xenocide2 ( 231786 ) on Wednesday May 02, 2007 @09:07AM (#18955373) Homepage
There are a couple of reasons for it not being in the kernel. First, it misleads users who expect some degree of data security. The good news is that this sort of person likely follows kernel patches to the FS and would be aware of the problem, possibly even writing a script that replaces rm with a real-rm.

The second argument is that it's better handled in user space, so the OS doesn't have to make that sort of policy. There's no reason you can't just alias rm to some .Trash, or configure your Desktop Environment to do so (GNOME does, for example). There are all sorts of things you have to decide that might not suit everyone. For example, if I delete a file on a USB drive, does it go in a .Trash storage on the USB drive, or do we copy it over to a main .Trash folder? Many people don't realize they have to empty the trash to reclaim space on their thumbdrive in GNOME.

        The final argument I can come up with is security problems. We can't have one global .Trash bin in a multiuser system. And quotas. And permissions.

Reading historic archives of the LKML [iu.edu] suggests it's at least come up once. I guess Torvalds' opinion is that anything that CAN go in userspace SHOULD. Can't explain the webserver in the kernel though. Perhaps that opinion has changed some time in the last 10 years?
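
        As a rough illustration of the userspace approach (the function and the ~/.Trash layout here are just a sketch, not what GNOME actually does):

          # in ~/.bashrc: divert interactive rm into a per-user trash directory
          trash() { mkdir -p "$HOME/.Trash" && mv -- "$@" "$HOME/.Trash/"; }
          alias rm=trash
          alias real-rm='/bin/rm'   # the "real-rm" mentioned above, for when you really mean it
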
        • by Cyberax ( 705495 )
A kernel-level webserver has real performance benefits. And it is not enabled by default.
        • I still miss the "salvage" from Netware-- the ability to restore any revision to a file as disk space permits. Just hacking rm doesn't fix someone overwriting a file.

          As for security, you could disable salvage for sensitive volumes or directories, or have firm policy based wipes of deleted files on a scheduled basis. Often times, Salvage was most useful when a problem was discovered 20 minutes after it occurred.

          It sounds like ZFS will do a better job of allowing snapshots to support something like this, bu
        • by Ant P. ( 974313 )

          Can't explain the webserver in kernel though.

          The Tux server has never been a part of the official tree. What's there to explain?
    • Re: (Score:2, Interesting)

      by jonadab ( 583620 )
      Undelete isn't what makes this really cool, IMO. I don't generally delete stuff I still want, so that isn't really a big issue.

      What I want, that a versioning filesystem can deliver, is the ability to revert a file back to an earlier version, after I've saved changes that turn out to be undesirable. This is a mistake I *do* make from time to time, often enough that I have been really hoping for a versioning filesystem in modern operating systems. This, to me, is a killer feature. I'm currently using Free
      • and I've been waiting, waiting, hoping, wondering why we don't have it in modern operating systems. I *want* this

Look up Windows Server 2003, Windows XP, Vista...
      • I have been wanting it ever since I saw the automatic versioning on OpenVMS, and I've been waiting, waiting, hoping, wondering why we don't have it in modern operating systems.

        Someone I know has the email signature "DIGITAL had it *then*. Don't you wish you could buy it *now*?"
    • by gmack ( 197796 )
Undelete in Windows is also desktop-based. Ever notice that uninstallers don't delete to the Recycle Bin? You can also try opening a cmd window and deleting something with del and notice that it does not appear in the Recycle Bin.

    • You mean, for example, remapping the unlink calls to libc to actually move things to ~/Trash?

      Surely you wouldn't want that on all accounts. Only for users. It'd be chaos if every single script on your computer that generated temp files had them moved rather than deleted.
      How about putting it into the .bashrc (or .zshrc, or whatever) file to be loaded using the preload trick?

      That way, all users that have that .bashrc file can have it on, and everything else won't.

      There is a library for this. [nyu.edu]
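
      A minimal sketch of the preload trick from .bashrc, assuming a library named libtrash.so that intercepts unlink() (the name and path are placeholders, not necessarily the library linked above):

        # only interactive shells get the unlink-intercepting library
        if [ -n "$PS1" ]; then
            export LD_PRELOAD=/usr/local/lib/libtrash.so
        fi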

      something I've a
  • Well done to all who worked on this patch. Guess this means you've almost caught up with OpenVMS [wikipedia.org] now, then? [throws another log of karma on the fire].

    All joking aside, I never really liked VMS much. It was extremely good at being very verbose whilst being extremely bad at clear English.
    • Well done to all who worked on this patch. Guess this means you've almost caught up with OpenVMS now, then?

In the sense that you had multiple versions of every file? Well yeah, but it is on a per-file basis rather than a per-volume basis, so you can't ask it to give you the entire volume (or even a directory) as it was at a particular time.

      And I remember being caught by the 32000 version number limit, with a batch job which maintained a status file and purged the file after every run. The version number sti

  • by ntufar ( 712060 ) on Wednesday May 02, 2007 @08:22AM (#18954881) Homepage Journal
    It reminds me of VMS file versions.

In VMS if you had a file named article.txt, each time you modified and saved it in the editor, a new version was created named article.txt;1, article.txt;2, article.txt;3 and so forth. So after a long session of edits and saves you could end up with a hundred copies of the file in your directory. A lot of clutter in the directory, but easy access to older versions of the files.

With Ext3cow you basically get the same functionality in a slightly different way. By default you see only the article.txt file. If you need to access a previous version of the file you need to specify a cryptic code like this: article.txt@10233745. A bit cumbersome but, hey, how often do you access older versions of your files anyway? Looks better than VMS' approach.

This filesystem seems like a perfect solution for me as I am writing my Ph.D. thesis. Currently I take a backup every day and name it thesis20070420.tar.bz2, thesis20070421.tar.bz2, thesis20070422.tar.bz2 and so forth in case I need to go back and see how it looked some time ago.

However, in my home directory I have a lot of large audio and video files that I would never want to be versioned. I wonder if Ext3cow keeps extra copies of the files if I move them around or change the file name but do not modify the content. Probably I would have to make a new partition, put the text files I am working on there under Ext3cow, and leave my media files on ext3.
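
    For the thesis use case, a rough sketch of what the ext3cow workflow might look like, assuming the explicit snapshot command and the name@epoch syntax described elsewhere in this thread (the command name, its output, and whether directories can be listed through @epoch are all assumptions; the project site is slashdotted):

      snapshot /home/me/thesis                    # take a point-in-time snapshot, e.g. from a daily cron job
      ls /home/me/thesis@10233745                 # list the directory as it was at that epoch
      cp /home/me/thesis/intro.tex@10233745 .     # pull back one file as it was then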

    • Why don't you use svn?
      • by osgeek ( 239988 )
        Or better yet, SVK [bestpractical.com].
This solution certainly helps if you accidentally delete something or need to go back to an older version. SVN is one solution, but it is a bit more explicit, while solutions like this and Apple's Time Machine help avoid needing to remember to update your repository. It should be noted that this doesn't replace backups, since it does not protect against hard-drive corruption. I do have a few questions though:
      - what are the security considerations here?
      - can you delete the
    • by GauteL ( 29207 )
      "If you need to access a previous version of the file you need to specify a cryptic code like this: article.txt@10233745. A bit cumbersome but, hey, how often you access older version of your file anyways. Looks better than VMS' approach."

This is exactly what a graphical file manager should abstract away through concepts such as Time Machine [apple.com].

      This announcement is just Linux file systems starting to catch up with features from file systems such as ZFS. Very good news.
This is exactly what a graphical file manager should abstract away through concepts such as Time Machine [apple.com].


I know this is Slashdot, but why reference a non-shipping product as the standard GUI example for this feature when it has been in use in Windows Server 2003 and Windows XP for over 4 years?

        Vista even goes a few steps beyond previous Windows versions and Time Machine.

        PS from the last Beta I played with, Time Machine's UI has a ways to go to catch up to the simplicity of right click -> previous
    • by arivanov ( 12034 )
      Not quite.

This is more like NetApp and other high-end NAS and SAN systems, where a facility like this is used for backup. The backup system looks at a snapshot taken at X:00 and backs it up at leisure while the users continue to read/write to the filesystem on top of it. Once the backup is complete you obsolete the checkpoint on which the backup was operating. As a result you have a true backup of the filesystem at point X, not something spread out from X to X+N hours.
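
      As a rough sketch of that flow, assuming an explicit snapshot command and that the frozen view can be read back through the @epoch suffix (both are assumptions about ext3cow's tooling, not confirmed from its docs):

        EPOCH=$(snapshot /srv/data)                          # freeze a point-in-time view and note the epoch
        tar czf /backup/data-$EPOCH.tar.gz /srv/data@$EPOCH  # back it up at leisure while users keep writing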

      This is a killer feature as far as any
    • by cortana ( 588495 )
      Interesting... but tracking the revisions of a file by name has some limitations. What happens if I rename a file (also to another directory)? What happens if I rename a directory itself? Is the file metadata (owner, access permissions, modification times, extended attributes (including selinux labels, ACLs and user extended attributes)) versioned?

      I guess some of this info is on the project's home page, which is down at the moment...
    • by rbanffy ( 584143 )
      If I got it correctly, you would only have a new copy of the directory when you rename or move the file. The file will only be copied if it is changed.

      And it is only necessary if you are doing it based on files. If you do it based on blocks, then only the blocks that were changed get copied.

      It seems quite cool. Too bad all servers even remotely related to it appear to have been slashdotted.
    • VMS was my first real OS, and I don't miss it at all. Its versioning was fairly useless--one of the first commands everyone learned was PURGE, to get rid of all of the clutter. In order to be useful, other versions have to be out of view during normal operation...
    • by caseih ( 160668 )
      ext3 and ext3cow are inode file systems. So if you rename the file or move it anywhere on the disk, the inodes allocated to the file stay the same. With ext3cow, the inodes that make up the versions would stay the same too.
    • by physicsnick ( 1031656 ) on Wednesday May 02, 2007 @11:33AM (#18957503)
      Hmm, when I read your post I thought I'd come here and suggest Subversion. Seems everyone else has done the same.

      You really should use it. It's much easier to set up than you'd think, especially if you're on a Debian/Ubuntu box. If you use the file:/// syntax, you don't even need any kind of daemon or http server running; the client can do everything on its own. Say your thesis is currently sitting in ~/thesis, it's this easy to set up:

      sudo apt-get install subversion
      svnadmin create ~/thesisrepo
      svn import ~/thesis file:///home/${USER}/thesisrepo -m "Initial import"
      mv thesis thesisbackup
      svn co file:///home/${USER}/thesisrepo thesis


      That's it, you're done. ~/thesis is now a working copy of your repository, the repository itself (which will hold all versions of your files) is contained in ~/thesisrepo, and your original folder is backed up as ~/thesisbackup.

      To work on your thesis, go into ~/thesis and start writing as you've always done. When you want to save a snapshot of the current state of your thesis (i.e. commit your changes), open a bash terminal, go into ~/thesis and type svn ci -m "some message". That's it. Much easier than running a backup; you can just stick it in a daily (even hourly) cron job. To back up all versions of the thesis on removable media, tar up the ~/thesisrepo folder and put it somewhere safe.

      There's a bit more to know about it; namely you need to tell subversion when you add, remove, move or rename files. A good source for that is the Subversion Book [red-bean.com], specifically Chapter 2.
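
      Following the cron suggestion above, a minimal crontab entry might look like this (paths follow the example above; treat it as a sketch, not the only way to do it):

        # crontab -e: commit any changes in ~/thesis at the top of every hour
        0 * * * * cd "$HOME/thesis" && svn ci -m "hourly autocommit" >/dev/null 2>&1
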
  • Smells like dirvish (Score:2, Interesting)

    by Zekat ( 596172 )
    This sounds like http://www.dirvish.org/ [dirvish.org], which is nearly as nice as the automatic file snapshots done by the "Network Appliance" fileserver boxes I've used at the last 2 out of 3 workplaces.
  • Done it, been there.
Guess this is the first step toward ZFS, which for some stupid licence reason doesn't seem to have an easy path into the Linux kernel.
ZFS does a few, actually a lot, more things. But why not write a different solution, for a plurality of choice.
May the best win!
IIRC the main reason ZFS won't make it into the kernel is that a non-trivial amount of the filesystem kernel code would need to be re-written.
  • some background (Score:5, Informative)

    by pikine ( 771084 ) on Wednesday May 02, 2007 @08:59AM (#18955281) Journal

I'm answering the questions people have posted so far, all in one place.

    Is it a file system or a file manager?

It is a file system. You access an old snapshot by appending '@timestamp' to your file name. You have to instruct ext3cow to take a snapshot before you can retrieve old copies; otherwise it simply behaves like ext3. It appears that a snapshot is always performed on a directory and applies to all inodes (files and subdirectories) under it.

My complaint is its use of '@' to access snapshots. Why not use '?' and make it look like a URL query? Better yet, use a special prefix '.snapshot/' like NetApp file servers.

    Does it store many copies of each file? or only the differences between the old and the new version?

    How far off is it to use these filesystems as a revision control system replacement?

ext3cow takes its name from "copy on write," and it does this at the block level. When you modify a file, it appears to the file system that you're modifying a block of e.g. 4096 bytes. COW preserves the old block while constructing a new file using the blocks you modified plus the blocks you didn't modify.

    You can think about it as block-level version control. However, when you save a file, most programs simply write a whole new file (I'm only aware of mailbox programs that try to append or modify in-place). Block-level copy on write is unlikely to buy you anything in practical use.

    Does it provide undelete?

    Only when you remember to make a snapshot of your whole directory. An hourly cron-job would do, maybe. There is always the possibility you delete a file before a snapshot is made.

My first thought was the same as yours: why not use the ".snapshot" prefix from NetApp, so that scripts and tools written for NetApp servers will continue to work?

      Second, I have hundreds of mail folders saved in files with names like "user@example.com". Oops.

      Block-level copy on write is unlikely to buy you anything in practical use.

      For binary files (eg, databases) it will. And it's pretty cheap to implement... for a whole-file write operation where the file is first truncated the cost is the same as if the
  • I can't see anything linked from the ext3cow.com site, save for the near-silent mailing lists. I'm tagging this 'slashdotted'. There's not even a huge amount on the Wayback Machine: http://web.archive.org/web/*/http://ext3cow.com [archive.org]

    I guess that this is a fork of the ext3 code with Copy On Write functionality and userland tools to make snapshots and time-travel the snapshots. Wikipedia's article on Ext3cow [wikipedia.org] names Zachary Peterson, the submitter of the article, and links to an ACM Transactions on Storage paper
BSD operating systems have had filesystem snapshot [wikipedia.org] functionality for several years now... Linux is catching up — in a usual Linux way with patches, which one has to collect from all over...

    Or am I misreading the write-up and this new ext3cow thingy is much more than that?

    • by Jokkey ( 555838 )
      Linux has had filesystem snapshots (via LVM) for quite a while too. Ext3cow, as I understand it, differs in that it lets users access previous versions of individual files from within the current filesystem, rather than creating a snapshot of an entire filesystem or disk. As far as I know, it takes space out of the existing ext3 filesystem to do this, rather than using previously unallocated space within the disk volume group.
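
      For comparison, a whole-filesystem LVM snapshot looks roughly like this (the volume group, logical volume, and mount point names are made up for illustration):

        lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home   # reserve 1G of copy-on-write space
        mount -o ro /dev/vg0/home-snap /mnt/home-snap                  # browse the frozen view read-only
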
This is not even close to the same thing as a BSD filesystem snapshot [freebsd.org], but don't let that interrupt your furious fanboy wankfest.

      BSD snapshots are a lot like LVM snapshots (that have been available in Linux since 1998), except that under Linux, you are not limited to 20 snapshots.

What ext3cow does, which you would realize if you had opened your ears before your mouth, is give you true point-in-time recovery. In other words, without ever manually "taking a snapshot", like you'd have to under BSD, you ca
      • by mi ( 197448 )

        you can simply revert your filesystem to where it was at any arbitrary point in time.

        No, you can't. According to this example [www.ext3cow.com] you need to issue an explicit "snapshot" command — I checked my facts before posting, as well as I could, anyway. There is no word yet on the maximum number of snapshots — they may well be limited to 20 as well.

What a major oopsie, I might add... I mean, you could've come up humbly with something like "As far as I know, ext3cow is better, because it requires no explicit sn

    • Actually Linux has supported snapshots through the LVM layer for several years as well. This isn't a filesystem snapshot, it's a per file snapshot system.
I heard Ubuntu was planning to upgrade to Ext4 for Feisty, and then it fell through, and instead they were planning on Ext4 being available as a patch approximately the same time Feisty was released. Is Ext3cow the change that Ubuntu was planning to implement? (I realize Ext4 is different from Ext3cow, but I'm wondering if Ubuntu's getting this as an automatic update)
  • Anybody use the similarly featured NILFS?

NILFS is a log-structured file system developed for Linux, and it is downloadable from this site as open-source software.


    http://www.nilfs.org/en/index.html [nilfs.org]
  • It's simply a filesystem with snapshots. Big deal. It'll only do cool stuff when you tell it to make a snapshot, not every time a file changes.
  • No flaming -- I don't have the time to research this, so I'll just post the questions!

    1 - What happens to large databases? I am assuming a delta storage method, but that might slow down the database (specifically, I use mysql).

    2 - Large files? Specifically, deletion (I store lots of videos)

    3 - Usenet spools? (Lots of small files, deleted regularly).

    I suspect that I would have to segregate my files...
