A Good Filesystem for Storing Large Binaries?

jZnat asks: "I own hundreds of gigabytes of binary data, usually backed up from other media such as CDs and DVDs. However, I cannot figure out which filesystem would be best for storing all this reliably. What I'm looking for is a WORM-optimized FS that also has good journaling methods to prevent data loss due to some natural disaster while data is being shifted around. Trying something new for once, I tried using SGI's XFS due to its promising feature list, but I was met with countless IO errors after trying to write large amounts of data to it. I feel that Ext3 is not optimal for this; ReiserFS is too slow when it comes to reading large data files; and Reiser4 isn't mature enough to entrust my digital assets to. What filesystem would be most appropriate for these needs?"
  • JFS (Score:4, Informative)

    by member57 ( 680279 ) on Sunday February 12, 2006 @12:51AM (#14698177)
    I use JFS on RAID 5, no errors, uptime of 200+ days currently. Handling large files 200-300MB each all day long. Excellent performance.
    • No errors in 200 days is normal with most filesystems, I think. All our servers and my home desktop use ext3, which I've had no problems with. I've used JFS, but only recently. Apart from some other posts in this thread, I've heard only good things about JFS.

      I did see a Red Hat bug report a while back about very large file write performance issues on ext3: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=156437 [redhat.com]. I don't know if the fix is in the official kernel yet, or if it was just a RHEL-specific bug.
    • 200-300MB are small files.

      How big are the files? Hundreds of gigabytes? Terabytes? Most non-commercial filesystems are very inefficient when it comes to handling files in the hundreds-of-gigabytes range.

      Efficient handling of such large files requires that you also have the disk hardware and I/O subsystems to handle it. If you have a large number of small files, then I have no idea; I only deal with VLF (Very Large Files, starting at 100GB+).
      • This question isn't meant in a negative way at all, so please don't take it that way...

        What type of data requires a file 100GB in size?
        • Re:JFS (Score:3, Informative)

          by IdleTime ( 561841 )
          many different things...

          Drilling data and real-time sensor data from oil wells in the North Sea is one example. Observational data from particle accelerators, weather data.... many areas have HUGE data storage requirements.
        • Re:JFS (Score:3, Informative)

          by Nutria ( 679911 )
          What type of data requires a file 100GB in size?

          Thermonuclear explosion simulations, for one.

          On a more prosaic level, we've got databases that have hundreds of millions of rows, regularly growing into the billion+ size. Yes, it's partitioned, but the partitions are still huge.
        • Re:JFS (Score:2, Informative)

          by chris234 ( 59958 )
          While not 100Gig, HDTV recordings can easily run 20-30Gig for long shows (I think my recording of the Olympic opening ceremonies was 21Gig for 3.5 hours).
  • by benploni ( 125649 ) on Sunday February 12, 2006 @12:54AM (#14698186) Journal
    Google made a filesystem [google.com] for exactly that purpose: storing HUGE files highly reliably. OK, so it's not publicly available, but it's still perfect for you (other than that).
  • Try JFS? (Score:2, Informative)

    by hunterkll ( 949515 )
    I've been using JFS for about two months now, and it's been quite a pleasant experience with my anime storage. I run it on a 1.6TB array, four 400GB hard drives. Its performance is damn fast from what I've observed copying to/from a JFS firewire drive. I trust it enough to keep data that I can't back up on it until I can get another identical array to mirror with - only drive failure will seem to kill this FS, and these drives are about ~3 months old, so failure isn't that much of a concern anymore. I'd r
    • You run a 4 drive array, without redundancy? That takes guts I suppose...
      • balls of solid brass i tell ya, balls of solid brass.

        Funny, b/c I too have a 4 drive array . . . 4 to make 1, the true paranoid array.
        -nB
  • by strredwolf ( 532 ) on Sunday February 12, 2006 @12:57AM (#14698200) Homepage Journal
    So let's get this straight:

    You need a filesystem that can be "burned" to a medium, yet has error-correction capability.

    Journaling doesn't do this. Journaling is for when you get a power surge in the middle of a write; it lets you get some of the data back. Currently no regular FS can do that.

    • Okay, why is this modded troll again? He misunderstood the direction of the backups, he wasn't trolling! If anything, it's -1 WRONG, not troll! Anyway - he backs up data FROM DVDs and CDs, not TO
    • by jd ( 1658 )
      I know that's the case for metadata journalling, but if you use full data journalling, I would have thought you'd be fine.

      Actually, I wouldn't use ANY filesystem for this sort of work. The files won't change in size and I doubt they'll be deleted. It would seem more sensible to battery-back the RAM on the computer and the hard drive, use a raw partition for the data and a "sequential index" database to figure out where the data starts and how long it is. Batteries guarantee that the state of the computer wi

  • by aralin ( 107264 ) on Sunday February 12, 2006 @01:00AM (#14698211)
    I know this is not exactly what you are looking for, but database companies have a very similar problem on their hands, and since filesystems usually are not quite good for this type of work, they usually come up with their own systems for handling raw disks. For example, Oracle has its ASM (Automatic Storage Management). You might want to look into these to see whether they can be adapted to your problem, or contact the relevant companies for specifics.
    • I've dealt with said database problems, albeit in a probably smaller manner. The database in question was in MyISAM (MySQL) format, reaching a size of at least 2 GB and increasing at probably 50 MB/day or so. The only solution we could work with at the time was splitting the database up into smaller files and even across multiple hard drives to decrease access time (some funky hacks that could've been better if we were using a better cluster, I'm sure).

      Then again, IANADBA, so I don't know the extent of how la
    • In that vein, OCFS2 [oracle.com] should do exactly what you would like from a FS. It is easier to set up than RAW/ASM, and if configured properly, it can be rock solid. There are good resources available for installing and configuring.
  • reiser or jfs (Score:3, Informative)

    by william_w_bush ( 817571 ) on Sunday February 12, 2006 @01:09AM (#14698253)
    reiser or jfs are both solid for this kind of work, with large file and volume support. personally i swear by reiser for my 2tb volume, and have had no problems so far, although there is a minor speed penalty when working with several multi-gigabyte files at once, something to do with shared fs locks/mutexes i'd imagine.

    OTOH JFS is quite stable, and though it has less of the elegance in feature set I find in reiser, tends to make up for it with enhanced ruggedness and its handling of large volume/files.

    Really can't recommend anything else. As you say, reiser4 is still untested for reliability imho, xfs has issues that vary from kernel to kernel, and ext3 appears quite primitive in comparison, although its journaling seems comparable to the other choices.

    JFS if you need the speed; it's dead fast at large scales, slower with small files. Otherwise Reiser3 is an excellent all-round performer.
    • if solaris is an option go zfs. hands down best fs if you can run it, incredible flexibility and scalability.

      also, you mentioned something about burning to dvd. filesystems won't really help with that; i'd look more into tarring your filesystem / fs segments into disc-sized segments, then making extra par2 files for error-resilience (sketch below).

      really, backing up 2tb of live data is a f*ing nightmare however you look at it, usually when i reach that hurdle i just build another machine, copy over active data, and put the
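
      A minimal sketch of that tar-plus-par2 approach, assuming GNU tar, split and the par2cmdline tools (the path and the ~DVD-sized 4400m split size are illustrative):

        tar cf - /data/archive | split -b 4400m - archive.tar.   # cut the stream into disc-sized pieces
        par2create -r10 archive.par2 archive.tar.*               # generate 10% recovery data
        # to restore: cat archive.tar.* | tar xf -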
  • Possibly... (Score:4, Funny)

    by dcapel ( 913969 ) on Sunday February 12, 2006 @01:10AM (#14698254) Homepage
    p2pfs?

    Just upload to bittorrent, ftp, or some other p2p system, and redownload it if you need it again!

    Some small security issues may apply though...
  • by duffbeer703 ( 177751 ) on Sunday February 12, 2006 @01:10AM (#14698256)
    ZFS has some built-in volume management & data integrity functions that would probably work for you. I don't believe that it is available for Linux, but is freely available via Solaris & OpenSolaris

    http://www.sun.com/software/solaris/zfs.jsp
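
    For what it's worth, pool setup on Solaris is about this simple (device names are illustrative):

      zpool create tank raidz c0t0d0 c0t1d0 c0t2d0   # RAID-Z pool with end-to-end checksums
      zfs create tank/archive                        # filesystems are cheap; one per data set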

    • Unfortunately, ZFS is still in a beta stage; I wouldn't want to trust anything critical to it just yet. Production release as part of Solaris is expected by the middle of this year, I think.
  • If you're not writing to the files extensively, ext3 is perfectly fine. If you don't need the data journalling, read `man mount` regarding ext3 mount options.

    Personally, I'm using XFS for the same task. Why? Because it segments the filesystem, allowing segment locking instead of filesystem locking, which is nice if you're writing multiple big files at once. I've never had a problem with it.

    If you are getting countless IO errors, have you done a `badblocks` on the disk? Have you tried a different IO card or
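
    As a sketch of both suggestions (the device and mount point are illustrative):

      badblocks -sv /dev/sdb1                                      # non-destructive read scan of the raw device
      mount -t ext3 -o noatime,data=writeback /dev/sdb1 /archive   # ext3 without data journalling, for read-mostly use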
    • XFS and JFS should both work fine. I also would suspect hardware problems as a possible issue.

      Stacking drives and not adequately cooling them is another possible cause of your IO errors with XFS.

      Another possibility is checking your hard disks with the manufacturer's drive test utility.
  • by Vo0k ( 760020 ) on Sunday February 12, 2006 @01:12AM (#14698269) Journal
    Most fancy filesystems like ReiserFS are optimized for performance with lots and lots of tiny files, where the disk reads little at a time and seeking, sorting, assembling, slicing etc. take most of the time. Here you have a few big files, so performance is your least worry: the hard drive's read/write speed will be the bottleneck, and all the seeks, directory reads etc. will be scarce and fast. Therefore the filesystem won't change much in terms of speed. (It MAY lose a lot in that department, like, say, compressed filesystems, but it won't speed things up above what the hard disk does, and most filesystems will perform just the same speed-wise.) What you can do is optimize the filesystem for capacity, reducing its overhead and allowing you to get closer to the "advertised disk capacity".

    Just use the mke2fs options to significantly reduce the number of inodes on ext3 (the inode count is fixed when the filesystem is created; tune2fs can't change it afterwards). Or look for -simple- filesystems that don't do tricks to optimize for speed (because these usually waste disk space), and just store your files in a straightforward manner.
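
    For example, a creation command along those lines might be (the device is illustrative):

      mke2fs -j -T largefile4 -m 0 /dev/sdb1   # ext3, one inode per 4MB of space, no root reserve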
  • by thegrassyknowl ( 762218 ) on Sunday February 12, 2006 @01:12AM (#14698270)

    journaling methods to prevent data loss due to some natural disaster while data is being shifted around

    Journalling doesn't do this! Journalling helps reduce file system corruption in the event of a catastrophic failure while modifying the file system - i.e., it's possible to bring it back to the last clean state before it crashed - journalling does not prevent data loss. You might say "well, filesystem corruption and data loss are the same", but they are not. If the filesystem is corrupted, the data is not lost. It just becomes not easily retrievable. If the data is lost, then it becomes entirely irretrievable.

    I tried using SGI's XFS due to its promising details, but I was met with countless IO errors

    Have you considered that your hardware is shit? I use XFS on terabytes of RAIDed disks and have been for more years than I remember... 5 or so? I don't see any I/O errors. XFS is very reliable and I trust it with my data.

    I feel that Ext3 is not optimal for this

    Well not all of your post was dumb!

    ReiserFS is too slow when it comes to reading large data files

    How is it slow? It takes a few microseconds longer to access the first data sector because it does some extra processing first? Give me a break. Filesystem performance for journalled filesystems is mostly bound by writing speed, and this is a function of how the journal is updated. I doubt you would notice the difference in read speed unless you ran a million tests over a million different files, took some sort of average for the filesystems and quibbled over a few milliseconds.

    Reiser4 isn't mature enough to entrust my digital assets to

    You entrust your assets digitally? Shit, why do you trust any filesystem? They are all buggy. Give me a break.

    If you don't like it, keep backups on other media; buy a tape drive and a robot and get in bed with a good archiving company to securely store the backups. Don't come on here and poo-poo all of the filesystems known to man and then ask "is there anything better?" About the only four in common use you left out were JFS (good for large databases but not much use if you have a lot of small files), FAT[12/16/32] (not much good for anything really), NTFS (see FAT, but more complex) and ISO9660. I'll concede there are others, but if you want something that's in common use so you can actually retrieve your data when the world turns to shit...

    Anywho!

    • XFS has issues that go version by version. I've had volumes work perfectly for months suddenly start spitting out I/O spam after a simple kernel upgrade. My guess is poorly applied patches, as many xfs kernel patches are added downstream by a distro or kernel contributor (MM/gentoo). just my 2c, YMMV.
    • UDF is commonly used on DVD media. I think it fits the criteria, being writable (unlike iso9660) but optimized for reading.
    • What's your complaint with NTFS, other than that it is closed-source? Are there any reliable comparisons of NTFS with traditional open-source alternatives?
      • No complaint apart from the fact it's from Microsoft and they have a history of being unreliable... and there is no way to write NTFS unless you're using Windows.

        I haven't seen any unbiased comparisons of NTFS against some of the other file systems, but if you want to limit your liability I'd go with something that's open. MS have a habit of breaking backwards compatibility, and Vista seems to want to do a lot of that. Who says the version of Windows in the future will support today's version of NTFS?

        At
      • Just personal experience here. I have a Windows 2003 server where the drive was something like 83% fragmented after only about 3 months of use. I ran defrag on Friday and was surprised to see just a tiny blip of red, with the entire rest of the drive being unused. The drive was only about 10% full, but most of it was data files which grow constantly. Most large data files had thousands of fragments each. Windows could not have done a worse job at avoiding fragmentation. It's not slow yet, as the databas
  • by Anonymous Coward
    I have a 3 TB XFS file system and a 10 TB XFS file system that are regularly accessed by multiple processes that read and write hundreds of gigabytes each without write errors and with excellent performance (several hundred MByte/s sustained). You may have other hardware or software issues if you're seeing errors with XFS. Try to figure out the root cause of your problem before you try another FS.
  • by NuclearDog ( 775495 ) on Sunday February 12, 2006 @01:38AM (#14698365) Homepage
    Comparison of FileSystems [wikipedia.org] (from Wikipedia)

    Personally, I run two 300GB drives in RAID1 on UFS and am quite satisfied with it, but you seem to be incredibly, incredibly picky, so I'm sure you could find something wrong with it ;P

    ND
  • I/O Errors??? (Score:5, Informative)

    by Stephen Samuel ( 106962 ) <samuel@NOsPaM.bcgreen.com> on Sunday February 12, 2006 @01:53AM (#14698415) Homepage Journal
    If you're getting lots of I/O errors with XFS, I'd be inclined to look at a hardware problem (unless the I/O errors consist of attempts to read past the end of the partition -- which could be caused by you manually specifying the partition size, rather than letting mkfs.xfs figure it out).

    Like someone else said -- try using badblocks(8) -- or just use dd to make sure you can read the entire partition without errors.
    Bad disks do happen -- even new ones. Production code in Linux is generally very stable, and (unlike with Windows) you can usually start with the presumption that things like I/O errors are caused by real hardware problems of some sort (even if it's just bad/loose cables).
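
    The dd version of that check is a one-liner (device name illustrative):

      dd if=/dev/sdb1 of=/dev/null bs=1M   # any read error on the raw partition will surface here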

  • by Radak ( 126696 ) on Sunday February 12, 2006 @02:09AM (#14698465) Journal
    If you're looking for a filesystem to archive things indefinitely, avoid exotic new kids on the block with limited OS support and even more limited toolkit support.

    You want a filesystem you'll be able to read at any point in the future and, should the worst happen, one which you'll have a reasonable chance of being able to recover.

    ext2 and fat32 tend to write files in nice large chunks and there are lots and lots of recovery tools for damaged filesystems. Journaled filesystems like to put little pieces all over the place, and recovery of a badly damaged filesystem is next to hopeless.

    There is no call for a complex filesystem just because you want to store large files. ext2 (and to some extent fat32) will do just fine, and you'll be glad for them someday in the future when something breaks.
    • I think that I could second this motion.
      ext{2,3} can handle nice, big files, and just about any version of linux can read it. You can even get modules for the Microsoft world to read ext2 filesystems. If you're looking at a read-mostly filesystem, then journaling won't get you much (other than making for a fast FSCK if you lose power).
      Just remember to specify '-T largefile' on the mke2fs command line to optimize for larger files.

      If you haven't thought about it yet, I'd also suggest raid5 rather than r
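
      A minimal software-RAID5 sketch with mdadm (devices illustrative):

        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
        mke2fs -j -T largefile /dev/md0   # then format the array as ext3, tuned for big files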

    • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Sunday February 12, 2006 @03:10AM (#14698649) Homepage
      There is no call for a complex filesystem just because you want to store large files. ext2 (and to some extent fat32) will do just fine
      fat32 cannot handle files over 4 GB in size at all. That alone probably renders it totally unsuitable for this person's needs.

      Beyond that, I'd say pretty much anything will work fine -- most of the optimizations found in filesystems are needed for lots of small files, not a few large files. For large files, the speeds they can be accessed by various filesystems are not likely to vary more than a few percent unless you let the files get fragmented (which probably isn't a big concern here.)

      And you are right -- if something does go wrong, ext2 or ext3 will probably give you the most options for recovering it. NTFS probably has even more recovery options (and FAT even more, as mentioned), but I'm guessing the OS will be *nix. But really, if your goal is reliability, you don't want some esoteric filesystem that can recover from disk errors (because ultimately, none can, though I guess one could be designed to keep ECC codes on the same disk transparently -- but I'm aware of no such filesystem existing) -- you want multiple copies of your data. Keeping 5-10% (or more) par2 files [sourceforge.net] for your archive can help a lot in recovering it if your media goes partially bad, and having md5sums or CRC32s of all archived files can help determine if you did recover something accurately, but really there's little substitute for multiple copies of important data in multiple geographical locations. (And no -- RAID is not a substitute for backups, no matter how many mirrored drives you have. Not that I saw anybody suggest this yet, but it seems to always come up in response to questions like this, so consider this to be a preemptive mention of that.)
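
      The checksum side of that is cheap to set up (paths illustrative):

        find /archive -type f -exec md5sum {} + > /root/archive.md5   # record checksums at archive time
        md5sum -c /root/archive.md5 | grep -v ': OK$'                 # later: list only the files that fail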

  • by Matt Perry ( 793115 ) <perry.matt54@ya[ ].com ['hoo' in gap]> on Sunday February 12, 2006 @02:54AM (#14698602)
    I feel that Ext3 is not optimal for this
    Did you try it with ext3? I have 688G in a RAID5 array spread across four 250GB drives. I use ext3 and I store lots of large files (15GB free on the array right now). I have about 156GB of DVD images, mostly movies that I own and have ripped to watch using daemon tools on Windows. Some of them are rips of training video DVDs I bought for software that I use like Adobe Premiere and Audition. I frequently move large AVI files to and from the array for video projects that I'm working on. These files originate on my Windows box and can be as large as 13GB (for an hour of video footage). I've been using ext3 for years and it's never let me down or given me any problems.
  • Since nearly all modern drives remap bad sectors automagically unless they're in truly pitiful shape, I'd check my cabling, connectors, termination (depending on the storage hardware platform you're using). I/O errors are not the result of inappropriateness of a filesystem for a given task, they're the result of lost data long before your filesystem ever gets a chance to f*ck it up.
    • Since nearly all modern drives remap bad sectors automagically unless they're in truly pitiful shape

      Not quite automagically - every time I've had a bad block develop, the drive has only remapped it on writing to it. Reading just returns various types of error (makes sense - if you're trying to recover the block in question, it might read successfully one time in a thousand, then you can write it back, and all is well again). I'm pretty sure that SCSI can return warnings that a block was hard to read, allo

  • I can't seem to make a growable RAID 5 configuration; at least not growable in a useful way.

    My plan was to migrate the small striped set to a larger set that included the old drives, by making a RAID set of the new drives, copying the data, then adding the old drives (I would call that 'Horizontal'); or by moving all the data to the tops of the drives, making a RAID set of the lower segments of all the drives, and expanding the segments ('Vertical').

    I'm up to using EVMS; but no useful Expand options appear to be availab
    • (I'm assuming software RAID here, obviously.)

      Use LVM on top of MD/RAID. When enough of your physical drives have the requisite extra free space, you can just create a *new* MD volume from the free space and add that to your logical volume. Example: if you had a 100GB drive with 1 partition in the RAID and replace it with a 150GB drive, say, you'd just allocate 100GB to 1 partition and allocate 50GB to a second partition, re-adding the first partition into the old RAID volume and waiting for it to rebuild. Do
    • http://ask.slashdot.org/article.pl?sid=05/11/26/0337226 [slashdot.org]

      Look for swillden's post a little bit less than halfway down. This is the approach I'm using now.

      In short - LVM on top of multiple "small" (50 GB per drive) RAID 5 partitions. LVM will let you automagically move all data off of a given "physical volume" if there is sufficient free space in the volume group. Note that in this case the "physical volume" is not actually physical, but is a RAID 5 mdX device instead. Once all data is moved off of one of th
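
      The commands involved are roughly as follows (volume-group name illustrative; pvmove needs enough free extents on the remaining PVs):

        pvcreate /dev/md2 && vgextend vg0 /dev/md2   # bring a new RAID5 set into the volume group
        pvmove /dev/md0                              # migrate all extents off the old set
        vgreduce vg0 /dev/md0                        # then retire it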
  • Ext2 rw,sync (Score:5, Interesting)

    by evilviper ( 135110 ) on Sunday February 12, 2006 @03:14AM (#14698660) Journal
    What I'm looking for is a WORM-optimized FS that also has good journaling methods to prevent data loss due to some natural disaster while data is being shifted around.

    Ext2fs mounted with the 'sync' option.

    For large sequential writes, nothing could possibly be more reliable or any faster. Your hard drive's pure IO speed will be the bottleneck unless you are writing to multiple files simultaneously, in which case fancy filesystems come in handy.

    If that doesn't suit your needs, you haven't described them well enough for anyone to understand.

    I feel that Ext3 is not optimal for this;

    I feel hungry.
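
    Concretely, the suggestion above is just this (device illustrative; the fstab line is the persistent form):

      mount -o sync,noatime /dev/sdb1 /archive
      # or in /etc/fstab:
      # /dev/sdb1  /archive  ext2  sync,noatime  0  2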
    • Ext2fs mounted with the 'sync' option.

      For large sequential writes, nothing could possibly be more reliable or any faster. Your hard drive's pure IO speed will be the bottleneck unless you are writing to multiple files simultaneously, in which case fancy filesystems come in handy.


      But ext2 (and ext3) store a block list that grows in proportion to the size of the file itself. That's fine for small or highly fragmented files, but it's a waste of space and I/O bandwidth when you have big, unfragmented files.
      • I can attest to this. I was backing up an image of an 80G drive. It took forever... I suspect a tree-based file system would be much better.
      • But ext2 (and ext3) store a block list that grows in proportion to the size of the file itself. That's fine for small or highly fragmented files, but it's a waste of space and I/O bandwidth when you have big, unfragmented files.

        You can use a large block size and greatly reduce the amount of wasted space and overhead (which is rather small to begin with, actually). You've got to expect that kind of overhead from just about any filesystem, and something like journaling will only add more.

        If you've got somet

  • At the risk of sounding silly, I will suggest HFS+, journaled (but without ACLs). This, of course, requires that you use Mac OS X. However, some very good disk repair utilities exist for Mac OS X. As far as performance is concerned, it should be capable of easily handling uncompressed NTSC video without dropouts in a proper configuration, so I can't see that it would be a problem for storing DVDs.

    • Why does it require OS X? Linux deals well with HFS+ in my personal experience.

      You're right about the tools though. DiskWarrior is *godlike*. I cannot believe the stuff it has pulled off... stuff that made fsck cry.
    • Eh, if you read the various Mac forums these days, the fix for most problems is always suggested to be: fsck, then fix permissions. I've yet to see another OS that needs a 'fix permissions' tool at all. Oh, and HFS+ has lost me important system files. This with the most recent version of Panther. It doesn't impress me at all.
  • If you need WORM, it will usually be for compliance purposes. And if you need it for compliance purposes, the choice of filesystem simply isn't yours. Netapp, EMC, IBM and others all provide out-of-the-box WORM solutions for not too much money that are certified WORM devices - i.e. you can turn around to your local regulator with a piece of paper in your hand that proves your WORM is really WORM. From my experience with Netapp WORM devices, you are going to be stuck with XFS - incidentally, I spent a good po
  • Well, I guess the best filesystem for this kind of stuff is ISO9660. Very optimized for WORM access, no file fragmentation, and anything new you write will not destroy any existing data. Guaranteed.
  • by wfberg ( 24378 ) on Sunday February 12, 2006 @07:22AM (#14699206)
    For all my WORM disks, I use either ISO 9660 or ISO 9660/UDF bridge format.
    Yeah, I simply burn CD-Rs or DVDs. DVDs have the nice property of being easily stored off-site. And files are in nice large contiguous blocks, so even if the filesystem dies you can still recover a lot. Unlike XJFReiFS 2.3.1.5, DVDs will be readable in 50 years' time.
    And if you need to burn really large files, just use, well, zip. And perhaps some par2 files.

    Though, seriously, they're coming up with a UDF variant for hard drives too.
  • First of all, you shouldn't worry about losing data while "shifting it around". The source data shouldn't be unlinked until the operation has completed. A "sync" mounted filesystem on a drive without write caching should guarantee that the data has actually been written. Then the biggest threat to your data is going to be drive failure, which can be lessened by the use of RAID5.

    As for which filesystem, I would humbly suggest the tried-and-true time-tested UFS. I don't think UDF is really what you want, beca
  • Don't laugh! Most (if not all) filesystems are optimized to handle the opposite of what you want. TAR files are designed for tape, so you won't be seeking all over the disk to get meta information, instead you'll get your data at the maximum speed supported by your hardware. TAR files are designed so that you can append files to them later, so you can use a *big* partition and just keep dumping stuff into it.

    The only drawbacks are that you have to read the entire partition sequentially to find things, and you can't delete files. Both of these can be fixed with a bit of Perl. Write a program that maintains an index of offsets to the files; then you can use "dd" to skip to the correct offset and read from there. More dangerously, write a program that deletes files from the middle of an archive and shuffles everything backwards to fill in the gaps. You'll want to make sure that no one is trying to read the TAR partition while this is running.
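
    A sketch of that idea using GNU tar's --block-number option instead of Perl ($BLOCK stands for whatever 512-byte offset the index recorded for the file you want):

      tar cf /dev/sdb1 /data                          # write the archive straight onto the raw partition
      tar tvRf /dev/sdb1 > index.txt                  # -R prints each member's block offset into the index
      dd if=/dev/sdb1 bs=512 skip=$BLOCK | tar xf -   # seek to one member and extract just it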

  • I can't see why it would matter one whit what filesystem you use. The only problem you list is ReiserFS being too slow (which I doubt, but anyway...). Is speed the worry? Well, just pick a bigger block size. Make a 64KB block size or something, and just go with whatever filesystem you would normally use.
  • by turgid ( 580780 ) on Sunday February 12, 2006 @12:24PM (#14700157) Journal

    What you're looking for is Universal Disk Format [afterdawn.com] or UDF.

    It is an open standard [osta.org] supported by all of the major OSes and manufacturers and is the filesystem of choice for Ultra Density Optical [plasmon.com] WORM and rewritable disks.

    There are drivers for Linux, Windows and all of the major UNIXes. Here [wikipedia.org] is the obligatory Wikipedia entry.

    Hard disk filesystems like XFS, JFS, Reiser, ZFS etc. are all wonderful at what they do but they are unsuitable for WORM disks.
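
    If you want to experiment with UDF on Linux, the udftools package provides mkudffs (device illustrative):

      mkudffs --media-type=hd /dev/sdb1
      mount -t udf /dev/sdb1 /archive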

  • by Steve Hamlin ( 29353 ) on Sunday February 12, 2006 @12:54PM (#14700277) Homepage

    Others have said good things in general (XFS, JFS, ext3).

    I looked into filesystem comparisons in setting up a MythTV box.

    My issues were:
    (1) efficient use of hard drive space, and
    (2) performance.

    Efficient use: filesystem settings have a big effect on the amount of usable space.

    For ext2/3:

    -m 0 = setting 'reserved space for root' to 0%. Default is 5%, which can be 10-20 GB these days, all unusable to non-root users

    -T ____ = can tell ext2/3 to optimize inodes and byte-per-inode for different size average files. Largefile versus news spools (tons of small files). Because of the way that a file can be spread out and mapped across the filesystem, this has an effect on 'wasted' space, and maybe performance (# of inode entries per file to lookup).

    -b, -i = can set the total # of inodes and bytes-per-inode directly. Advanced control over filesystem creation.
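
    Putting those together, a creation command might look like this (device and sizes illustrative):

      mke2fs -j -m 0 -b 4096 -i 1048576 /dev/md0   # ext3, no root reserve, 4K blocks, one inode per MB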

    I never got around to looking into this level of detail for XFS/JFS - they seem to have fewer such options.

    Performance: I'll leave it to others to talk about filesystem performance with large files in general.

    MythTV does a lot of writing and, as it turns out, deleting of large temporary files for its TV features (recording, pause, FF/RW). After some reading online, I've found MythTV performance is drastically impacted by filesystem choice due to all of the deleting.

    http://www.mythtv.org/docs/mythtv-HOWTO-24.html#ss24.2 [mythtv.org]

    http://www.gossamer-threads.com/lists/mythtv/users/52672 [gossamer-threads.com] ---SNIP---


    > My last reply to myself. Based on a Googled reference, I was able to
    > break my XFS 4G file size barrier by formatting the partition 'mkfs.xfs
    > -d agsize=4g'. So, here are the complete results:
    >
    > Time to delete a 10G file, fastest to slowest:
    >
    > JFS: 0.9s, 0.9s
    > XFS: 1.3s
    > EXT3: 1.4s, 2.3s
    > EXT2: 1.6s
    > REISERFS: 6.2s
    > EXT3 -T largefile4: 5.9s, 10.2s
    >
    > After running the XFS test, there didn't seem to be any point in
    > reformatting the partition again, so I left it on XFS, but I think I
    > would be happy with JFS, XFS, or EXT3 w/o '-T largefile4'.
    >>>>
    wepprop at sbcglobal
    Feb 8, 2004, 2:33 AM
    Post #21 of 22 (4121 views)
    Re: Changing filesystems? [In reply to]
    Robert Kulagowski wrote:


    > Interesting. If others care to weigh in, I can either re-write the
    > "Advanced Partitioning" section in the HOWTO, or whack it completely.
    >
    > William, can you give some background on the hardware used for your
    > tests? I'd be curious if this data holds up across various drive types,
    > LVM, etc. (Without trying to exhaustively test all the possibilities,
    > that is)


    It appears, based on my personal experience alone, that file deletes are
    the only system operations that can stress the hard drive enough to
    produce dropped frames. Unfortunately, as others have pointed out,
    recordings and deletions go together in Myth. So, unusual as it may be,
    it does make at least some sense to take file deletion performance into
    account when deciding which filesystem to use for a video partition,
    especially for people with multiple tuners.


    The really ironic result from my personal perspective is that it would
    appear that using the '-T largefile4' setting for ext3, which I was so
    pleased with because it give me an extra 2G of storage, may well have
    been responsible for all those recordings I had ruined by frame drops.


    Assuming it works out, though, I could really get to like this XFS
    filesystem because it appears to give me slightly more storage space
    than ext3 w/ '-T largefile4' did and it has pretty fast deletes as well.
    ---SNIP---

  • PFS (Score:2, Funny)

    by JamesTKirk ( 876319 )
    You're obviously looking for a filesystem optimized for porn. I'm impressed that you've managed to accumulate hundreds of gigs of the stuff. Perhaps there is a Porn File System out there somewhere?
  • LOL (Score:3, Funny)

    by rainer_d ( 115765 ) on Sunday February 12, 2006 @01:54PM (#14700560) Homepage
    > usually backed up from other mediums such as CDs and DVDs

    This is a very euphemistic way of saying:
    "I download moviez, mp3 and porn via P2P all day and even though I usually don't view any movie twice, I still don't want to throw away anything, because I just can't delete anything".

    How that could get an "ask slashdot"-posting is left as an exercise to the reader.

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Sunday February 12, 2006 @02:24PM (#14700673)
    Comment removed based on user account deletion
  • I see a lot of people advocating XFS. Which distro supports XFS the best?
  • FMWORM (Score:4, Informative)

    by davecb ( 6526 ) * <davecb@spamcop.net> on Sunday February 12, 2006 @06:22PM (#14701602) Homepage Journal
    Also spelled FM-WORM, a filesystem which looks like a normal NFS server but knows intimately what needs to be done to deal with WORM disks.

    It's a commercial product from Siemens, which I used years ago for Sietec's large-scale imaging product.

    There is probably a Linux port: we ran it on almost everything in existence (;-))

    --dave
