Data Storage

Auditing Large Unix File Systems?

jstockdale asks: "The recent article on perpendicular recording hard drive technology brought me, as a unix(tm) admin, to reflect on the management of data systems and file servers with capacities >1TB (which exist today and will be commonplace tomorrow). Since Google for once seems useless, what suggestions does the Slashdot crowd have on methods and software to audit changes, visualize file system usage, and in general determine the qualitative and quantitative nature of the contents of large unix file systems?"
  • by Tom7 ( 102298 )
    It's the same as visualizing a small file system, except that the files are an order of magnitude bigger (DVD rips, databases...)
    • No, Different (Score:4, Insightful)

      by ArmorFiend ( 151674 ) on Tuesday July 29, 2003 @06:58PM (#6565443) Homepage Journal
      To make a guess: perhaps because storage is growing fast but read times are not, his drives are filling up faster than he can run "du -s -m | sort -n" to figure out who's filling them up?

      Also, who's to say file size increases with storage capacity? Perhaps at his site, the number of files increases with storage capacity while file sizes stay statistically constant. MB for MB, traversing lots of little files is harder than traversing a few big files.
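
      A minimal sketch of the same idea, charging each file's size to its owner rather than to its directory, so the output answers "who" directly (the /export root is just a placeholder; a full walk of a multi-TB tree is still as slow as du, for exactly the reason above):

      import os, pwd, sys
      from collections import defaultdict

      usage = defaultdict(int)
      root = sys.argv[1] if len(sys.argv) > 1 else "/export"   # placeholder mount
      for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
          for name in filenames:
              try:
                  st = os.lstat(os.path.join(dirpath, name))
                  usage[st.st_uid] += st.st_size
              except OSError:
                  pass    # file vanished or unreadable; skip it
      for uid, nbytes in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
          try:
              owner = pwd.getpwuid(uid).pw_name
          except KeyError:
              owner = str(uid)
          print("%10d MB  %s" % (nbytes // 2**20, owner))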
    • Re:Same (Score:3, Insightful)

      by rthille ( 8526 )
      This could be especially wrong if he's running something like a news server and, as more disk space becomes available, he ups the retention time. Archival storage of email (in a one-file-per-message system, like Maildirs) would have the same problem.
    • What you want is a simulacrum of Ztree, which is a shareware version of XTree Gold; the version you are looking for would of course run on your preferred OS (which IIRC Ztree does not).

      The arena gets larger and the things you are tracking get larger, but the ability to view branches of your directory structure and to treat all the files on the entire set of drives as one big pool (sort them all by size, date, extension for temporary files, etc.) will be invaluable.

      If nothing exists that rep
  • I like treemaps (Score:5, Informative)

    by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Tuesday July 29, 2003 @06:57PM (#6565432) Homepage Journal
    I like the idea of treemaps.

    http://www.cs.umd.edu/hcil/treemap-history/index.shtml [umd.edu]


    Hehe, it was originally made to see what was taking up all the room on an 80MB hard disk :)

    There's various software available based on this concept, most working like "du", except that you get the results graphically. You typically see a large picture on screen of what directories and files are taking up the most space. It looks like a piece of Mondrian artwork, with the size of rectangles corresponding to the size of space taken, so it is easy at a glance to see what is hogging all of the disk space. It can be drilled down, of course, by clicking to zoom in.
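
    A crude, text-only stand-in for the same idea (assuming GNU du for the --max-depth flag): one level of directories rendered as proportional bars, whereas a real treemap nests this recursively as rectangles you can click into.

    import subprocess, sys

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    out = subprocess.run(["du", "-k", "--max-depth=1", root],
                         capture_output=True, text=True).stdout
    rows = []
    for line in out.splitlines():
        kb, path = line.split("\t", 1)
        if path != root:                  # du's last line is the grand total
            rows.append((int(kb), path))
    total = sum(kb for kb, _ in rows) or 1
    for kb, path in sorted(rows, reverse=True):
        print("%-40s %5.1f%%  %s" % (path[:40], 100.0 * kb / total,
                                     "#" * int(40 * kb / total)))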

    A quick Google search revealed SequoiaView:

    http://www.win.tue.nl/sequoiaview/ [win.tue.nl]


    Unfortunately this only runs on Windows, but I'm sure there are similar Linux programs available.
    • Re:I like treemaps (Score:4, Interesting)

      by cicadia ( 231571 ) on Tuesday July 29, 2003 @09:30PM (#6566721)
      A similar program for linux is FSV [sourceforge.net]. In one mode, it gives you a 3D (OpenGL) view of just such a map of your filesystem.

      Screenshots [sourceforge.net]

    • Anyone know of a similar OS X program?
      • maybe try either:
        1. tkdu

        http://unpythonic.dhs.org/~jepler/tkdu/

        tkinter (= python + tk)

        gpl
        2. treemap

        http://www.cs.umd.edu/hcil/treemap

        java

        mentioned here [uni-bielefeld.de]

        can't charge for redistribution
        3. xdiskusage

        http://xdiskusage.sourceforge.net/

        gpl

        *nix only

        i tried treemap first, and it was REALLY slow, and consumed tons of memory. i'm testing it under windows, but will also use it for linux. matter of fact, rerunning it on another smaller directory (though still gigabytes large with thousands of files) caused

    • Re:I like treemaps (Score:2, Interesting)

      by hellgate ( 85557 )
      > Unfortunately this only runs on Windows, but I'm sure there
      > are similar Linux programs available.

      kdirstat [sourceforge.net].
      The currently available development version comes with a treemap view.

      With TB-sized installations you will probably want some additional tools (or at least have kdirstat import a database built by a daily cron job [1]); running a complete scan interactively will take forever.

      [1] Some coding involved :-)
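
      The "[1] some coding" part could be as small as a nightly cron job that dumps every file's path, size, and mtime into SQLite. kdirstat's own import format isn't assumed here; this is just a sketch of a queryable snapshot you can diff from day to day instead of rescanning interactively:

      import os, sqlite3, sys, time

      root = sys.argv[1] if len(sys.argv) > 1 else "/export"   # placeholder mount
      db = sqlite3.connect("/var/tmp/fs-audit.db")
      db.execute("""CREATE TABLE IF NOT EXISTS files
                    (scan_day TEXT, path TEXT, size INTEGER, mtime INTEGER)""")
      today = time.strftime("%Y-%m-%d")
      batch = []
      for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
          for name in filenames:
              p = os.path.join(dirpath, name)
              try:
                  st = os.lstat(p)
              except OSError:
                  continue
              batch.append((today, p, st.st_size, int(st.st_mtime)))
              if len(batch) >= 10000:     # keep memory bounded on huge trees
                  db.executemany("INSERT INTO files VALUES (?,?,?,?)", batch)
                  batch = []
      db.executemany("INSERT INTO files VALUES (?,?,?,?)", batch)
      db.commit()
      db.close()

      Yesterday-versus-today diffs (new files, deleted files, size deltas) then become SQL queries instead of another multi-hour crawl.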
  • by hubertf ( 124995 )
    I always found xdu nice for seeing where the disk space gets burned, but I understand it doesn't scale too well to large filesystems.

    Maybe the quota subsystem can be used for something... :-)

    - Hubert
  • Tivoli (Score:3, Informative)

    by keesh ( 202812 ) on Tuesday July 29, 2003 @07:42PM (#6565843) Homepage
    produce a load of products that do all that kinda thing for you. Expensive, but then you could always employ a bunch of monkeys, erm, summer students to do the same stuff manually...
  • Huh? (Score:4, Insightful)

    by pi_rules ( 123171 ) on Tuesday July 29, 2003 @07:53PM (#6565951)
    Don't get me wrong here -- if you have a multi-TB system, keeping track of usage with 'du' really just isn't practical, but do you really even have to ASK the box where the data is?

    Our backend storage system for my project is 1TB, or at least very close to it. I don't manage the box, but I do work with it. It holds three things:

    1) Its OS (small)
    2) Its Oracle database files (300GB on disk, about 200GB used now)
    3) Files. Word documents, CAD drawings, TIFs, GIFs, etc. A whole slew of them.

    The admin knows what's using what. Under /oradata there's the database. It gets its own space. When that gets full he doesn't do an 'rm' to clean things up -- he has to use Oracle tools to do that.

    Under /filestores there's a giant mess of crap that nobody can make heads or tails of. They're hashed filenames from a PDM system. He can't do anything to clean that out without -- you guessed it -- using the PDM system. The filesystem itself has NO idea what's going on in either of its major usage sections. It's just "stuff", and to rm -rf a directory because you're running out of space would be foolish.

    When filesystems can actually hold metadata regarding their contents, then I'd give this question some thought. We could have a whole new set of Unix tools to modify our everything-is-a-file-with-badass-metadata system. Until then I don't see any way for filesystem maintenance to be a huge issue on these multi-TB systems. All you can really do with the FS is determine which system needs more space and order more disks. You can't trim or manage it with the FS.
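
    Even if the filesystem can only tell you "this mount needs more disk", that much is easy to automate. A rough sketch, using the two mounts named above purely as examples:

    import os

    WATCH = ["/oradata", "/filestores"]   # example mounts from this post
    THRESHOLD = 0.90                      # complain above 90% used

    for mount in WATCH:
        st = os.statvfs(mount)
        used = 1.0 - float(st.f_bavail) / st.f_blocks
        flag = "  <-- time to order more disks" if used > THRESHOLD else ""
        print("%-15s %5.1f%% used%s" % (mount, 100.0 * used, flag))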

    I'm wrong a lot though, but that's my take on the "issue".
  • by Desmoden ( 221564 ) on Tuesday July 29, 2003 @09:21PM (#6566662) Homepage

    I have an 8.8TB (raw) EMC Symm here, and another one in Austin, TX. Then I have another 600GB or so in Sun JBOD. Most hosts are connected to the EMC over a 2Gb SAN using Brocades.

    I just got this environment a couple weeks ago, and there was NO documentation. So figuring out disk usage over 15 systems has been a nightmare.

    As much as I hate to admit it, EMC's ECC and Storage Scope have been a huge help. I could have done it using Veritas as well, but the EMC tools are nice.

    And I'm soon going to add another 2TB SATA array over iSCSI, so then we'll see if ECC can really manage "other people's disks" :)

    But welcome to the new, exciting field of Storage/SAN architect/admin. With arrays from HDS and EMC coming in 46TB flavors and more, resource management is a big job.

    For instance, my counterpart (he handles Windows, I handle *nix) found that we had 1.2TB of BCVs (a third mirror, a weird EMC-ism) that had never been used!

    So it's all about documentation. And do yourself a huge favor and come up with a clean, scalable disk/LUN/volume/mountpoint naming convention. This can be so critical. Not sure what other people do, but I'll have something like:

    host 1: dg001-dg005
    host 2: dg006-dg010
    host 3: dg011-dg015

    the disks are enclosure_name_id_lun#

    volumes are v{dg}{v#}, so the first volume for disk group dg006 would be v1601.

    Then mount points are based on either oracle{SID} or port, or app name or something logical.

    Keep everything unique! Then you can move LUNs from host to host without having issues. This also lets you generate usage reports and know, if v3206 is at 68%, exactly what host, disk group, volume, and app are involved.
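
    To make the payoff concrete, here is a tiny sketch of the kind of lookup a strict convention buys you: with the host-to-disk-group ranges above, any disk group name resolves straight back to its host without touching the array (ranges copied from the example; adjust to taste).

    HOST_RANGES = {"host1": (1, 5), "host2": (6, 10), "host3": (11, 15)}

    def host_for(dg_name):
        """dg006 -> host2, dg013 -> host3, and so on."""
        n = int(dg_name.lstrip("dg"))
        for host, (lo, hi) in HOST_RANGES.items():
            if lo <= n <= hi:
                return host
        raise ValueError("disk group %s is outside every known range" % dg_name)

    print(host_for("dg006"))   # -> host2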

    Ugh, sorry, went off, but I'm in this mess right now myself and so have some very strong feelings about it =)
  • by Anonymous Coward on Wednesday July 30, 2003 @01:20AM (#6568061)
    Just take all the data on your disks, and do a low-level scan. Basically, take each byte and perform a parity check so you get a 1 if there's an odd number of 1's in the binary representation, and a zero otherwise.

    Now, here's the secret: take all these zeros and ones, and do a parity check on THEM. BLAM! Your entire array is now down to ONE status bit!!!

    Now take a big crayon and write that status bit on a piece of your favorite color paper. Put it up in the machine room for all to see. Or just slip it in your drawer if you think that letting this kind of information out is a security leak. Your call.

    Then, repeat the process once an hour or so. Today's arrays are so fast that it shouldn't take long. Each time you get the digit, the zero or the one, compare it to the last output. If it's changed (for example, from 1 to 0 or 0 to 1), then WHAMO, you've got SOMETHING going on, better check it out!!!

    This "early warning system" gave me a "heads up" to some serious probablems more than once. You might want to check it out, so-called "storage experts" EMC didn't even have a package to do this so you might do a little coding in VB, but it's worth it!
  • by Chagatai ( 524580 ) on Wednesday July 30, 2003 @12:57PM (#6571762) Homepage
    In the Unix environments in which I've worked (predominantly AIX and Linux), large disk/file system management comes down to two things: granularity and documentation. First, you have to consider how specific you want to be when laying out your data.

    For example, on one recent project I worked on, a PeopleSoft/Oracle environment was built on a pSeries system. There were instances for every conceivable piece of the architecture, which led to 50+ file systems. And we're not talking about 50+ file systems off of /, but file systems within file systems within file systems. This was good for separating data but made df an ugly mess.

    Conversely, I worked on another project with a homebrew app designed to track tickets. Rather than using a database, the genius who designed the methodology stored every ticket as a 1024-byte file. This caused the system to eat up inodes even though the NBPI was set to 1024, and it had an additional fun feature: the ls command could not work. With over 200,000 files per file system (all in one directory), ls could not read in all of the files. The homebrew guy actually had to write an app to crack open the inode table just to list the files. When you set up your environment, first consider what degree of granularity you need.
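
    For what it's worth, the "list 200,000 files in one directory" problem can be handled today without cracking the inode table by hand. A sketch in modern Python (anachronistic for a 2003 AIX box, but it shows the idea): stream directory entries one at a time instead of building and sorting the whole list the way ls does.

    import os, sys

    path = sys.argv[1] if len(sys.argv) > 1 else "."
    count = 0
    with os.scandir(path) as it:
        for entry in it:
            count += 1
            if count <= 10:               # print only a small sample
                print(entry.name, entry.stat(follow_symlinks=False).st_size)
    print("total entries:", count)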

    Next, document everything. Consider this situation: an HP Virtual Array with 100 LUNs, each cabled to two Brocade switches for redundancy, going to ten different systems. Would you know, just from popping a cable, what the effects would be? Documentation is key for managing large disk/file system environments. This also applies to naming file systems, logical volumes, volume groups, and any other part of your system.

    You could check out commercial software such as Veritas' new Storage Reporter [veritas.com] (formerly Precision Software's product).

    Should be available for Linux among other OSes.
