
Auditing Large Unix File Systems?

Cliff posted more than 10 years ago | from the getting-the-whole-picture dept.

Data Storage

jstockdale asks: "The recent article on perpendicular recording hard drive technology brought me, as a unix(tm) admin, to reflect on the management of data systems and file servers of capacities >1TB (which exist today and tomorrow will become commonplace). Since Google for once seems useless, what suggestions does the Slashdot crowd have on methods and software to audit changes, visualize file system usage, and in general to determine the qualitative and quantitative nature of the content of large unix file systems?"


21 comments

Boobies! (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#6565230)

(That's first post to farkers!)

What I do (0)

Anonymous Coward | more than 10 years ago | (#6565235)

Visualize file system usage: df

Audit changes: diff

Determine the qualitative nature of a large unix file system: How many positive reviews I see of it on googlefight, how shiny the box for the drive looks.

Determine the quantitative nature of a large unix file system: Number of cylinders, number of heads, number of platters, number of sectors, bytes in a sector, weight, physical dimensions.

Thank you. I'll be here all week. This is not funny.

Same (0)

Tom7 (102298) | more than 10 years ago | (#6565279)

It's the same as visualizing a small file system, except that the files are an order of magnitude bigger (DVD rips, databases...)

No, Different (3, Insightful)

ArmorFiend (151674) | more than 10 years ago | (#6565443)

To make a guess: perhaps, as storage is growing fast but read times are not, his drives are getting filled up faster than he can run "du -s -m | sort -n" to figure out who's filling them up?

Also, who's to say file size increases with storage capacity? Perhaps at his site, the number of files increases with storage capacity, with the file size staying statistically constant. MB for MB, traversing lots of little files is harder than traversing a few big files.
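A minimal sketch of doing that scan in one pass with Python instead of repeated du runs, if the du round trips themselves become the bottleneck (the mount point below is just a placeholder):

    import os

    ROOT = "/export/home"  # hypothetical mount point

    # Rough equivalent of "du -s -m * | sort -n": one walk of the tree,
    # then per-top-level-directory totals sorted by size.
    totals = {}
    for dirpath, dirnames, filenames in os.walk(ROOT):
        # charge every file's size to the top-level directory it lives under
        rel = os.path.relpath(dirpath, ROOT)
        top = rel.split(os.sep)[0] if rel != "." else "."
        for name in filenames:
            try:
                size = os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                continue  # file vanished or is unreadable; skip it
            totals[top] = totals.get(top, 0) + size

    for top, size in sorted(totals.items(), key=lambda kv: kv[1]):
        print("%8d MB  %s" % (size // (1024 * 1024), top))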

Re:Same (2, Insightful)

rthille (8526) | more than 10 years ago | (#6566132)

This could be especially wrong if he's running something like a news server, where, as more disk space becomes available, he ups the retention time. Archival storage of email (in a one-file-per-message system, like Maildir) would have the same problem.

OP : see also, ZTree (1)

Glonoinha (587375) | more than 10 years ago | (#6566881)

What you want is a simulacrum of ZTree, which is a shareware version of XTree Gold; the version you are looking for would of course run on your preferred OS (which IIRC ZTree does not).

The arena gets larger and the things you are tracking get larger, but the ability to view branches of your directory structure and treat all the files on the entire set of drives as one congruent bunch (sort them all by size, date, extension for temporary files, etc.) will be invaluable.

If nothing exists that replicates the functionality of XTree Gold on your OS (i.e., ZTree), then learn the functionality and replicate it: all it does is recursively read the directory contents of each dir on the drive, keep them all in memory, and then let you work on them irrespective of directory (global access to your files, across all the logged drives).

www.ztree.com

And no, I don't work for them, but I have been an XTree fan since about 1986 or thereabouts.
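A minimal Python sketch of that XTree-style flat view, assuming a made-up root path: scan once, keep every file in memory, then sort or filter across all directories at once.

    import os

    ROOT = "/srv/data"  # placeholder; point this at the drives you log

    # "Flat view": one scan, then every file is addressable regardless of
    # which directory it lives in.
    entries = []  # (size, mtime, path) for every file under ROOT
    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            entries.append((st.st_size, st.st_mtime, path))

    # Largest files first, irrespective of directory:
    for size, mtime, path in sorted(entries, reverse=True)[:50]:
        print("%12d  %s" % (size, path))

    # The same in-memory list can be re-sorted by extension, date, etc.
    # without rescanning the disks.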

I like treemaps (5, Informative)

Krellan (107440) | more than 10 years ago | (#6565432)

I like the idea of treemaps.

http://www.cs.umd.edu/hcil/treemap-history/index.shtml [umd.edu]


Hehe, it was originally made to see what was taking up all the room on an 80MB hard disk :)

There are various programs available based on this concept, most working like "du" except that you get the results graphically. You typically see a large picture on screen of which directories and files are taking up the most space. It looks like a piece of Mondrian artwork, with the size of each rectangle corresponding to the amount of space taken, so it is easy to see at a glance what is hogging all of the disk space. You can drill down, of course, by clicking to zoom in.
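For the curious, the layout idea itself is simple. A minimal "slice and dice" sketch in Python that turns nested sizes into rectangles; the toy directory tree is made up, and real tools typically use fancier squarified layouts:

    # Slice-and-dice treemap: split the parent rectangle along alternating
    # axes, giving each child a slice proportional to its size.
    def treemap(node, x, y, w, h, depth=0):
        name, size, children = node
        print("%s%-8s %4d  rect=(%.0f, %.0f, %.0f, %.0f)"
              % ("  " * depth, name, size, x, y, w, h))
        if not children:
            return
        total = float(sum(c[1] for c in children))
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:  # split horizontally at even depths
                treemap(child, x + offset * w, y, w * frac, h, depth + 1)
            else:               # split vertically at odd depths
                treemap(child, x, y + offset * h, w, h * frac, depth + 1)
            offset += frac

    # Toy tree: (name, size, children), sizes in arbitrary units
    root = ("/", 100, [
        ("home", 60, [("alice", 45, []), ("bob", 15, [])]),
        ("var",  30, [("log", 25, []), ("spool", 5, [])]),
        ("etc",  10, []),
    ])
    treemap(root, 0, 0, 800, 600)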

A quick Google search revealed SequoiaView:

http://www.win.tue.nl/sequoiaview/ [win.tue.nl]


Unfortunately this only runs on Windows, but I'm sure there are similar Linux programs available.

Re:I like treemaps (3, Interesting)

cicadia (231571) | more than 10 years ago | (#6566721)

A similar program for Linux is FSV [sourceforge.net]. In one mode, it gives you a 3D (OpenGL) view of just such a map of your filesystem.

Screenshots [sourceforge.net]

Re:I like treemaps (1)

addaon (41825) | more than 10 years ago | (#6567920)

Anyone know of a similar OS X program?

Re:I like treemaps (1)

#undefined (150241) | more than 10 years ago | (#6583177)

maybe try one of:

1. tkdu
   http://unpythonic.dhs.org/~jepler/tkdu/
   tkinter (= python + tk), gpl

2. treemap
   http://www.cs.umd.edu/hcil/treemap
   java, mentioned here [uni-bielefeld.de], can't charge for redistribution

3. xdiskusage
   http://xdiskusage.sourceforge.net/
   gpl, *nix only

i tried treemap first, and it was REALLY slow and consumed tons of memory. i'm testing it under windows, but will also use it for linux. matter of fact, rerunning it on another smaller directory (though still gigabytes large with thousands of files) caused the program to bail (java had exceeded its stack size or something like that). granted, treemap can also display treemap-formatted "database" files, but i just need it for viewing filesystem usage.

tkdu has all the features i need (ascend/descend into directories, change the displayed filesystem depth, etc), has a simpler interface, ran in a few seconds (compared to a few minutes for treemap), and used ~80% less memory (20 MB vs 90 MB).

for headless servers, i'm thinking about modifying tkdu to save the filesystem data as pickles, so that it can be displayed on a workstation. this way i could run tkdu nightly from a cron job, and analyze filesystem usage the next morning on my workstation.
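a rough sketch of that scan-and-pickle idea (not tkdu's own code or on-disk format; the paths are placeholders): walk the filesystem from cron, pickle cumulative per-directory sizes, and load the pickle later on a workstation.

    import os, pickle, time

    ROOT = "/export"                 # placeholder mount point
    OUT = "/var/tmp/dusage.pickle"   # placeholder output path

    # Cumulative size of everything beneath each directory. topdown=False
    # means children are visited first, so their totals already exist when
    # the parent is processed.
    sizes = {}
    for dirpath, dirnames, filenames in os.walk(ROOT, topdown=False):
        total = 0
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass
        for name in dirnames:
            total += sizes.get(os.path.join(dirpath, name), 0)
        sizes[dirpath] = total

    with open(OUT, "wb") as f:
        pickle.dump({"scanned": time.time(), "root": ROOT, "sizes": sizes}, f)

a nightly crontab entry (e.g. "0 2 * * * /usr/local/scripts/scan_usage.py", path hypothetical) would then leave a fresh snapshot to copy over and browse in the morning.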

of course, i don't know how any of them work with mac packages/bundles. the web page that mentions treemap says that it has problems under *nix-like systems because it doesn't understand symlinks. i have yet to test any of them under linux.

all of the above research was prompted by reading this article, so i don't know that much (yet!).

Re:I like treemaps (2, Interesting)

hellgate (85557) | more than 10 years ago | (#6568556)

> Unfortunately this only runs on Windows, but I'm sure there are similar Linux programs available.

kdirstat [sourceforge.net].
The currently available devel version comes with a treemap view.

With TB-sized installations you will probably want some additional tools (or at least have kdirstat import a database built by a daily cronjob [1]); running a complete scan will take forever.

[1] Some coding involved :-)

xdu (1)

hubertf (124995) | more than 10 years ago | (#6565491)

I always found xdu nice for seeing where the disk space is burned, but I understand it doesn't scale too well for large filesystems.

Maybe the quota subsystem can be used for something... :-)

- Hubert

Rapid advances in technology (0, Offtopic)

VisorGuy (548245) | more than 10 years ago | (#6565743)

capacities >1TB (which exist today and tomorrow will become commonplace)

>1TB capacities commonplace tomorrow!? Wow!!
Too bad for me that payday is almost two weeks away...

Re:Rapid advances in technology (1)

innosent (618233) | more than 10 years ago | (#6565892)

>1TB capacities commonplace tomorrow!? Wow!!

Damn right, wow. That's a lot of MP3's. What's the RIAA going to do then?

Tivoli (2, Informative)

keesh (202812) | more than 10 years ago | (#6565843)

produce a load of products that do all that kinda thing for you. Expensive, but then you could always employ a bunch of monkeys, erm, summer students to do the same stuff manually...

Huh? (3, Insightful)

pi_rules (123171) | more than 10 years ago | (#6565951)

Don't get me wrong here -- in the event that you have a multi-TB system, keeping track of usage with 'du' really just isn't practical, but do you really even have to ASK the box where the data is?

Our backend storage system for my project is 1TB, or at least very close to it. I don't manage the box, but I do work with it. It holds three things:

1) Its OS (small)
2) Its Oracle database files (300GB on disk, about 200GB used now)
3) Files. Word documents, CAD drawings, TIF, GIF, etc. A whole slew of them.

The admin knows what's using what. Under /oradata there's the database. It gets its own space. When that gets full he doesn't do an 'rm' to clean things up -- he has to use Oracle tools to do that.

Under /filestores there's a giant mess of crap that nobody can make heads or tails of. They're hashed filenames from a PDM system. He can't do anything to clean that out without -- you guessed it -- using the PDM system. The filesystem itself has NO idea what's going on in either of its major usage sections. It's just "stuff", and to rm -rf a directory because you're running out of space would be foolish.

When filesystems can actually hold metadata regarding their contents, then I'd give this question some thought. We could have a whole new set of Unix tools to modify our everything-is-a-file-with-badass-metadata system. Until then I don't see any way for filesystem maintenance to be a huge issue on these multi-TB systems. All you can really do with the FS is determine which system needs more space and order more disks. You can't trim or manage it with the FS.

I'm wrong a lot though, but that's my take on the "issue".

(Warning: Here there be Trolls!) (2, Funny)

cookd (72933) | more than 10 years ago | (#6566511)

WinFS?

I am facing the same problem... (4, Informative)

Desmoden (221564) | more than 10 years ago | (#6566662)


I have an 8.8TB (raw) EMC Symm here, and another one in Austin, TX. Then I have another 600GB or so in Sun JBOD. Most hosts are connected to the EMC over a 2Gb SAN using Brocades.

I just got this environment a couple weeks ago, and there was NO documentation. So figuring out disk usage over 15 systems has been a nightmare.

As much as I hate to admit it, EMC's ECC and Storage Scope have been a huge help. I could have done it using Veritas as well, but the EMC tools are nice.

And I'm soon going to add another 2TB SATA array over iSCSI, so then we'll see if ECC can really manage "other people's disks" :)

But welcome to the new, exciting field of storage/SAN architect/admin. With arrays from HDS and EMC coming in 46TB flavors and more, resource management is a big job.

For instance, my counterpart (he handles Windows, I handle *nix) found that we had 1.2TB of BCVs (it's a 3rd mirror, a weird EMC-ism) that had never been used!

So it's all about documentation. And do yourself a huge favor and come up with a clean, scalable disk/LUN/volume/mountpoint naming convention. This can be so critical. Not sure what other people do, but I'll have something like:

host 1: dg001-dg005
host 2: dg006-dg010
host 3: dg011-dg015

the disks are enclosure_name_id_lun#

volumes are v{dg}{v#} so first volume for disk group dg006 would be v1601.

Then mount points are based on either oracle{SID} or port, or app name or something logical.

Keep everything unique! Then you can move LUNs from host to host without having issues. This also allows you to generate usage reports and know, when v3206 is at 68%, exactly which host, disk group, volume, and app are involved.

Ugh, sorry, went off a bit, but I'm in this mess myself right now and so I have some very strong feelings about it =)

easy! use The Parity System (5, Funny)

Anonymous Coward | more than 10 years ago | (#6568061)

Just take all the data on your disks, and do a low-level scan. Basically, take each byte and perform a parity check so you get a 1 if there's an odd number of 1's in the binary representation, and a zero otherwise.

Now, here's the secret: take all these zeros and ones, and do a parity check on THEM. BLAM! Your entire array is now down to ONE status bit!!!

Now take a big crayon and write that status bit on a piece of your favorite color paper. Put it up in the machine room for all to see. Or just slip it in your drawer if you think that letting this kind of information out is a security leak. Your call.

Then, repeat the process once an hour or so. Today's arrays are so fast that it shouldn't take long. Each time you get the digit, the zero or the one, compare it to the last output. If it's changed (for example, from 1 to 0 or 0 to 1), then WHAMO, you've got SOMETHING going on; better check it out!!!

This "early warning system" gave me a "heads up" to some serious probablems more than once. You might want to check it out, so-called "storage experts" EMC didn't even have a package to do this so you might do a little coding in VB, but it's worth it!

Granularity and documentation (2, Interesting)

Chagatai (524580) | more than 10 years ago | (#6571762)

In the Unix environments in which I've worked (predominantly AIX and Linux), large disk/file system management comes down to two things: granularity and documentation. First, you have to consider how specific you want to be when laying out your data.

For example, on one recent project on which I worked, a PeopleSoft/Oracle environment was built on a pSeries system. There were instances for every conceivable piece of the architecture, which led to 50+ file systems. And we're not talking about 50+ file systems off of /, but file systems within file systems within file systems. This was good for separating data but made df an ugly mess.

Conversely, I worked on another project with a homebrew app designed to track tickets. Rather than using a database, the genius who designed the methodology stored every ticket as a 1024-byte file. This caused the system to eat up inodes even though the NBPI was set to 1024, and it caused an additional fun feature: the ls command could not work. With over 200,000 files per file system (all in one directory), ls could not read in all of the files. The homebrew guy actually had to write an app to crack open the inode table to list the files. When you set up your environment, first consider what degree of granularity you need.
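When ls gives up on a directory like that (especially "ls *" through the shell, or anything that has to sort the whole listing before printing it), reading the entries directly and streaming them usually still works. A minimal Python sketch, with a made-up directory path:

    import os

    HUGE_DIR = "/data/tickets"  # placeholder for the 200,000-file directory

    # Stream directory entries one at a time instead of building, stat'ing,
    # and sorting the whole listing first.
    count = 0
    with os.scandir(HUGE_DIR) as it:
        for entry in it:
            count += 1
            if count <= 10:          # print a few as a sanity check
                print(entry.name)
    print("total entries:", count)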

Next, document everything. Consider this situation: an HP Virtual Array with 100 LUNs, each cabled to two Brocade switches for redundancy, going to ten different systems. Would you know, just by popping a cable, what the effects would be? Documentation is key for managing large disk/file system environments. This also applies to naming file systems, logical volumes, volume groups, and any other part of your system.

commercial solution (1)

kerubi (144146) | more than 10 years ago | (#6572237)

You could check out commercial software such as Veritas' new Storage Reporter [veritas.com] (formerly Precision Software's product).

Should be available for Linux among other OSes.
