
Rage Against the File System Standard

CmdrTaco posted more than 12 years ago | from the stuff-to-read dept.

Linux | 612 comments

pwagland submitted a rant by Mosfet on file system standards. I think he's sort of oversimplified the whole issue, and wrongly assigned blame, but it definitely warrants discussion. Why does my /usr/bin need 1500 files in it? Is it the fault of lazy distribution package management? Or is it irrelevant?


fp (-1)

kahuna720 (56586) | more than 12 years ago | (#2595632)

f p

fp (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2595635)

props to #!

lameness filter SUCKS!! (-1, Offtopic)

Faulty Dreamer (259659) | more than 12 years ago | (#2595640)

[ASCII art, mangled in formatting: a figure holding a sign that reads "Eat My Nuts".]

Re:lameness filter SUCKS!! (-1, Offtopic)

robvasquez (411139) | more than 12 years ago | (#2595665)

it's about time we had some good ascii art again.

Why not go the extra step (2, Insightful)

Hektor_Troy (262592) | more than 12 years ago | (#2595641)

and just install in /?

Who in their right mind places stuff outside of a program specific folder, if it's not gonna be used in multiple programs (like shared libraries)?

Re:Why not go the extra step (0)

Angry White Guy (521337) | more than 12 years ago | (#2595696)

That's what I thought symlinks were for. Put everything in separate folders, then link what you need.

Avoid Software Bloat: rm -rf * daily

AWG

Still new to GNU/Linux (2, Interesting)

PigeonGB (515576) | more than 12 years ago | (#2595642)

Is it really that bad? Would I not have much control over where programs get installed to?
I would think that even without a package handler to do it for me, the program itself would allow me to say where it should be installed...or is that just the Windows user in me talking?

Re:Still new to GNU/Linux (0)

CounterZer0 (199086) | more than 12 years ago | (#2595649)

You can compile most things with a --prefix=/path/to/install option, which I believe works with RPM too... I'm not sure about RPMs though.

The Alternative? (4, Redundant)

Mike Connell (81274) | more than 12 years ago | (#2595647)

I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH

Mike

Re:The Alternative? (0)

k4m3 (259891) | more than 12 years ago | (#2595670)

Who needs 2000 applications (aka entries in PATH) on one computer ?
By the way, are you sure it's still a microcomputer ?

Re:The Alternative? (2, Interesting)

dattaway (3088) | more than 12 years ago | (#2595672)

Is there such a thing as a recursive PATH directive for executables? Like ls -R or something, for searching into subdirectories?

Re:The Alternative? (3, Interesting)

kaisyain (15013) | more than 12 years ago | (#2595676)

You would only need 2000 path entries if you expect your shell to have the exact same semantics that it does today. There is no reason whatsoever that PATH couldn't mean "for every entry in my PATH environment variable, look for executables in */bin". A smart shell could even hide all of this behind the scenes for you and provide a shell variable SMART_PATH that gets expanded to the big path for legacy apps.

Or you could do what DJB does with /command and symlink everything to one place. Although I'm not sure if that solves the original complaint. Actually, I'm not sure what the original complaint is, having re-read the article.
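For illustration, a minimal sketch of that "*/bin" idea in plain Bourne shell (the /opt layout and the SMART_PATH name are borrowed from the comment above, not an existing shell feature):

    # expand every per-package bin directory under /opt into one search path
    SMART_PATH=""
    for d in /opt/*/bin; do
        [ -d "$d" ] && SMART_PATH="$SMART_PATH:$d"
    done
    PATH="/bin:/usr/bin$SMART_PATH"
    export PATH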

Re:The Alternative? (5, Insightful)

Meleneth (104287) | more than 12 years ago | (#2595688)

*sigh*

has anyone heard of symlinks? the theory is very simple - install the app into /opt/foo or wherever, then symlink to /usr/local/bin. yawn.

or is that one of those secrets we're not supposed to tell the newbies?
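As a minimal sketch of that recipe (the package name and paths are only examples):

    $ ./configure --prefix=/opt/foo && make && make install
    $ ln -s /opt/foo/bin/foo /usr/local/bin/foo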

Re:The Alternative? (0)

Anonymous Coward | more than 12 years ago | (#2595718)

ya still have 2000+ files in /usr/bin mr nonnewbie

Re:The Alternative? (1)

swright (202401) | more than 12 years ago | (#2595806)

The point is, though, that the applications themselves live in good places and are easy to manage.

It doesn't really matter having 2000 files in a directory; the point is organisation of the important things [read: apps].

Apps somewhere sensible and symlinks in /usr/bin or wherever sounds like the best way to me :)

Re:The Alternative? (0)

Anonymous Coward | more than 12 years ago | (#2595690)

Why the heck does slopware need to access the system path anyway? Doesn't argv[0] (or whatever the Windoze equivalent is) tell you where you are running your code from?

It is a matter of a few lines of code to get to your directory path and all the application-related files.

Re:The Alternative? No Alternative! (2, Interesting)

CrazySecurityGuy (529210) | more than 12 years ago | (#2595704)

Uh huh. And when something goes terribly wrong, how do you determine what went wrong? Our production servers (HPUX, Solaris, AIX) have in /usr/* only what the system supplied. Everything else gets put in its "proper place" - either /opt/, or /usr/local/ (its own filesystem) or similar. The paths are not so bad - and the system is healthy and clean. The alternative? A system easily attacked with a trojan horse.

Re:The Alternative? (1)

chad_r (79875) | more than 12 years ago | (#2595708)

I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH

... and/or $LD_LIBRARY_PATH, plus $KDEDIR, $QTDIR, etc.

But as a Slackware user compiling things myself under /opt using --prefix=/opt/whatever, I haven't had much problem with this. I can always symlink if I need to, or set LD_LIBRARY_PATH specific to the application as needed.

Re:The Alternative? (1)

Oo.et.oO (6530) | more than 12 years ago | (#2595709)

AMEN!

but as one user pointed out maybe we need a smarter shell to deal with recursive PATHs.

i hate package managers as much as the next person, but i feel like a smart shell to expand recursive variables would help, even though it does take _some_ control away from the power user, just as package managers do (apt-get being the exception).

the symlink thing doesn't work for a lot of apps because they are too dumb to load libraries from their "home" directory if it's not where the binary is. (ahem, NETSCAPE)
so the symlink thing doesn't complicate your PATH, it just gives you 2000 $APP_HOME variables.

Re:The Alternative? Easy. (-1, Redundant)

Anonymous Coward | more than 12 years ago | (#2595712)

Make a directory to hold program launch scripts that set up the environment (including PATH) and then exec each application.
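A tiny sketch of one such launcher (the application name and the /opt paths are hypothetical):

    #!/bin/sh
    # /usr/local/launchers/someapp -- set up the environment, then hand over to the real binary
    PATH=/opt/someapp/bin:$PATH
    LD_LIBRARY_PATH=/opt/someapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    export PATH LD_LIBRARY_PATH
    exec /opt/someapp/bin/someapp "$@"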

GNU and /usr/local (0)

Anonymous Coward | more than 12 years ago | (#2595728)

All GNU software gets dumped into /usr/local/* by default unless you pass configure an option. That is probably why most of this /usr directory dumping started in the first place.

Re:The Alternative? (5, Informative)

Anonymous Coward | more than 12 years ago | (#2595732)

I'd much rather have 2000 binaries in /usr/bin than 2000 path entries in my $PATH



Here's what every unix administrator I know (including myself) does:

  1. everything is installed in /opt, in its own directory:

    example$ ls /opt
    apache emacs krb5 lsof mysql openssl pico ucspi-tcp
    cvs iptables lprng make-links openssh php qmail

    (pico is for the PHBs, by the way)
  2. Every version of every program gets its own directory

    example$ ls /opt/emacs
    default emacs-21.1

  3. Each directory in /opt has a 'default' symlink to the version we're currently using

    example$ ls -ld /opt/emacs/default
    lrwxrwxrwx 1 root root 10 Oct 23 16:33 /opt/emacs/default -> emacs-21.1

  4. You write a small shell script that links everything in /opt/*/default/bin to /usr/local/bin, /opt/*/default/lib to /usr/local/lib, etc.

Uninstalling software is 'rm -rf' and a find command to delete broken links. Upgrading software is making one link and running the script to make links again. No need to update anyone's PATH on a multi-user system and no need to mess with ld.so.conf. You can split /opt across multiple disks if you want. NO NEED FOR A PACKAGE MANAGER. This makes life much easier, trust me.
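A possible version of the step-4 link script, as an untested sketch that assumes exactly the /opt/*/default layout described above (the list of subdirectories to mirror is arbitrary here):

    #!/bin/sh
    # re-point /usr/local/{bin,sbin,lib,man,include} at every /opt/*/default tree
    for sub in bin sbin lib man include; do
        mkdir -p "/usr/local/$sub"
        for f in /opt/*/default/$sub/*; do
            [ -e "$f" ] && ln -sf "$f" "/usr/local/$sub/"
        done
    done
    # prune links whose targets have gone away (e.g. after an 'rm -rf' uninstall)
    find /usr/local -type l ! -exec test -e {} \; -exec rm -f {} \;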

Re:The Alternative? (2, Informative)

El Prebso (135671) | more than 12 years ago | (#2595759)

There is actually a package manager that does all this for you, only it makes everything a lot easier.

http://pack.sunsite.dk/

Re:The Alternative? (1)

pipacs (179230) | more than 12 years ago | (#2595737)

For many (graphical) applications, a desktop icon or menu entry should be enough - there is no need to add them to $PATH

Re:The Alternative? (2)

Zocalo (252965) | more than 12 years ago | (#2595764)

I don't recall LFS saying you couldn't use "/usr/appname", so the article title is a bit misleading, but you certainly don't need 2000 entries in your path. The best solution I can see is for coders of multi-binary applications to take a leaf out of Windows' book and use the equivalent of "C:\Program Files\Common Files": an application- (or environment-, or vendor-) specific directory for programs that only other programs need to use, say "/usr/appname/" for binaries and "/usr/lib/appname/" for libraries.

Re:The Alternative? (do we actually need one) (0, Flamebait)

A_Milne (303821) | more than 12 years ago | (#2595798)

Do you really need to have all your apps in the path at all?

I mean, there are hundreds of apps installed that I (you?) will never use. I don't really need them in my path as long as there is a logical place to find them.

So I keep a good directory organisation, and then only add to my path the apps I really want to use a lot/quickly (ll, vi, emacs, nedit, aCC, perl etc).

The rest I can just refer to by full path. It's not much harder to type /usr/sbin/swlist.

Stupidly complicated suggestion: update the shell to add any command you run to the path and keep it there until it's been unused for a week. So the first time I enter /usr/sbin/swlist; after that, as long as I keep using it, it's just swlist.

Andrew

better command path system? (3, Insightful)

TechnoVooDooDaddy (470187) | more than 12 years ago | (#2595648)

imo, we need a better command path system thingy that allows easier categorization of executables and other stuff... Win32 has the System32 (or System) directory, *nix has /usr/bin, /usr/share/bin, /usr/local/bin etc...

I don't have a solution, but i'll devote a few idle cycles to it...

Re:better command path system? (1)

nochops (522181) | more than 12 years ago | (#2595682)

Since when do binaries go in the System or System32 directory?

Haven't you heard of \Program Files?

Re:better command path system? (2)

Segfault 11 (201269) | more than 12 years ago | (#2595748)

Binaries have gone into COMMAND or SYSTEM32 for ages. It's where small Windows support programs (SYSEDIT, MSCONFIG, etc.), and MS-DOS console apps like XCOPY live:

C:\WINDOWS\COMMAND (9x)
C:\WINNT\SYSTEM32 (NT)

Re:better command path system? (2)

snake_dad (311844) | more than 12 years ago | (#2595786)

c:\windows\system...

oh yes, this is the way to go. Hundreds of applications, each storing different versions of the same needed system or application dll's in one dir, overwriting the one version that worked....
</sarcasm>

There is a reason that binaries are spread over different partitions on Real Operating Systems....

btw, it's nice to see that html-formatting is actually making sense in my first line..: <br><br> :-)

he's pretty far off base (5, Interesting)

kaisyain (15013) | more than 12 years ago | (#2595652)

Anyone who claims that RedHat started the use of /usr/bin as a dumping ground can't be taken seriously. I'm pretty sure Slackware and SLS did the same thing. Same goes for Solaris, AIX, A/UX, SunOS, Irix, and HPUX.

It's not about lazy distributors. It's about administrators who are used to doing things this way and distributors going along with tradition.

I think it is better... (5, Insightful)

nll8802 (536577) | more than 12 years ago | (#2595657)

I think it is better to install all your programs' binaries under a subdirectory, then symlink the executables into the /bin, /usr/bin or /usr/local/bin directories. This gives you a much easier way to remove programs that don't include an uninstall script, and is a lot more organized.

Re:I think it is better... (2)

sbeitzel (33479) | more than 12 years ago | (#2595684)

You'd still have to clean up all the symlinks, so you're not really buying yourself anything.

It's true that having all the files associated with a given package in a single location makes it easy to see what-all you've got and which files belong to which package, but you'll still require something that will clean up all the symlinks that point off to nowhere.

Re:I think it is better... (0)

Anonymous Coward | more than 12 years ago | (#2595725)

GNU STOW does a pretty good job of this.

Re:I think it is better... (3, Insightful)

ichimunki (194887) | more than 12 years ago | (#2595734)

Yes, but dead symlinks are easy to see (on my system they make an annoying blinking action) and scripts can be written that recurse down the directory tree looking for invalid links. Another positive argument in favor of this approach is that many packages include several binaries, only one or two of which are ever going to be called directly from the command line in a situation where using a full path is not convenient. This also makes version control a lot more obvious (and having simultaneous multiple versions a lot easier, too).

Re:I think it is better... (5, Informative)

Daniel Serodio (74295) | more than 12 years ago | (#2595766)

No need to do the dirty work by hand, that's what GNU Stow [gnu.org] is for. Quoting from the Debian package's description:
GNU Stow helps the system administrator organise files under /usr/local/ by allowing each piece of software to be installed in its own tree under /usr/local/stow/, and then using symlinks to create the illusion that all the software is installed in the same place.
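Typical Stow usage looks something like this (the package name and version are only an example):

    $ ./configure --prefix=/usr/local/stow/foo-1.2 && make && make install
    $ cd /usr/local/stow
    $ stow foo-1.2      # populate /usr/local/{bin,lib,man,...} with symlinks
    $ stow -D foo-1.2   # remove those symlinks again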

Re:I think it is better... (1)

rhost89 (522547) | more than 12 years ago | (#2595800)

That's what 'symlinks -d /path/to/symlink/bin' is for ;)

Re:I think it is better... (0)

Anonymous Coward | more than 12 years ago | (#2595686)

But you still have the entry. A much better way would be a PATH that has a recursive directory flag.

Re:I think it is better... (1)

EnderWiggnz (39214) | more than 12 years ago | (#2595758)

PATH with recursion?

No way. Way too open to trojan attacks. It basically means that you don't know what, exactly, is in your path.

What about directory symlinks? Someone puts a link in /usr/bin/foo -> /nasty/trojan/dir and then all of a sudden your path searches through there.

It would also be a horrible waste of resources to search recursively through all these directories.

No, bad idea.

Re:I think it is better... (2)

cpuffer_hammer (31542) | more than 12 years ago | (#2595750)

Could this not be done with some kind of auto mirroring?

Each application would have its own tree and could have bin, sbin, lib and/or other directories.
These directories would be marked or registered so that they would appear as if they were part of /bin or /sbin exec... That way we only need a short path but we still maintain application separation.

Re:I think it is better... (2)

aozilla (133143) | more than 12 years ago | (#2595787)

I'd go one step further. Chroot the programs and hard link the required libraries into the chroot directory. Then you don't have to worry about annoying upgrade problems when one package insists on one set of libraries and another package insists on another. Also, when the hard link count goes down to 1 (the one in the master /lib directory), you can delete the file.

sounds like Encap (5, Informative)

_|()|\| (159991) | more than 12 years ago | (#2595830)

I think it is better to install all your programs' binaries under a subdirectory, then symlink the executables

You want the Encap package management system [uiuc.edu] . From the FAQ [uiuc.edu] :

When you install an Encap package, the files are placed in their own subdirectory, usually under
/usr/local/encap. For example, if you install GNU sed version 3.02, the following files will be included:
  • /usr/local/encap/sed-3.02/bin/sed
  • /usr/local/encap/sed-3.02/man/man1/sed.1
Once these files have been installed, the Encap package manager will create the following symlinks:
  • /usr/local/bin/sed -> ../encap/sed-3.02/bin/sed
  • /usr/local/man/man1/sed.1 -> ../../encap/sed-3.02/man/man1/sed.1
The normal user will have /usr/local/bin in his PATH and /usr/local/man in his MANPATH, so he will not even know that the Encap system is being used.
The technique is essentially compatible with RPM, but Encap goes so far as to define a package format, which probably is not compatible. If you like RPM, you might do better to simply follow the same convention.

Let me get this straight... (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#2595658)

The worst terrorist attack in recorded history occurred in September, and now we're involved in a WAR against Islam (against the holiest of Muslim scholars, the Taliban) during the holy month of Ramadan and you people have the gall to be complaining about the number of files you have in /usr/bin??? My *god*, people, GET SOME PRIORITIES!

The bodies of the thousands of innocent civilians who died (and will die) in these unprecedented events could give a good god damn about poorly placed files, your childish Lego models, your nerf toy guns and whining about the lack of a "fun" workplace, your Everquest/Diablo/D&D fixation, the latest Cowboy Bebop rerun, or any of the other ways you are "getting on with your life" (here's a hint: watching Cowboy Bebop in your jammies and eating a bowl of Shreddies is *not* "getting on with your life"). The souls of the victims are watching in horror as you people squander your finite, precious time on this earth playing video games!

You people disgust me!

Package Management (4, Insightful)

Fiznarp (233) | more than 12 years ago | (#2595659)

...makes this unnecessary. When I can use RPM to verify the purpose and integrity of every binary in /usr/bin, I don't see a need for separating software into a meaningless directory structure.

DOS put programs in different folders because there was no other way to tell what package the software belonged to.
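For example, on an RPM-based system (the package name and version shown in the output are made up):

    $ rpm -qf /usr/bin/gimp    # which package owns this binary?
    gimp-1.2.1-7
    $ rpm -V gimp              # verify size, checksum and permissions of every file it installed
    $ rpm -ql gimp             # list everything the package put on the system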

Re:Package Management (3, Interesting)

kramerj (161379) | more than 12 years ago | (#2595736)

And then you get into naming conflicts down the road. MS has this problem now, and is dealing with it partly with the newfangled "Private Packages" or whatever in XP - basically unsharing shared libraries. There DOES need to be separation that can be controlled more than it can be now, or we are going to see problems in the future. Have you ever installed a package and a file was already there? Were they the same file? Do you know? Which version? It's a bad idea to clump everything together. What we need is a path statement extension that basically says /usr/bin/*/ to allow everything one directory down, OR to allow packages to register their own paths in their install directories (i.e. a file that gets installed and then pointed to, saying "search here for executables as well"). Make it a config in /etc that points to these other little files that contain places to look, then at boot time enumerate that all out and build a tree of the executables. Fast and easy to manage.

Jay

Re:Package Management (2)

tjansen (2845) | more than 12 years ago | (#2595821)

No, you don't. It's the package manager's job to avoid any conflicts. Windows has these problems because each piece of software comes with its own installation program and does not know anything about the others.

Re:Package Management (1)

kaisyain (15013) | more than 12 years ago | (#2595779)

Then why do you separate bin and lib and man and etc? Just have a directory called /system and put everything in there. Everything just gets looked up by name anyway and that would simplify your PATHs, MANPATHs, and LIBRARY_PATHs.

Re:Package Management (1)

nicklott (533496) | more than 12 years ago | (#2595820)

So what's the point of having directories at all?
Why not just dump everything into the root? Why not re-work the filesystem to only have files, not directories?
Of course, after a while you'd find that you needed to put a tag on all the files that belonged to a package, just so your package manager could see which files belonged to which app...

Re:Package Management (1)

eskimoe (518369) | more than 12 years ago | (#2595823)

Careful with that. If you actually care about the integrity of your binaries (wherever they reside), use something like Tripwire; never trust rpm. dpkg is superior for package management anyway. Your package manager is responsible for keeping dependencies in shape; assuring file integrity on your box exceeds the capabilities of any package management system available to date. Nevertheless, I agree with you (and some other posters) regarding the initial topic.

because unix is unix (2, Insightful)

TheM0cktor (536124) | more than 12 years ago | (#2595661)

In the dark old unixish days, whenever you bought a bit of commercial software (remember that? buying? :) it'd install itself into /usr/local/daftname/ or /opt/daftname/ or somewhere. This meant there'd be a huge path variable to manage, which was a nightmare. The reason the Windows equivalent isn't a problem is that Windows is not command-line based - users access programs through a link in a start menu (gross oversimplification, but you get the idea). This simply doesn't translate to the command-line paradigm. So a simple answer: nice path variables, neat directory structures, usable command line interfaces - pick any two. ~mocktor

Linux From Scratch (4, Interesting)

MadCamel (193459) | more than 12 years ago | (#2595664)

This is _EXACTLY_ why I use LinuxFromScratch [linuxfromscratch.org] . You do not HAVE to use the package management system; you can install anything *just* the way *you* want it. X applications in /usr/bin? No way, Jose! (My apologies to anyone named Jose, I'm sure you are sick of hearing that one.) /usr/X11 it is! If you are not happy with the standards, make your own; it just takes a little time and in-depth knowledge.

Nah... (1)

LinuxGeek8 (184023) | more than 12 years ago | (#2595678)

Well, if he thinks the windows installs are clean, then let him just install 1000 programs, and deinstall them.
Then check how much space you used before and after, and just start to panic.

Some Windows applications have become lax at this and started installing into the "windows" directory.

And I thought all Windows programs did this.

In a way I like to have all my programs in some /bin dir (/bin, /usr/bin, etc).
And if I want to know what program a certain file belongs to, I just do an rpm -qf on the file.

He rants a bit about RedHat, but I assume he means more Unixes than RedHat alone.

Re:Nah... (1)

keath_milligan (521186) | more than 12 years ago | (#2595841)

Well, if he thinks the windows installs are clean, then let him just install 1000 programs, and deinstall them. Then check how much space you used before and after, and just start to panic.

Aside from apps that have outright broken uninstalls, there is a semi-legitimate reason for this: many Windows applications install shared DLLs like msvcrt, mfcxxx, etc. - once these are installed, you aren't supposed to remove them (and there is no good reason to).

Some Windows applications have become lax at this and started installing into the "windows" directory.

And I thought all Windows programs did this.

Not since the Windows 3.x days.

Response (3, Insightful)

uslinux.net (152591) | more than 12 years ago | (#2595679)

You have to use the package manager.


And you should, normally. If your system installs binutils as an RPM, DEB, or Sun/HP/SGI package, well, you _should_ use the package manager to upgrade/remove. After all, if you don't, you're going to start breaking your dependencies for other packages. That's why package managers exist!


In some respects, Linux is better than many commercial unices. SGI uses /usr/freeware for GNU software. Solaris created /opt for "optional" packages (what the hell is an optional package? isn't that what /usr/local is for?!?!) At least all your system software gets installed in /usr/bin (well, unless you're using Caldera, which puts KDE in /opt... go figure), and if you use a package manager like they were intended, it's easy to clean them up. The difference between Windows and Linux/Unix is that the Linux/Unix package managers ARE SMART ENOUGH not to remove shared libraries unless NOTHING ELSE IS DEPENDING ON THEM! In Windows (and I haven't used it since 98 and NT 4), if you remove a package and there's a shared library (DLL), you have the option of removing it or leaving it - but you never KNOW if you can safely remove it, overwrite it, etc.


I agree that there should be a new, standard directory structure, but I disagree that every package in the world should have its own directory. If you're using a decent package manager, included with ANY distro or commercial/free Unix variant, there's little need to do so.

Re:Response (1)

nochops (522181) | more than 12 years ago | (#2595723)

You can't blame this on Windows. Not this time, buster!

While this is true, it really has nothing to do with Windows. It has everything to do with the installer/uninstaller program.

*nix install / uninstall programs could be just as sloppy at removing files, but would you then call *nix sloppy? I think not.

The fact is, the responsibility of removing files during an uninstall rests on the uninstall program, and the schmo who wrote it, not the Operating System.

Re:Response (3, Insightful)

brunes69 (86786) | more than 12 years ago | (#2595744)

Ok, we all hate Windows, but spreading FUD is useless, and makes you look as bad as they do. Every Windows app I have _EVER_ uninstalled (and there have been a lot!) _ALWAYS_ says something along the lines of "This is a shared DLL. The registry indicates no other programs are using it. I will delete it now unless you say otherwise." This sounds pretty much like it knows what's being used and what isn't. Unless you get your registry corrupted, which wouldn't be any different from having your package database (RPM or dpkg) corrupted.

Re:Response (1)

vrt3 (62368) | more than 12 years ago | (#2595827)

Actually, the uninstaller says something like "I think no other programs are using it, but I recommend not to delete it. Do you want to delete it?". I don't know the exact text, but that's what it means. It doesn't sound like it knows what's being used and what isn't.

hmmmm.... (3, Informative)

Ender Ryan (79406) | more than 12 years ago | (#2595683)

My /usr/bin has ~1,500 files in it. A whole bunch of it is gnome stuff, because Slack 7.1 didn't put gnome in a completely separate dir. But then there is also all kinds of crap that I have absolutely no clue what it does. Just looking at some of the filenames I think I know what they are for, but I have other utilities on my machine that do the same thing.

So, I'd say yes, it probably is partly because of lazy distro package management, but then again some people might still use some of this stuff and expect it to be there.

On most new distributions I've seen, this is actually getting better. The latest Slack at least completely separates GNOME by putting it in /opt/gnome.

In any case, I think there are more important things to worry about, such as an all-purpose configuration tool, or at least lumping the existing tools together into one graphical management tool. You should be able to configure everything from sound/video to printers all in the same place.

the BeOS filesystem (5, Insightful)

codexus (538087) | more than 12 years ago | (#2595687)

The database-like features of attributes/index of the BeOS filesystem could be an interesting solution to the problem of the PATH variable.

BeOS keeps a record of all executables files on the disk and is able to find which one to use to open a specific file type. You don't have to register it with the system or anything, if it's on the disk it will be found. That makes it easy to install BeOS applications in their own directories. However, BeOS doesn't use this system to replace the PATH variable in the shell but one could imagine a system that does just that.

Everyone's guilty, noone has a solution. (3, Insightful)

Haeleth (414428) | more than 12 years ago | (#2595694)

This is somewhat parallel to the situation common in Windows, where every new application tries to place its shortcuts in a separate folder off Start Menu/Programs. It's common to see start menus that take up two screens or more, whereas everything could be found much faster if properly categorised. MS made things worse in Win98 by having the menu nonalphabetical by default.

Limiting bad organisation to Red Hat is silly. The only Linux distros I've tried are Red Hat and Mandrake, both of which are equally poor in this regard. Nor, I have to say, does the FSS make it any easier to organise a hard drive properly. Is the /usr/local distinction useful, for example? Wouldn't it make more sense to have a setup like /usr/apps, /usr/utils, /usr/games, /usr/wm, and so on - to categorise items by their function, rather than by who compiled them?

The whole /home thing is equally confusing to a Windows migrant. Yes, *nix is a multi-user OS. But is that a useful feature for the majority of home users? Providing irrelevant directories is a sure-fire way to confusion.

It's impossible to have a perfectly organised hard disk, of course. You can't fight entropy.

Why when you have a package manager? (1)

Chanc_Gorkon (94133) | more than 12 years ago | (#2595695)

You know, I hate it too, but hey, at least with Linux you have a package manager. With Windows you don't have that! Also, KDE, GNOME and others are ALL dependent on shared libraries, and you sure as heck don't want 40 copies of the libraries for all of the programs you run! Also, even if a program is in a subdirectory under /usr, isn't everything in the path when you include /usr in your statement? /usr is the parent of everything underneath it; if /usr is in the path, then so are its children. At least I think that's the way it works. Anyway, with package managers such as Debian's apt, this is moot! Who cares? Why would I delete it the hard way when I can press a button (in the graphical tool) or do an apt-get remove package?

Re:Why when you have a package manager? (0)

Anonymous Coward | more than 12 years ago | (#2595757)

I think you will find that Windows does have this.

Regarding paths... (2)

mindstrm (20013) | more than 12 years ago | (#2595829)

No.. subdirectories are NOT included.
The search path ($PATH) is just a list of explicit directories.

I don't see what all the fuss is about though...
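A quick way to convince yourself (the exact error wording varies by shell):

    $ sh -c 'PATH=/usr; ls'
    sh: ls: command not found    # /usr/bin is not searched just because /usr is in PATH
    $ sh -c 'PATH=/usr/bin; ls'  # works: each bin directory must be listed explicitly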

Ah, yes... (5, Funny)

Corgha (60478) | more than 12 years ago | (#2595698)

/opt/LINWgrep/bin/grep
/opt/LINWsed/bin/sed
/opt/LINWdate/bin/date....

Why? (5, Insightful)

DaveBarr (35447) | more than 12 years ago | (#2595719)

The one thing this guy fails to answer is "why is it bad that I have 2000 files in /usr/bin?". There are no tangible benefits I can see to splitting things up, other than perhaps a mild performance gain, and satisfying someone's overeager sense of order.

Failing to answer that, I think his whole discussion is pointless.

Blaming it on laziness or on not wanting to muck with PATH is wrong. Managing your PATH is a real issue, something an administrator with any experience should understand. In the bad old days we came up with ludicrous schemes that people would run in their dot files to manage users' PATHs. I'm glad those days are over. Not having to worry about PATH is a tangible benefit. Forcing package maintainers to use a clear and concise standard on where to put programs is a tangible benefit.

Perhaps I'm biased because these past many years I've always worked with operating systems (Solaris, Debian, *BSD) that have package management systems. I don't care where they get installed, as long as when I install the package and type the command it runs. This is a Good Thing.

Re:Why? (1)

Hektor_Troy (262592) | more than 12 years ago | (#2595791)

why is it bad that I have 2000 files in /usr/bin?

Hmm ... lemme see if I can answer that, even though I'm no linux-buff (I use FreeBSD at home, but I suck at that as well).

If I want to be sure that I have removed a program completely, it's impossible to be sure, since there are now 1999 files in /usr/bin. "But the package remover takes care of that." Are you sure? If it's even close to the standard of the Windows uninstaller, it'll leave a ton of files lying around ... just in case.

Compare it to your messy bedroom (if you take offence, compare it to my messy bedroom instead): even though I know where everything is (more or less), it's still a mess and it takes me some time to find all my dirty laundry. With a messy /usr/bin, it takes a lot of time to find the one ini-file you need to modify. Is it called xyz, xzy, yxz, yzx, zxy or zyx?

I'm sure people more skilled in linux/bsd/whatever can come up with better reasons than this, but you even came up with an answer yourself: "satisfying someone's overeager sense of order".

You obviously don't like order and just have everything placed in / ... right?

Read the article, THEN post. Please? (0, Informative)

Hektor_Troy (262592) | more than 12 years ago | (#2595721)

Most people haven't read the article it seems. Allow me to copy the follow-up:

A few followups
The response to this commentary has been large and I've gotten a ton of emails (mostly positive). A few things I think I should clarify. First of all, this seems to only be an issue on RH-based systems - many Slackware and SuSE users emailed me to say that their systems try to do the right thing. Second, a few angry people questioned my qualifications to make the above commentary, and one person even called me a novice! Many people know who I am and that I've been involved in Linux for years, but I figure since most editorials state the author's experience I might as well, too. I'm a Unix and Windows developer, have certifications in HP-UX Systems Administration and Tru64 cluster management (TruCluster), and have been either a Unix admin or a developer since college. I've worked on free software for about 3 years and have been a Linux user since the 0.9x days. Last of all, a few users say I should just use RPM, usually stating something along the lines that I'm stupid and don't know how to use it. Nothing could be further from the truth: I have a lot of experience with RPM, both as a user and from creating quite a few RPMs for Linux distributions in the past. Just because you have a package manager is no excuse for sloppy and lazy directory management.

Depends on what you do... (1)

Mudge Pinkerton-Bott (529980) | more than 12 years ago | (#2595731)

While I would be the first to admit that I hate having to hunt through screeds of files in a single directory by hand, most of us who operate *nix boxes of one colour or another on the desktop have far too many files to digest easily without recourse to some kind of package manager, whether it be rpm, apt or whatever. I think only servers can really afford the luxury of separate directories for everything nowadays.

Use /usr/local for add-ons, keep /usr clean (4, Interesting)

Baki (72515) | more than 12 years ago | (#2595733)

~> ls /usr/bin | wc -l
403
~> ls /bin | wc -l
36
~> ls /sbin | wc -l
91
~> ls /usr/sbin | wc -l
220
~> ls /usr/local/bin | wc -l
796

This is FreeBSD, which installs a relatively clean OS under /usr and puts all extra stuff in /usr/local (sometimes the executable is in /usr/local/bin, sometimes in /usr/local/<packagename>/bin).

I like that much more; it is the old UNIX way of separating the essential OS from optional stuff. It really is a pity that most Linux distros dump everything directly in /usr.

As for my slackware, I installed only the minimum, and roll my own packages for everything I consider not to be 'core Linux'; all these packages go under /usr/local. It can be done, and keeps things tidy and clean.

Re:Use /usr/local for add-ons, keep /usr clean (1)

ThatDamnMurphyGuy (109869) | more than 12 years ago | (#2595807)

This is one of the reasons I switched from Linux to FreeBSD for now while I'm still learning.

The directory structure just seemed cleaner.
Now, with a grain of salt, other Linux distros may be just as close. I just seemed to bond with FreeBSD quicker than RedHat, Mandrake, and SuSE because of the directory structure, and a FreeBSD "minimal" install was just that, whereas some of the others' "minimal" installs still put lots of stuff in by default.

Re:Use /usr/local for add-ons, keep /usr clean (1)

Daniel Serodio (74295) | more than 12 years ago | (#2595822)

Package managers (or installation programs) shouldn't put anything under /usr/local; that's where programs compiled by hand should go. In fact, I think making a mess out of /usr/local is even worse than making one out of /usr.

Have a directory standard for applications. (2)

MongooseCN (139203) | more than 12 years ago | (#2595740)

Have a standard directory structure for every application. Put all the applications in /opt then require every application to have the subdirectory /bin so if you want to find the binaries of all applications you look through all the /opt/[app name]/bin directories. You could also have other dirs like /opt/[app name]/lib for libraries, etc... You don't need to know the specific name of each application to search all the /bin dirs, you just open /opt and get a list of the directories, then append /bin to all the names and try and open those, then search in those for the binaries.

This keeps all the application files in one directory. If you want to remove an application, you just rm -rf that one directory. Upgrading applications is much simpler since you just point to that one dir and put the files there. You can also have multiple versions of an application installed just by renaming their root directory.

Applications shouldn't spread themselves all over the system; they should be placed in one spot with a specific directory structure and be modular with respect to the rest of the system.

Tradeoffs/union fs (2, Insightful)

apilosov (1810) | more than 12 years ago | (#2595741)

Here, the tradeoff is being able to quickly determine the files belonging to a particular package/software vs time spent managing PATH/LD_LIBRARY_PATH and all sorts of other entries.

Also, the question is how should the files be arranged? By type (bin, share/bin, lib, etc) or by package?

In Linux (redhat/FSSTD), the emphasis was placed on arranging files by type, and the file management was declared a separate problem with rpm (or other package managers) as a solution.

There is another solution which combines best points of each:

Install each package under /opt/packagename. Then use unionfs to join all the /opt/packagename trees under the /usr tree. Thus you will still be able to figure out which package has which files without using any package manager, but at the same time you get a unified view of all installed packages.

Unfortunately, unionfs never worked on Linux, and on other operating systems it's very tricky. (Such as: how do you ensure that underlying directories will not have files with the same name? And if they do, which one will be visible? What do you do when a file is deleted? etc.)

Translucent file system (5, Interesting)

Pseudonym (62607) | more than 12 years ago | (#2595743)

Even better would be if Linux had a translucent file system. Simply mount all the path directories on top of each other and let the OS do the rest.

For the uninitiated, a translucent file system lets you mount one filesystem on top of another filesystem, the idea being that if you tried to open a file the OS would first search the top filesystem, then the bottom one. In conjunction with non-root mounting of filesystems (e.g. in the Hurd) it removes the need for $PATH because you can just mount all the relevant directories on top of each other.
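Linux did eventually grow exactly this kind of union mount; as a rough illustration of the idea using much-later overlayfs syntax (directory names hypothetical):

    # read-only union of two per-package bin directories into one view
    mkdir -p /merged/bin
    mount -t overlay overlay -o lowerdir=/opt/foo/bin:/opt/bar/bin /merged/bin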

Re:Translucent file system (3, Funny)

brunes69 (86786) | more than 12 years ago | (#2595761)

Wait until KDE 3 / GNOME 2 come out with Xrender support, and we can all have translucent filesystems!

HAR HAR!

Re:Translucent file system (1)

Vairon (17314) | more than 12 years ago | (#2595838)

So, does that mean I would need a separate partition for X, GNOME, KDE, Sawfish, etc.?

I wish unix had this... (3, Interesting)

Steve Mitchell (3457) | more than 12 years ago | (#2595752)

I wish Unix/Linux had a mechanism where a directory could be marked executable, and executing the directory would internally call some default dot file (such as .name_of_directory) within the directory, with some environment variable (like $THIS_PATH) set to the directory and passed to the application process.

Maintenance for applications like these would be a no-brainer. Just move the directory and all the associated preference files and whatnot travel with the app.

-Steve
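Nothing stops you from faking this in user space; here's a toy sketch (the dot-file naming convention and the runapp name are invented for illustration):

    #!/bin/sh
    # runapp: "execute" a directory by running its .<dirname> entry point
    appdir=$(cd "$1" && pwd) || exit 1
    shift
    THIS_PATH=$appdir
    export THIS_PATH
    exec "$appdir/.$(basename "$appdir")" "$@"

So 'runapp /opt/gimp' would run /opt/gimp/.gimp with $THIS_PATH pointing at the directory.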

Re:I wish unix had this... (0)

Anonymous Coward | more than 12 years ago | (#2595818)

Hear hear. Fantastic idea - it'd be just like RiscOS on the Acorn Archimedes - any directory with a ! as the first character would get turned into an "application package", and then "running" that directory would run the file !run inside it. And setting the equivalent of $PATH was just as easy (obey$dir, if memory serves me correctly)

I'd LOVE this facility on UNIX...

Linux is just different, is all... (1)

wrinkledshirt (228541) | more than 12 years ago | (#2595753)

One of the things about the proliferation of Windows is that people get used to a filesystem that is generally organized by individual software entities. On Linux, it's organized by software type. Generally. Of course the rule gets broken both ways, and the Windows directory throws its own set of curveballs, but for the most part, that's the way it goes.

The latter basically means that your source isn't always in the same place as the executable which isn't always in the same place as the libraries which isn't always in the same place as the documentation which isn't always in the same place as the user .conf files...

Tomayto, tomahto. You get used to it. It would help if all the vendors got together and enforced the LSB on themselves, so that a common way of doing things with the filesystem would become a practiced standard in its own right.

But really, in the end, you just get used to it, and Linux has usability problems in other areas that the community should probably look into before worrying about this one.

Related to yesterday's story (5, Interesting)

vrt3 (62368) | more than 12 years ago | (#2595760)

I think the fundamental problem here is related to yesterday's story about new user interfaces [slashdot.org] . It's a problem of how and where to store our files. Regarding applications, there are two ways to do it: you can store all files (binaries, config files, man pages, etc.) of the same application in the same directory, or you can store all files of the same type from different applications in their respective directories (all config files in /etc, man pages in /usr/share/man (I think), etc.).

Both approaches have their advantages. The problem with hierarchical file systems is that we have to choose one of them. I would love to see a storage system where we can use both ways _at the same time_: a system that groups files depending on the relationships they have, such that 'ls /etc' gives me all config files for all apps, and 'ls /usr/local/mutt' shows me all mutt-related files, including its config file(s).

I have no idea how to implement such a beast. I'm thinking about an RDBMS with indices on 'filetype' and 'application', but I would love to see something much more flexible. All pictures should be accessible under ~/pictures and subdirectories, and all files relating to my vacation last year under ~/summer2000. Files relating to both should be in ~/pictures/summer2000 _and_ ~/summer2000/pictures.

To a certain extent, this can be done via symlinks, but it should be much easier to deal with. You shouldn't have to do much manual work.

Keeping one application's files in one place (5, Informative)

tjwhaynes (114792) | more than 12 years ago | (#2595765)

The unix system doesn't really dump all the files in /usr/bin. These are, almost without exception, executable files. For each executable, support files are usually installed into one or more directory trees, such as /usr/share/executable_name/. The main benefit of having all the main binaries in one place (or two - I usually try to leave system binaries in /usr/bin and my own installations in /usr/local/bin) is convenience when searching paths for the binaries.

However, this paradigm is pretty ugly if you are browsing through your files graphically. It would be nice if each application/package installed into one directory tree, so you could reorganise the system simply by moving applications around. For example,

/usr/applications/
/usr/applications/games/
/usr/applications/games/quake3/ ... this dir holds all quake 3 files ...
...etc...
/usr/applications/graphics/
/usr/applications/graphics/gimp/ ... this dir holds all gimp files ...
...etc...

If this appeals to you, you might like to check out the ROX project [sourceforge.net] . This sort of directory tree layout was the standard on the Acorn Risc OS and made life extremely easy for GUI organisation. It makes a lot of sense to use the directory tree to categorise the apps and files.

Cheers,

Toby Haynes

RiscOS... (4, Interesting)

mirko (198274) | more than 12 years ago | (#2595767)

In RiscOS, applications are directories which contain several useful files (besides the app binaries, conf or data files):
  • !Sprites[mode] contains the icons to be used for the app and for whichever file types are associated with it
  • !Boot contains directives (associations, global variables, etc.) to be executed the first time the Filer window that contains this app is opened (i.e. the app is "seen" by the Filer)
  • !Run describes the action to be taken on a double-click on the app icon

There's also a single shared modules directory in the System folder.

This system is at least 10 to 15 years old (not sure Arthur was as modular, though) and has certainly proved to be an excellent way to deal with this problem...

Re:RiscOS... (0)

Captain Pedantic (531610) | more than 12 years ago | (#2595828)

ROX-Filer [sf.net] does something like this, which is really what you should be using to manage files if you liked RiscOS.

The author has written a freshmeat article [freshmeat.net] explaining all this in more detail.

Laziness is a virtue (1, Troll)

scott1853 (194884) | more than 12 years ago | (#2595768)

Linux developers are geeks. They know that the only people that use their products are going to be geeks. Hence the end users will understand the laziness.

Of course I can't help but think that too much laziness is keeping developers from working towards making Linux a desktop competitor.

Not flamebait, not a troll, just a comment.

So what...? (1)

dabadab (126782) | more than 12 years ago | (#2595770)

I've got 1200+ files under /usr/bin. That has caused no headache for me so far.

Anyway, if every package had its own directory, there would be nearly 600 dirs. How would that be any better?

And, I should add, /usr/bin is not where an "application" resides. It is the place where the executables are. If you are talking about a whole application, it will likely have files in other places too: /var, /etc, /home (for user-specific configuration), etc.

So this guy fails to state what the problem actually is (having a lot of files in a single dir is not a problem: the fs can handle that), and it is absolutely not clear how having separate dirs would give any advantage over the current situation - though it is absolutely clear what the disadvantages would be (having fscking long PATHs).

I would moderate him -1, drunk babbling, if I could ;)

Um, so? (3, Informative)

bugzilla (21620) | more than 12 years ago | (#2595771)

Much better to have a few thousand files in one dir than to have so many dirs that need to be in your $PATH that some shells will barf.

For instance, the POSIX standard (I believe) is 1024 characters for $PATH statements. That's a minimum. My users at work sometimes have need for much longer $PATH's. Some OS vendors say, ok, 1024 is the minimum for POSIX compliance, that's what we're doing. Some, like HP-UX (believe it or not) have increased this at user request to 4K.

In any case, this all seems pretty petty. It's not like our current and future filesystems can't handle it, and package managers are pretty good and know what they put where.

Six of one... (2, Insightful)

Marx_Mrvelous (532372) | more than 12 years ago | (#2595780)

Half a dozen of the other. Of course there are pros and cons to both ways: having all executables in O(1) locations makes finding programs O(1) and keeps PATH at O(1) length, while having one dir/"folder" per program (O(X) directories) means O(X) search time for a particular program and O(X) entries in your PATH. On the other hand, finding and deleting entire packages becomes much harder if not all filenames belonging to that package are known. Personally I think it doesn't matter either way.

UNIX is a mess in multiple ways (4, Troll)

jilles (20976) | more than 12 years ago | (#2595783)

This is only part of the problem, and characteristic of the way unix has evolved. The whole problem is that there are no standards, just conventions which most unix programmers are only partly aware of. I imagine the whole reason for putting all binaries in a single directory was that you then only have to add one directory to the path variable. In other words, because of genuine laziness you have around 2000 executables in your /usr/bin directory. Of course, adding all 2000 program directories to the path is not the right solution either (that would be moving the problem rather than solving it). Obviously the path variable itself is not a very scalable solution and needs to be reconsidered.

To sum it up, UNIX programs all have their own sets of parameters, their own semantics for those parameters, their own config files with their own syntax. Generally a program's related files are scattered throughout the system. Just making things consistent would hugely improve the usability of unix and reduce system administrator training costs. Most of the art of maintaining a unix system goes into memorizing command-line parameters, configuration file locations and syntax, and endless man pages. Basically the ideal system administrator is not too bright (after all, it is quite simple work), can work very precisely, and has memorized every man page he ever encountered. The "not too bright" part is essential because otherwise he'll get a good job offer and be gone in no time.

Here's a sample better solution for the problem (inspired by Mac OS X packages): give each app its very own directory structure with e.g. the directories bin, man, etc for binaries, documentation and configuration. In the root of each package, specify a meta-information file (preferably XML based) with information about how to integrate the program with the system (e.g. commands that should be in the path, menu items, etc.). Standardize this format and make sure that the OS automatically integrates the program (i.e. adds the menu items, adds the right binaries to a global path, integrates the documentation with the help system). Of course you can elaborate greatly on these concepts, but the result would be that you no longer need package managers except perhaps for assisting with configuration.

Folder? (1)

Griim (8798) | more than 12 years ago | (#2595784)

Well, traditionally under both Unix and DOS you used subdirectories to group related files. So Microsoft Office got its own folder, CDE got its own folder, X Windows got its own folder, Oracle got its own folder...

What's a folder?

I read this last night... (2, Interesting)

cthulhubob (161144) | more than 12 years ago | (#2595785)

I came away thinking "this man is insane".

  1. He claims DOS had a better way of organizing applications. This is a red herring. I don't want to organize my applications. Ever. I want to organize my data. I don't remember many applications in DOS that were compatible with the same type of data. If there had been, the limitations of the DOS structure would have been readily made apparent. First, CD into the directory where your audio recording utility is and make a .wav file. Then, move the .wav file into the directory where your audio editing utility is and edit it. It works, but why not keep the data in one place and run programs on it as you see fit without regard for their location on your hard drive, and without having a 10-second seek through your PATH variable?

  2. Besides which, DOS had c:\msdos50 (or whichever version you used). That was DOS's variation on /bin. Ever look in that directory and attempt to hand-reduce the number of binaries in it to save disk space? I did. A package management system would have made that doable.

  3. You can have all the localized application directories you want in /usr/local. The point of /usr/local is to hold larger packages which are local to the system. (hmm... /usr/local/games/UnrealTournament, /usr/local/games/Quake3, /usr/local/games/Terminus, /usr/local/games/RT2...) And as a bonus, thanks to the miracle of symbolic links you can have your cake and eat it too - as long as the application knows where the data files are installed you can make a symlink of the binary to /usr/local/bin and run it without editing your PATH variable too! Isn't UNIX grand?

Don't install so much stuff! (3)

ivan256 (17499) | more than 12 years ago | (#2595788)

How many of those 1500 binaries do you run, hmm?

Many distributions install lots of packages you don't need nowadays. Uninstall some, or switch to a more minimalist distribution. Try installing Debian with only the base packages. Then whenever you need a program you don't have, apt-get it. It'll make for an annoying few weeks perhaps, but at the end you'll have a system with just what you need on it. I'll bet you end up with only around 600 binaries (unless you install GNOME... that's like 600 binaries on its own).

What does it matter anyway? If you have 1500 programs, it's no better to have them in their own directories than to have them all in one place. Also, it's not like you're dealing with all of them at once.
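For instance (the package name is arbitrary):

    $ apt-get install mutt         # pull in a program, plus its dependencies, only when you need it
    $ apt-get --purge remove mutt  # and take it away again, configuration files included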

Hierarchy (3)

Waffle Iron (339739) | more than 12 years ago | (#2595792)

The root problem for all of this seems to be the limits of a hierarchical data organization such as a file system. The debate is whether the hierarchy should be organized by application (as the article proposes), by file type (all binaries in 'bin'), or by some broad attribute of the application ('/usr' vs '/usr/local', 'bin' vs 'sbin').

There probably is no way to solve all of the issues simultaneously in one hierarchical scheme. Symlinks can help because they crosslink the tree. Package managers add a more sophisticated database of relations. These relations are much more useful, but unfortunately are accessible only through the package manager program.

All in all, though, it seems that organizing by package makes the most intuitive sense, and the helpers like package managers should be responsible for figuring out how to run the app when you type it on the command line.

solution is obvious.... (1)

DrSpoo (650) | more than 12 years ago | (#2595793)

Instead of having 2000 programs that do only very specific things, have 1 program that can do everything. Think Microsoft Word for instance...

QED.

A partial solution (1)

keath_milligan (521186) | more than 12 years ago | (#2595796)

I think one of the big problems here is that all kinds of stuff ends up in /usr/bin, /usr/local/bin and other "catch-all" locations that really don't need to be there.

I'd prefer to see only commands that I am likely to use from a shell in /usr/bin - everything else should go somewhere else. Major applications like Mozilla need to be in their own directories - you shouldn't need to have these in your path since you're likely to launch them from buttons on the panel.

Microsoft is just now getting around to addressing a similar issue in Windows - for years, developers have dumped application-specific DLLs in \winnt\system32 for no good reason. Now they are strongly discouraging this.

This issue stems primarily from the simplistic path-searching mechanism shared by pretty much every OS out there. Either you dump tons of crap in standard locations like /usr/bin or you have a PATH variable a mile long. Perhaps there is an opportunity for a technical advancement here...

Why we have common directory paths in UNIX (1, Informative)

Anonymous Coward | more than 12 years ago | (#2595803)

There are probably some very valid reasons for the way UNIX does things. For example, many application binaries and related files are shared between different applications because software is often cooperatively developed between vendors rather than in isolation, the way Windows stuff is typically developed. As a result, sharing of common resources, libraries, etc. is much easier to achieve.

Also, most complex sites may use various kinds of partial NFS mounts within a file system. For example, all of /usr/share may come off a single master NFS server, but /usr/bin might come from a CPU-architecture-specific machine. To do this with /opt/packagename would mean all kinds of NFS 'micro-mounts' for portions of each application's tree. Having a common set of directory trees for applications, rather than package-specific ones, makes it much easier to organize role-specific network mounts.

Of course, most of the current package management systems do not seem to understand the concept of role-specific filesystem mounts. It would be nice if I could install an RPM's /usr/share portion on my master NFS server without it also installing bin, etc., and install the /usr/bin portion on workstations or my CPU-specific NFS servers without it installing /usr/share and such. Having a master config file in /etc that explains this kind of usage to the package management system would make that much easier to accomplish.

OSX as a guide? (0)

Anonymous Coward | more than 12 years ago | (#2595812)

MacOS X seems to go about this by using bundles and frameworks. Applications have the ".app" extension and are actually directories (this is made transparent to the user). Within the app is a standard directory structure holding the application's components. Libraries (aka frameworks) do this in a similar fashion; in fact, frameworks encapsulate different library versions and API documentation within a single framework. Ars Technica has a nice description here [arstechnica.com]

the problem is deeper (2)

Karmageddon (186836) | more than 12 years ago | (#2595824)

1. package managers should make it easy to move things around. I should be able to install the latest perl-xxx.rpm in a test location, test my scripts against it, and then reinstall it in the canonical place.

2. this needs to include all the files in /etc so app installers need to support flexible package management. Also note, the #!/shebang is totally broken in this sort of environment.

3. "the canonical places" (/usr, /etc, etc. :) should be a family of canonical places. The sysadmin group might not want to upgrade their perl scripts at the same time as the dbadmin group. decoupling their interdependency will lead to much more flexibility and quicker overall upgrading.

4. we can achieve this best if / is no longer / but is instead /root so there could be a /root1 and /root2 . Think of this, one file system containing two different distros that don't wrassle with one another.

Do not evaluate this on whether you think it's a good idea. The point is that software allows soft parameterization, reentrancy, soft configuration, etc. So why can't we have it? Programmers need to stop hard-coding shit, binding locations to one place.

I'd love to upgrade my workstation from RedHat 7.1 to RedHat 7.2 by installing onto the same partition without trashing the old. Then, over the course of the week I could work out the kinks and delete the old, knowing that at any time I could reboot the old to send a fax or whatever. There are 1000s of corporate uses for this type of environment too... how many times have you heard "we're taking the mailserver down to upgrade it overnight" and then heard "um... it didn't come back up..."

File Systems We Don't Need No Stinking File Systems (1)

oldstrat (87076) | more than 12 years ago | (#2595831)

USE FORTH - REAL FORTH, not some hack that sits on top of a host OS. FORTH is the OS, the Language, and the File System.

I won't go so far as to say FORTH is God, but I will bet that God uses FORTH.

grep (0)

Anonymous Coward | more than 12 years ago | (#2595837)

As long as I can use grep, I don't care how many files are in /usr/bin.