
Petreley On Simplifying Software Installation for Linux

timothy posted more than 11 years ago | from the but-gentoo dept.

Linux 310

markcappel writes "RAM, bandwidth, and disk space are cheap while system administrator time is expensive. That's the basis for Nicholas Petreley's 3,250-word outline for making Linux software installation painless and cross-distro." The summary paragraph gives some hint as to why this isn't likely to happen anytime soon.


first spoiler (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5873900)

trinity dies


Nice to see Nick back in the swing (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5873902)

Nice to see Nick back in the swing of things, after his ... accident.

There are some good points but... (-1, Interesting)

Anonymous Coward | more than 11 years ago | (#5873903)

I think evolution is better than revolution.

I would say backward compatibility is the No. 1 issue with package managers. A. They don't handle things as well as they could for software that does provide backward compatibility, and B. The people working on the libraries and Linux distributions need to work harder on making sure libraries are backward compatible or can coexist with each other. To me this means a symbolic link that can continually be updated to point to the current version of a library until it is deemed necessary to break binary compatibility; at that point you add a new version of the library with a new symbolic link, which works until the next time it is deemed necessary to break backward compatibility.

The Debian and Redhat package managers (or their associated tools) both provide a way to search for a file and identify the package that includes it. A separate utility, usable by the package-management front ends, could provide a fallback method of finding the package that includes a file and selecting it for installation to resolve a missing dependency.
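As a sketch of that fallback, the existing lookups of both package managers could be wrapped behind one check; this is a hypothetical resolver, and the `ls` lookup is just a convenient example:

```shell
# Which package owns a given file? Both major package managers can
# already answer this, which is what a front end's fallback resolver
# would build on. ("ls" is just a convenient example file to look up.)
target=$(command -v ls)
if command -v dpkg >/dev/null 2>&1; then
    dpkg -S "$target"      # Debian tools: e.g. "coreutils: /usr/bin/ls"
elif command -v rpm >/dev/null 2>&1; then
    rpm -qf "$target"      # Red Hat tools: prints the owning package name
else
    echo "no dpkg or rpm on this system"
fi
```

A real front end would then feed the reported package name straight back into its install step.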

An additional fallback method could be created in the package-management front end that would allow the user to specify the location of a file or group of files and indicate that trinity dies at the end of the matrix reloaded these should be installed or used from their current location, and then update the package database and, if needed, edit the path.

I also believe that package managers should have a user mode that installs packages in and only affects settings in the users home directory.

Matrix Reloaded spoiler in parent (5, Informative)

jeroenvw (566364) | more than 11 years ago | (#5873968)

WARNING: Stupid Matrix reloaded spoiler in parent, in the middle of the 4th paragraph

Please heed this warning (1)

Sits (117492) | more than 11 years ago | (#5874160)

Your post came too late for me :( Looks like the moderators are listening as it is falling down the page. I guess that's what I get for reading new stories...

Re:Please heed this warning (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5874311)


Reason: Don't use so many caps. It's like YELLING.

Re:There are some good points but... (1)

You're All Wrong (573825) | more than 11 years ago | (#5874282)

I think evolution is better than revolution.

Where possible, yes. One thing that stuck out in the article as much as your use of "revolution" above was the following:
The software-installation process should be distribution-agnostic
i.e. All distributions must understand the new installation process.
i.e. all distributions must change what they currently do to conform to the new one true installation mechanism.
i.e. Get rid of incompatibilities by making everyone change.

That ranks alongside Stroustrup's naming of C++ header files with no extension, so that all of the ".h", ".H", ".hpp", ".hh", ".hxx",
".h++" camps were left in an equal state of needing to change.

However, on the whole it looks like most of what he says could be adopted slowly, on a piecemeal basis, and perhaps only partially, so that even the distributions that don't want to conform can still minimise their differences.


Word (5, Funny)

Anonymous Coward | more than 11 years ago | (#5873904)

"That's the basis for Nicholas Petreley's 3,250-word outline for making Linux software installation painless and cross-distro."

Was it necessary to include the word count? It's hard enough to get slashdotters to read a small article and post intelligently; this can't help...

Re:Word (4, Interesting)

Anonymous Coward | more than 11 years ago | (#5874116)

The article is surprisingly dense for such a word count -- yet is easy to read.

Petreley is undoubtedly getting the hang of this "writing thing"... ;-)

Seriously, though, however smart and logical his conclusions are, one thing bothers me: the installation should be simplified, but "right", too.

I mean, there are other objectives besides being easy.

Last week I tried to install Red Hat 8.0 on a 75MHz Pentium with 32MB RAM (testing an old machine as an X terminal). It didn't work.
The installation froze at the first package -- glibc (it was a network installation) -- probably due to lack of memory (as evidenced by free et al.).

Why? It was a textmode installation. I know from past experience that older versions of Red Hat would install ok (I used to have smaller computers).

My suspicion is that Red Hat has become too easy -- and bloated. Mind you, I opted for Red Hat instead of Slack or Debian because of my recent experiences, in which RH recognized hardware better than the others.

I hope Petreley's proposed simplification, when implemented, takes size into consideration. The way it is now (using static libs, for instance), it seems to go the other way.

The article as a whole, though, presents neat ideas and is one of the best I've read recently.

Java (2, Informative)

RighteousFunby (649763) | more than 11 years ago | (#5873906)

Yes, installing software should be automated, especially in the case of Java. Anybody wanting to run LimeWire has to download a 20MB file, then mess around in a terminal. Not good. Though Synaptic is close to full automation...

painless installs on linux? (-1)

2 Dollar Sand Niggah (667666) | more than 11 years ago | (#5873908)

... we call this gentoo emerge! ;)

slashdot moderators are fuct in the head! (-1, Troll)

badl (546552) | more than 11 years ago | (#5873956)

what gives with the /. mods, the parent post is short, sweet, and to the point and gets modded -1. The post below gets modded +5, what gives? Are the moderators racists or something, just becuase I am a niggah?

Gentoo (Score:5, Insightful) by Tyler Eaves (344284) on Sunday May 04, @09:05AM (#5873929) ( emerge Doesn't get any simpler than that. Come back in a minute to 12 hours (Depending on the package), and *poof* new software. Ditto BSD ports. [ Reply to This ]

Autopackage comes to mind (5, Informative)

Simon (S2) (600188) | more than 11 years ago | (#5873911)

Autopackage comes to mind.

from the site:
* Build packages that will install on many different distros
* Packages can be interactive
* Multiple front ends: best is automatically chosen so GUI users get a graphical front end, and command line users get a text based interface
* Multiple language support (both in tools and for your own packages)
* Automatically verifies and resolves dependencies no matter how the software was installed. This means you don't have to use autopackage for all your software, or even any of it, for packages to successfully install.

Static linking problems (4, Insightful)

digitalhermit (113459) | more than 11 years ago | (#5873922)

Static linking might be useful as a workaround for the more esoteric distros, but it has its problems. For one, if you statically link your application, then anytime there's a security fix or change to the linked library you'll need to recompile the application, not just upgrade the library. This would probably cost more in administration time than upgrading a single library, since multiple applications may be dependent on that one library.

Re:Static linking problems (0)

Anonymous Coward | more than 11 years ago | (#5873938)

Exactly. Remember zlib?

Re:Static linking problems (1)

mattdm (1931) | more than 11 years ago | (#5874034)

Seriously. On my BU Linux 3.0 (Red Hat 9-based) system, rpm -q --whatrequires|wc -l gives me 172. But that's not all: some of those things are libraries themselves -- kdelibs, for example. And what's worse, were everything statically linked, I couldn't use a simple command like rpm --whatrequires to find where the code is used. So I'd basically have to rebuild the whole distro to be safe. Yeah, *that* reduces administrator hassle.

Re:Static linking problems (4, Interesting)

Ed Avis (5917) | more than 11 years ago | (#5874014)

Static linking is a seriously bad idea. Part of the job of a packager is to arrange the app so it doesn't include its own copies of packages but uses the standard ones available on the system (and states these dependencies explicitly, so the installer can resolve them automatically).

Take zlib as an example of a library that is commonly used. When a security hole was found in zlib a few months ago, dynamically linked packages could be fixed by replacing the zlib library. This is as it should be. But those that for some reason disdained to use the standard installed library and insisted on static linking needed to be rebuilt and reinstalled.

(OK I have mostly just restated what the parent post said, so mod him up and not me.)

Quite apart from the stupidity of having ten different copies of the same library loaded into memory rather than sharing it between processes (and RAM may be cheap, but not cheap enough that you want to do this... consider also the CPU cache).

A similar problem applies to an app which includes copies of libraries in its own package. This is a bit like static linking in that it too means more work to update a library and higher disk/RAM usage.

Finally there is a philosophical issue. What right has FooEdit got to say that it needs libfred exactly version 1.823.281a3, and not only that exact version but the exact binary image included in the package? The app should be written to a published interface of the library and then work with whatever version is installed. If the interface provided by libfred changes, the new version should be installed with a different soname, that is rather than It's true that some libraries make backwards-incompatible changes without updating their sonames, but the answer then is to fix those libraries.

Not just a linux problem (3, Interesting)

jd142 (129673) | more than 11 years ago | (#5874318)

This same problem occurs in the Windows world as well; it's often called DLL hell. Here's how it works on Windows. Say your program needs vbrun32.dll. You have a choice. You can put the dll in the same folder as the executable, in which case your program will find it and load the right dll. Or you can put it in the system or system32 directory, in which case your program and others can find it and load it. However, if vbrun32.dll is already loaded into memory, your program will use that one. I remember we used to have problems with apps only working if loaded in the right order, so that the right dll would load.

As with Linux, if there's a bug in the library you have to update either one file or search through the computer and update all instances. But, as with Linux, the update can mess up some programs; others might be poorly coded and not run with newer versions of the dll. I've seen this last problem in both Windows and Linux; it looks like the programmer did "if version != 3.001 then fail" instead of "if version < 3.001 then fail".
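The wrong-versus-right version check from that example can be sketched in shell, with 3.001 standing in as the hypothetical minimum version the app needs:

```shell
# Wrong vs. right version checks, as in the example above.
have=3.2      # version actually installed
need=3.001    # minimum the app requires

# Wrong: demand one exact version -- this breaks on every upgrade:
[ "$have" = "$need" ] && echo "exact match" || echo "exact-match check rejects $have"

# Right: reject only versions that are genuinely too old. GNU sort -V
# does a version-aware comparison, so the oldest of {need, have} tells
# us whether have >= need:
oldest=$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n 1)
[ "$oldest" = "$need" ] && echo "$have accepted: new enough"
```

The exact-match style is what makes otherwise-working programs fail after a routine library update.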

If everyone is forced to use the same library, you get these problems and benefits:

--1 easy point of update
--1 easy point of failure
--older software may not run with newer versions
--programmers may insist on a specific version number
--updates to the libraries can benefit all programs; if kde or windows gets a new file open dialog box, then all programs that link to the common library can have the newer look and feel by updating just one library.

On the other hand, if you let each program have its own, you get these problems and benefits:

--difficult to update libraries when bugs are found
--can run into problems if a different version of the library is already loaded into memory (does this happen with linux?)
--guarantee that libraries are compatible with your app
--compartmentalization; everything you need for an app is in its directory. Want to uninstall? Just delete the directory. No need to worry that deleting the app will affect anything else.
--no weird dependencies. Why does app X need me to install app Y when they clearly aren't related at all? The answer is shared libraries. Which is why many people like Gentoo and building from source.

Microsoft has waffled back and forth on the issue. Under DOS, everything just went into one directory and that was it. Windows brought in the system directory for shared DLLs. Now the latest versions of Windows are back to having each app and all of its DLLs in one directory.

Personally, I think compartmentalization is the key, provided we get some intelligent updaters. If libthingy needs to be updated, the install procedure should do a search and find all instances of the library, back up existing versions and then update all of them. This wouldn't be that hard to do.

Re:Static linking problems (1)

Beliskner (566513) | more than 11 years ago | (#5874026)

For one, if you statically link your application then anytime there's a security fix or change to the linked library you'll need to recompile the application,
Easily solved, all you have to do is,

1. Go to the bank and change $1 for 5,000,000 Indian rupees
2. Hire 1000 Indian programmers with the above currency
3. Tell the programmers to recompile all statically-linked applications with the new libraries
4. Hire an unemployed American programmer for $20,000 to translate the program from Hindi to English
5. Charge large corporations big $$$ for upgrading all their software
6. Profit!!! (Really)

Re:Static linking problems (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5874046)

1.) Put indians in a ship
2.) Sink the ship
3.) Don't use static libraries

Re:Static linking problems (1)

Speare (84249) | more than 11 years ago | (#5874057)

<AOL>Me too.</AOL>

The malformed zlib attack comes to mind. There are several slightly different static copies in the kernel, never mind the many static copies in an endless variety of programs. Red Hat Network was shitting errata for a week.

Re:Static linking problems (1)

Ed Avis (5917) | more than 11 years ago | (#5874091)

Hmm, the kernel has a good reason to statically link zlib, but the applications? Red Hat could have made sure they were dynamically linked against zlib when packaging them. This would have reduced the number of errata they had to issue later on.

Gentoo (4, Informative)

Tyler Eaves (344284) | more than 11 years ago | (#5873929)

emerge <package>

Doesn't get any simpler than that. Come back in a minute to 12 hours (depending on the package), and *poof* new software. Ditto BSD ports.

Re:Gentoo (5, Interesting)

sirius_bbr (562544) | more than 11 years ago | (#5873961)

I tried it (gentoo) some time ago. After two weeks of frustration I moved back to debian.

For me it was more like:
1. emerge
2. come back in 8 hours and then:
a. see a whole bunch of compilation errors,
b. dependencies were not sorted out correctly, so nothing works,
c. a combination of the above

I especially liked (still do) the optimization potential (whereas Debian is stuck at i386), but it didn't work for me.

Re:Gentoo (0)

Anonymous Coward | more than 11 years ago | (#5873998)

Well, it's really pretty much flawless these days. For instance, I'm running a development kernel with Andrew Morton's patchset and I still don't know how to patch the kernel ;-p

Very stable even with stupid optimisation flags etc. Perhaps you should grab a live CD and try it again ;-)

Seriously, 2.6 is going to be a bit special, especially for interactive performance and the 'wiggle' test. The future of Linux is going in this direction, as faster hardware will mean extremely short compile times... or just more bloat. Oh well.

Gentoo is not the only source based distro (2, Informative)

Drasil (580067) | more than 11 years ago | (#5874095)

In descending order of (my) preference:

Re:Gentoo (2, Insightful)

brad-x (566807) | more than 11 years ago | (#5874203)

This is typically a result of a technique known as 'skimming the documentation' and 'thinking you know how to do it yourself'.

People are too quick to blame the distribution (any distribution, even Debian) when something goes wrong.</rant>

Re:Gentoo (0)

Anonymous Coward | more than 11 years ago | (#5874283)

well you do see the gentoo zealots say on the net all the time that it's as simple as 'emerge whatever'

Re:Gentoo (1)

bwalling (195998) | more than 11 years ago | (#5873979)

Doesn't get any simpler than that

That's if you can get through the complexity of the install, which requires that you do everything yourself.

Re:Gentoo (1)

Tyler Eaves (344284) | more than 11 years ago | (#5873994)

The Gentoo doc is *VERY* good. I find it hard to believe someone has trouble with it. Yes, it's doing it by hand, but it walks you through it step by step.

Re:Gentoo (1)

the uNF cola (657200) | more than 11 years ago | (#5874017)

It's not that it's difficult to apply, just tedious. Assuming the redhat, freebsd, windows and mac osx installers installed and set up how you like, their interfaces are a lot simpler than gentoo's or debian's.

Re:Gentoo (3, Interesting)

KPU (118762) | more than 11 years ago | (#5874133)

Assuming redhat, freebsd, windows and mac osx installers installed and setup how you like, the interfaces are a lot simpler than gentoo or debian's.
I have installed windows, redhat, and gentoo. Yes, windows and redhat have much prettier interfaces. However, I have spent countless hours trying to install windows and redhat because the install tried to do something I didn't want it to do and crashed.
Windows 2000: the box has IDE and SCSI drives. I wanted windows on the SCSI drive as C. I had to take the IDE drive out to get it to let me. I don't even know where to start installing windows 2000 on a box without a CD-ROM drive.
RedHat: Anybody ever try installing RedHat onto a new box using ReiserFS and network install when the card is listed but the module won't load? I gave up and installed a CD-ROM drive.
Gentoo's install does take a long time but I never had these problems. When I was selecting where to install, I just used /dev/sda* instead. Machine doesn't have a CD-ROM drive or network isn't supported? I made a nfsroot kernel, mounted root from another one of my gentoo boxes, and did the install from there.
Slackware has a similar install procedure (all console) but it doesn't compile everything like gentoo.
So the point is, "Assuming redhat, freebsd, windows and mac osx installers installed and setup how you like" is a very big assumption.

Re:Gentoo (2, Interesting)

the uNF cola (657200) | more than 11 years ago | (#5874184)

But that's not the point. The point is, the interface, not the process behind the interface, needs to be intuitive.

I remember back in the BBS days, it took me an hour or two to realize that...

Continue (Y/n):

Meant Y was the default. Not that Gentoo does or doesn't do it. But it's guilty of the same thing OpenBSD does. The interface is very VERY simple. Just not intuitive.

For the general case, win2k and redhat have intuitive install interfaces. Skip the actual working or not working of some driver or some odd setup. It's the clicking of the buttons and the finishing of the process. You can't very well understand what you are doing if the interface is unusable.

Look at DOS. DOS was a very simple interface except for one facet: it used drive letters. Other than that small hurdle, you were fine. That's the problem gentoo has as well. /dev/sda, to someone who doesn't understand that / is a path separator, that /dev is where your devices are, and that sda is your SCSI disk (I'm not a linux dude, I might be wrong), is a small hurdle as well. But for Gentoo, it's not the only small hurdle. Things like emerge, and what to do if a package doesn't compile properly, are others. I've had it happen. Just re-emerge.

Linux and *BSD are great under the hood. Quite stable. I'm not a windows user other than at work, and I'm not fond of it. But if your interfaces suck, how can you get anything done?

Re:Gentoo (3, Informative)

Tony Hoyle (11698) | more than 11 years ago | (#5874146)

Firstly, when gentoo boots you have to wing it to work out how to get online to read the damned thing. Then it tells you to set your USE variables, with *no* documentation about what any of them do (not that it matters; half of the packages ignore them anyway). I also found several factual errors in it (for example, stating that files are on the CD that aren't there).

emerge doesn't pick the latest versions of stuff either... you end up installing from source anyway. E.g. I need the kerberos-enabled ssh to work with my network. I had krb5 in my USE flags but it didn't build with kerberos. It also built an out-of-date version. I had to manually go into the package directory and force it to build the latest version (which emerge insisted didn't exist)... which still didn't build with kerberos, so I gave up on it and ftp'd a prebuilt one from a debian machine.

Also the dependencies suck rocks. I wanted to build a minimal setup and get it working, so I decided to install links. Bad move. It pulled in svgalib (??), most of X and about a million fonts - for a *text mode* browser.

12 hours is also a bit optimistic - On a dual processor machine I had it building for 3 days.. and at the end half the stuff didn't work anyway. Luckily I can get a debian install on in 20 minutes with a following wind, so I got my machine back without much hassle.

Re:Gentoo (1)

brad-x (566807) | more than 11 years ago | (#5874185)


Re:Gentoo (0)

Anonymous Coward | more than 11 years ago | (#5874296)

so this is the legendary Gentoo community's helpful response? You did nothing to refute this person's problems.

Re:Gentoo (1)

Ed Avis (5917) | more than 11 years ago | (#5874051)

Emerge is nothing special. 'rpm --rebuild whatever-1.2.3.src.rpm', come back in a few minutes and *poof* a freshly built package.

Although I will admit, you need to have the BuildRequires packages installed - rpm tells you if they're not, but won't download and install them automatically... some tool like urpmi or apt-rpm would be needed for that part.

But some of the problems another person mentioned with emerge can sometimes apply to rpm --rebuild too. That is, a package doesn't state its build dependencies fully, so you try to build it and it craps out for lack of some header file. The package's author should have specified the library used with a BuildRequires: line in the spec file; then this could be checked and reported in a friendly way before the build starts.

Re:Gentoo (0)

Anonymous Coward | more than 11 years ago | (#5874145)

Yes it does: apt-get install.

And if you think that you're getting a really optimized system because you're compiling everything, allow me to disabuse you of that notion. GCC has not come very far in this arena, and an executable for a given architecture will be the same no matter what machine it's compiled on. You're just doing extra work because you like to be 31337.

Congratulations. You wait 12 hours for something that installs in 10 minutes for me, and we both operate at the same speed.

Re:Gentoo (1)

subzerohen (664161) | more than 11 years ago | (#5874278)

Ah, so you are running XFree 4.3.0 and a vanilla 2.4.20 kernel and ALSA 0.9.0rc6?

I could care less about the optimizations. That is not why I use Gentoo.

I don't know how good apt-get is, since I have never actually managed to install Debian ("Sorry, Debian is not perfect"). I'll take bash over dselect any day.

Fallback (4, Insightful)

Anonymous Coward | more than 11 years ago | (#5873931)

Place user applications in their own directories

This single rule alone would eliminate most of the problems. It enables fallback to manual package management, it resolves library conflicts, it avoids stale files after uninstallation and it prevents damaging the system which can be caused by overwriting files during installation and subsequently removing files during uninstallation.
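The uninstall-by-deletion property the parent describes can be sketched with dummy files; the app name fooedit and its layout are made up for illustration:

```shell
# Sketch of the per-application-directory rule: when everything an app
# installs lives under one prefix, uninstalling is a single rm. The
# "fooedit" app and its files are invented for illustration.
set -e
root=$(mktemp -d)
mkdir -p "$root/apps/fooedit/bin" "$root/apps/fooedit/lib"
touch "$root/apps/fooedit/bin/fooedit" \
      "$root/apps/fooedit/lib/"

# Uninstall: no package database to consult, no shared files at risk,
# no stale files left behind:
rm -rf "$root/apps/fooedit"
ls "$root/apps"    # prints nothing -- the application is gone
```

Library conflicts disappear for the same reason: each app's directory holds exactly the library versions that app was shipped with.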

Re:Fallback (0)

Anonymous Coward | more than 11 years ago | (#5874110)

You mean, do things the Apple way.

Re:Fallback (0)

Anonymous Coward | more than 11 years ago | (#5874288)

Yes, that's what Apple does, plus they hide the directory-contents from the GUI users: The directory IS the application. But I don't care if this is what Apple does. It's a good idea and it would still be a good idea if Apple were not doing it this way.

software reliability == complex install process??? (1, Funny)

(562495) | more than 11 years ago | (#5873933)

I have noticed that installation complexity is directly proportional to the reliability of the software.

If a piece of software is extremely complex to install, one can safely assume it is reliable :)

If a piece of software is easy to install, it is not reliable, e.g. MS products.

But seriously, I don't think applications are complex to install; it's just that a learning curve is involved in doing anything.

Re:software reliability == complex install process (1)

(259996) | more than 11 years ago | (#5873983)

I agree with this logic completely. The fundamental problem with most things (i.e., walking and chewing gum at the same time, taking out the trash, masturbating) is that they're so easy to do that most any old jomoke can do them without any "learning curve" being needed. The rest of the world would be a much better place if it aspired to the standards of the free software community: make things as difficult as possible for the average person to learn. And if they can't ever manage to get it figured out, well, (heheheh) too bad for them.

Re:software reliability == complex install process (1)

MikeFM (12491) | more than 11 years ago | (#5874005)

I'd agree that installing Linux software isn't typically that hard once you've done it a while. Red Carpet makes updating/installing RPM files really easy. Apt-get makes the same process almost as easy on Debian-based systems. Anything not available as a package, you just download and compile (which has gotten much easier in recent years).

What needs to be made easier is making good third-party packages. It needs to be as easy as making a tarball or using WinZip. Obviously, the distros can't keep up with providing packages for every program that exists. They need to make it so your average sysadmin or developer can package the software he is trying to compile and install so that it works well across all his systems. Sure, they can learn to do it the hard way, but obviously most of them are busy. The low number of third-party packages is a good example of the need for such tools.

As for every dick distro having trouble running standard packages, it's been my experience that they cause these problems by changing things pointlessly. Good packages play nice in any sane distro, and distros should be smart enough to keep themselves compatible with those packages. Debian-based distros should maintain compatibility with Debian, RedHat-based distros with RedHat, etc. Pretty simple.

Yet Another Reason (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#5873937)

...to use Linux.

The Linux kiddies whine about the "fragmentation" of commercial UNIX, yet it is happening in Linux. What gives?

Oh, being hypocrites again. My bad, I should have known you were set in your ways and unwilling to change.

You will break like the mighty oak since you can't bend like the reeds...

having read the story... (-1)

2 Dollar Sand Niggah (667666) | more than 11 years ago | (#5873941)

this stallman guy sounds like he is real cocksucker - like he is hanging on the edge of the conversation to get his GNU term in there, who gives a fuck what its called! He seems to be really depressed that no one includes the GNU and Linux words together.

General bad attitude towards anything easy (5, Insightful)

Microlith (54737) | more than 11 years ago | (#5873952)

The first obstacle to overcome is the bad attitude many linux users have that if something is easy to install, or easy to use, it is therefore bad.

As I see it, many would like to keep the learning curve very, very steep and high to maintain their exclusivity and "leetness" if you will.

For instance, the post above mine displays the ignorant attitude that "easy to install" by definition equals "unstable software" and has only a jab at MS to cite as a reference.

That's truly sad (though that may just be a symptom of being a slashdot reader.)

As I see it, not everyone finds:

./configure
make install

to be intuitive, much less easy, never mind what happens if you get compiler errors, or your build environment isn't the one the package wants *cough*mplayer*cough*, or whether you even have said development environment.

Nor does it mean the software is any more stable. Could be just as shitty. _That_ is a matter of the developer of the program, not the install process.

Re:General bad attitude towards anything easy (1)

Ed Avis (5917) | more than 11 years ago | (#5874035)

'./configure && make install' is not an installation process. It's the first stage in building some sources and making them into a package such as an RPM, Debian package or Slackware .tgz. Then you use your system's package manager to install, and it tracks where the files go so you can easily uninstall later.

People who want to change the build process to make 'installation' easier are barking up the wrong tree. Building the software from source is something that the packager should do, or at least the package manager (rpm, dpkg, Gentoo's emerge) should encapsulate that step in a single 'build' command. Installation means installing the resulting package on your system, keeping track of its dependencies automatically, adding the application to system menus and so on.

True, someone has to do the initial job of making a source package, recipe or spec file. But that is only once per distro or once between fairly similar distros.

Building your own packages is not always the way (2, Informative)

Skapare (16644) | more than 11 years ago | (#5874323)

Basically your suggestion amounts to building a binary package from a source package as a stage to having it actually installed. While that is something I actually do (using Slackware package.tgz format), and even recommend it to many people, it's not necessarily suitable for everyone or every purpose. I still run an experimental machine where everything I install beyond the distribution is installed from source. That's good for quickly checking out some new package to see if it really does what the blurbs imply (quite often this is not the case).

Building your own binary packages in whatever is your preferred package format is definitely a plus when managing lots of computers. And this way you have the plus that the MD5 checksum of all the binary files will be the same across multiple computers, making it easy to check for trojans.

As for making the recipe or spec file, I've actually figured out a way to do that when I build source packages into binary packages. It's slow but it works. I first construct a subdirectory consisting of a copy (never a hard link or bind mount) of the system root tree (or just enough to accomplish the building). Then I scan that directory, modifying every file object's timestamp to some weird date way in the past. For symlinks, which cannot have their date changed, I just note the current time, as all new symlinks will have a timestamp after this. Then the installation is executed under chroot and thus will be installed within the subdirectory (as long as the package author didn't slip in some program to crack out of the chroot, which can be done easily). A rescan compares everything against the previous scan. Every file and every symlink changed or created will be detected. It's not perfect in theory (if the source package elects to not install something because it already exists in your root template, then you'll miss it), but so far I've been totally successful with it.
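
The core of that timestamp-snapshot trick can be sketched as follows (directory and file names are made up, and the chrooted "make install" is stood in for by a plain file write):

```shell
root=buildroot
mkdir -p "$root/usr/bin"

# 1. Push every existing non-symlink object's mtime into the past.
find "$root" ! -type l -exec touch -t 200001010000 {} +

# 2. Note "now" with a stamp file; new symlinks will be newer than this.
touch stamp
sleep 1

# ... the package's install would run here, under chroot "$root" ...
printf 'demo\n' > "$root/usr/bin/newfile"

# 3. Anything newer than the stamp was created or changed by the install.
find "$root" -newer stamp
```

The listing also includes directories whose mtimes changed, which is exactly the "rescan" information a recipe file needs.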

Re:General bad attitude towards anything easy (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5874125)

Excuse me, but screw Linux egotistical asshats.

If you want to COMPETE with other platforms you NEED SIMPLICITY. If you disagree, go back to your server room and never say Linux is good on the desktop, because it's pure bullshit.

LeetLinux distro (1)

Skapare (16644) | more than 11 years ago | (#5874259)

If a bunch of Linux geeks want to have a hard-to-install Linux system in order to raise the leetness level, they can always put together their own "LeetLinux" distribution. We can't (and shouldn't) stop them. There shouldn't be a requirement that all distributions be "easy". This even applies to the BSDs. I personally find the command-line install of OpenBSD more flexible (and even easier anyway) than the menu-driven install of FreeBSD. But as I use Linux mostly, my preferred leetness distro is Slackware. Many will also find that Gentoo and Linux From Scratch suit their desires.

No, please (5, Insightful)

Yag (537766) | more than 11 years ago | (#5873963)

That's the reason Windows servers are more vulnerable to attacks: they give you the idea that it's easy to maintain them. It's the same thing as saying that you don't need a pilot on an airplane (and that you can put anyone there) if you make a good enough autopilot. We need more knowledge in system administration, not more automation.

Re:No, please (0)

Anonymous Coward | more than 11 years ago | (#5874038)

The problem is twofold: First, many installation procedures are unnecessarily complicated. Even if you know exactly what you're doing as an admin, having an application scattered over the whole filesystem is an unnecessary burden if only one user or the system is going to use it. Second, non-admins are using computers too, and even though you can't hide all complexity from the user, the installation procedure should include enough information and guidance, so that an interested user does not wreck his system. And still, many applications are not designed to reduce installation complexity as far as possible. The job isn't done when the program works. An installation procedure without unnecessary complexity is part of usability and must not be omitted.

Re:No, please (2, Insightful)

bogie (31020) | more than 11 years ago | (#5874107)

So you would deny the use of free software only to those who are experts with their OS?

I say this as a longtime Linux user and booster: if installing software on Windows was one-tenth as hard as it often is on Linux, then everyone would be using Macs.

Ease of use really should be the ultimate goal with all appliances and software. Would it really be some benefit if cars were twice as difficult to use?

To take your example, Windows servers are not more vulnerable because they are easier to use; they are/were more vulnerable because MS shipped its OS with dumbass defaults. Make the same OS even easier to use and set up, but make more sane security choices, and it's even MORE secure while being easier to maintain.

Re:No, please (3, Insightful)

evilviper (135110) | more than 11 years ago | (#5874234)

I can't agree with that. There are lots of programs that could add one or two features, and simply no longer require so much human work... XF86 comes to mind immediately.

But, I have to say this article was so far off the mark that it's funny. `Let's take all the ideas from Windows of what an installation package should be, and apply them to Unix.' No, I think not.

I dare say the biggest problem is that everyone is going the wrong direction. RPM is the standard, yet it sucks. Binary packages separate the `devel' portions into another package, making the system fail miserably if you ever need to compile software. It has piss-poor dependency management. Instead of checking if a library is installed, it checks if another RPM has been installed. If it has been installed, it assumes the library is there. If it isn't installed, it assumes the library isn't there... Crazy! To have an RPM depend on a library I've compiled, I have to install the RPM of the library, then compile and install my own over the top of the RPM's files. RPM is like the government system of package management. You have to do everything their way, or it won't let you do anything at all.

I liked Slackware's simplistic packages more than anything else. At least there I could just install the package, and it wouldn't give me shit about dependencies. If I didn't install the dependencies, I got an error message, but it wouldn't refuse to install or try to install something for me automatically. I can take care of the dependencies any way I want. RPMs are supposed to save you time, but instead, because of its dependency management, RPM used up far more of my time dealing with its quirks than it could have *possibly* saved me.

Another thing I find annoying is that there is only one version available. You can only get a package compiled without support for XYZ... Well, that's fine if I don't have XYZ, but what if I do? I like the ports system: although it does some things automatically that I don't like (I would rather it asked me), it doesn't step on your toes much at all, it gives you all the customizability you could want (and only if you want it), and it's much simpler and faster than untarring and configure/make-ing everything.

Linux politics... (1, Insightful)

Anonymous Coward | more than 11 years ago | (#5873970)

One of the drawbacks of being so open is politics. In open source, a lot of times a dictatorship is the most efficient way to get things done. Not everyone deserves a say... only the people who are actually doing the coding! (Great job Mozilla in trying to be less democratic. Some of the "bug battles" were getting out of hand.)

OpenStep / OS X frameworks (5, Informative)

pldms (136522) | more than 11 years ago | (#5873975)

Did some of the suggestions remind anyone of the OpenStep frameworks idea?

Frameworks (very roughly) are self contained libraries containing different versions, headers, and documentation. Java jar libraries are somewhat similar.

The problem is that using frameworks requires major changes to the tool chain - autoconf et al, cc, ld etc.

Apple shipped zlib as a framework in OS X 10.0 (IIRC) but getting unix apps to use it was very difficult. Apple now only seem to use frameworks for things above the unix layer.

I suspect there are lessons to be learned from this. As another poster said, evolution rather than revolution is more likely to succeed.

emerge maybe easy. (4, Funny)

Anonymous Coward | more than 11 years ago | (#5873978)

But installing gentoo is still hard.

Insert cd.
login in from the command line
net-setup ethx
tar xzjdocmnaf stage1.tar
mkdir /mnt/gentoo/
chroot /mnt/gentoo
(10 hours later)
emerge ufed
edit use flags
emerge system
emerge gentoo-sources
configure kernel, having to do lspci and googling obscure serial numbers to find out what modules to compile
install kernel
muck around with its non-standard bootloader
install cron and sysloggers
spend two days sorting out the kernel panics
wait all week for kde to emerge.
processor dies of over work
huge nasty electricity bill arrives after running emerge for over a week 24/7

in other words, no

Re:emerge maybe easy. (2, Insightful)

Blkdeath (530393) | more than 11 years ago | (#5874256)

  1. bootstrap doesn't take anywhere near 10 hours on a modern machine
  2. The stage-x tarballs come on the boot CD ISOs. Wget is not required.
  3. If you have to resort to lspci to compile a bootable kernel, Gentoo is not for you (IOW you don't know your hardware well enough). BTW - You could grep the pci.ids datafile that comes with the kernel rather than Googling "obscure" (International standard, unique) PCI IDs.
  4. GRUB is a standard boot loader now. If you don't like it, emerge LILO.
  5. KDE takes nowhere near a week to compile on a modern machine. (If you're that impatient, emerge any of a dozen other window managers /desktop environments)
  6. Processor dies? High electricity bill? What are you, an end-user who boots Windows just long enough to leech a few MP3s and chat on MSN?
  7. Most importantly - Gentoo is not designed to be a point-click-forget install. If you're new to Linux or you're that impatient, install RedHat, SuSE, Mandrake, or any other glossy storebought distribution that strikes your fancy.

The trouble with Linux isn't the installation. That procedure should remain somewhat esoteric (else we find ourselves plagued by every Windows user out there who's ever run regedit and thinks he's a sysadmin). The trouble is in package installation, upkeep, and maintenance. With Gentoo, keeping your system up to date with security/critical updates and new features is a breeze. It's so simple and automated, you could set it up as a cron job if it struck your fancy.

"Insightful" my ass.
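
The cron-job idea mentioned above would look something like this (the emerge flags and log path are assumptions, not a recommendation):

```
# /etc/crontab entry: sync the portage tree and update world nightly at 03:30
30 3 * * * root  emerge --sync && emerge --update --deep world >> /var/log/auto-update.log 2>&1
```

Running unattended updates from cron is exactly the kind of automation other posters warn about, so a cautious admin might stop at `emerge --sync` plus a report, and run the actual update by hand.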

why? (2, Informative)

SHEENmaster (581283) | more than 11 years ago | (#5873982)

Debian's system, or possibly something like Gentoo's, is preferable to any "easy" installation process.

apt-get install foo  # installs foo and any prerequisites.

Apt-get can also download and build the source of the package if needed. The biggest advantage of this is that apt-get update && apt-get upgrade will upgrade every single installed package to the latest version. I can get binaries for all the architectures I run (mostly PPC and x86).

On my laptop, Blackdown and NetBeans(unused at the moment) are the only two programs that I had to install manually. Those who need a pretty frontend can use gnome-apt or the like.

It's hard enough making all the packages of one distro play nice with each other; imagine the headache of attempting it with multiple ones!

Shipping software on disc without source is such a headache. The program will only work on platforms it was built for, it will be built against archaic libraries, and it can't be fixed by the purchaser.

As for your "universal installer", it should work as follows.

tar -xzf foo.tgz
cd foo
./configure && make && sudo make install

Any idiot can manage that.

Re:why? (0)

Anonymous Coward | more than 11 years ago | (#5873986)

Any idiot who can't manage that will settle for a Mac or a Dell.

Linux Standard Base (0)

Anonymous Coward | more than 11 years ago | (#5873995)

Not that I've read the article, but isn't this exactly what the Linux Standard Base is for? Any distribution that provides this will allow conformant packages to be installed without hassle. While LSB is RPM-based, any distribution can support it. Debian does so via the lsb packages.

Here's what we did... (4, Interesting)

martin-k (99343) | more than 11 years ago | (#5874004)

We just released our first Linux app, TextMaker, our non-bloated word processor.

Installation goes like this:

1. tar xzvf textmaker.tgz
2. There is no 2.

After that, you simply start TextMaker and it asks you where you want to place your documents and templates. No muss, no fuss, no external dependencies except for X11 and glibc. People like it that way and we intend to keep it this way with our spreadsheet and database.

Martin Kotulla
SoftMaker Software GmbH

Re:Here's what we did... (1)

Ed Avis (5917) | more than 11 years ago | (#5874082)

But I'm sure you realize that if every application used its own installation procedure that requires the command line, life would be awkward. The point of packaging systems like RPM is that they _standardize_ things. Need to push an RPM to multiple workstations? Use Red Carpet, autorpm or similar tool to automatically install new packages at regular intervals. Want to keep track of exactly what is installed? Use rpm --query. Want to smoothly upgrade between versions? No problem, rpm --upgrade keeps track of what files need to be replaced and will back out cleanly if the upgrade cannot be performed for whatever reason.

Distributing software only as tarballs goes back to the bad old days of MS Windows or proprietary Unix, when every application had its own procedure for installing and uninstalling. Still, perhaps some enterprising person will take the tarball you provide and package it up as an rpm or dpkg so that it can be installed in the same way as other software. And they do say that it's good for packager and developer to be different people, since the packager knows about the idiosyncrasies of the particular OS while the developer should not need to.

Re:Here's what we did... (2, Insightful)

mattdm (1931) | more than 11 years ago | (#5874097)

That's perfectly fine for a single proprietary app, but is in no way a scalable solution for a whole distro.

Re:Here's what we did... (0)

Anonymous Coward | more than 11 years ago | (#5874148)

>> 2. There is no 2.

Very funny! (Really, no sarcasm here)

Change your name to Marktin... ;-)

Re:Here's what we did... (1)

chneukirchen (640531) | more than 11 years ago | (#5874307)

... and how can I uninstall it?

By running rm -f `tar tzf textmaker.tgz`?

Get a life, and make .debs!

Typical windows installation (1)

pommiekiwifruit (570416) | more than 11 years ago | (#5874319)

For running a game with statically linked libraries:

1. Insert CD

2. There is no 2.

Executive summary: (3, Insightful)

I Am The Owl (531076) | more than 11 years ago | (#5874010)

Use Debian and apt-get. No, seriously, could it be much easier?

Re:Executive summary: (1)

Haeleth (414428) | more than 11 years ago | (#5874053)

Well, assuming that you're experienced enough with Linux to be able to get Debian installed and working in the first place (and yes, I know it has a menu-based installer - sorry, it still isn't easy) - assuming that, then yes, apt-get is easy.

Except that not everyone has a direct connection to the internet, you know. Some of us need to download software on other machines, even on machines running different OSes. So for those there's still all the hassle of repeatedly downloading new packages until everything has all its dependencies.

Oh, and not every application is packaged, and of those that are, a lot of packages are outdated. So if you want to do anything out of the ordinary, you're back to compiling your own, or begging someone to package it.

Tired old arguments, I know, but every time installation issues come up on /. we get a dozen Debian advocates shouting "apt-get! apt-get!". Could it be much easier? parent asks. Yes, it could. For most Windows software, you download a single executable file - from the site of your choice, on any machine you like - run it, and your software is installed and works. Now, that's easy.

Re:Executive summary: (1)

benad (308052) | more than 11 years ago | (#5874076)

Yes. Fink, built on top of apt-get.

But that's for UNIX stuff on Mac OS X. We usually install with "drag & drop", as applications are self-contained directories that are flagged in the file system to look like files.

- Benad

Re:Executive summary: (0)

Anonymous Coward | more than 11 years ago | (#5874170)

I'd like to. But Debian didn't recognize my NVidia GeForce 4... and it's not an X problem, the card is not recognized at the kernel (PCI) level... unknown device etc.

Mandrake, Red Hat, even ELX recognized it.

Any ideas? (besides "use Debian")

And before you flame me, think for a second: why have I tried to install Debian? That's why I want to be able to use it! Duh!

Re:Executive summary: (1)

mickwd (196449) | more than 11 years ago | (#5874322)

Or Mandrake and urpmi.

How about this (3, Insightful)

Fluffy the Cat (29157) | more than 11 years ago | (#5874028)

The complaints are, almost entirely, about libraries. But there's already a robust mechanism for determining that a library dependency is satisfied - the SONAME defines its binary compatibility. So if stuff is breaking, it's because library authors are changing binary compatibility without changing the SONAME. How about we just get library authors to stop breaking stuff?

No no no! (4, Interesting)

FooBarWidget (556006) | more than 11 years ago | (#5874030)

First of all, RAM and disk space are NOT cheap. I spent 60 euros for 256 MB RAM; that is not cheap (it's more than 120 Dutch guilders for goodness's sake!). A 60 GB harddisk still costs more than 200 euros. Again: not cheap. Until I can buy 256 MB RAM for 10 euros or less, and 60 GB harddisks for less than 90 euros, I call them everything but cheap.

What's even less cheap is bandwidth. Not everybody has broadband. Heck, many people can't get broadband. I have many friends who are still using 56k. It's just wrong to alienate them under the philosophy "bandwidth is cheap".
And just look at how expensive broadband is (at least here): 1 mbit downstream and 128 kbit upstream (cable), for 52 euros per month (more than 110 Dutch guilders!), that's just insane. And I even have a data limit.

There is no excuse for wasting resources. Resources are NOT cheap despite what everybody claims.

Re:No no no! (1)

RealityProphet (625675) | more than 11 years ago | (#5874040)

Well, when you spend 60 euros PER HOUR on system administrators, spending 60 euros on a memory chip seems cheap to me!

Re:No no no! (1)

FooBarWidget (556006) | more than 11 years ago | (#5874044)

I'm not a corporation. I don't have a paid sysadmin.
Corporations that *do* have a paid sysadmin should use the RPMs or whatever is provided by their vendor. RedHat's up2date resolves dependencies automatically, provided that you're using RPMs made by RedHat. Of course, all you *should* use is RPMs made by RedHat anyway if you're a corporation, because those packages are supported by RedHat.

Re:No no no! (0)

Anonymous Coward | more than 11 years ago | (#5874058)

Where can I buy gold-plated 60GB harddisks and DIMMs?

Re:No no no! (2, Interesting)

Zakabog (603757) | more than 11 years ago | (#5874085)

Wow you're getting ripped off. Where are you buying this stuff?

256 megs of good ram is 35 euros or less, or 25 euros for some cheap PC100 ram. If you can't call that cheap, let me remind you that years ago it was $70 (62 euros) for 8 megs of ram. And a 200 gig Western Digital drive is less than 200 euros on New Egg, which is a very good computer hardware site. 60 Gigs is like 50 euros. I'm sorry you have to live in a country where hardware is so expensive, but where I live it's incredibly cheap.

Re:No no no! (1)

FooBarWidget (556006) | more than 11 years ago | (#5874111)

> Wow you're getting ripped off. Where are you buying this stuff?

In the store. Heck, I checked out several stores, and even advertisements in computer magazines! The DIMM modules I bought at Vobis were actually the cheapest modules I could find.

This is The Netherlands. I don't know where you live.

Re:No no no! (2, Funny)

Ed Avis (5917) | more than 11 years ago | (#5874099)

More than 120 Dutch guilders? Wow! It's a good job you didn't express the amount only in those obscure euros.

Re:No no no! (1)

pommiekiwifruit (570416) | more than 11 years ago | (#5874330)

1 euro ~ 1 US dollar. This is intentional. Whether it will stay that way is another matter.

Re:No no no! (0)

Anonymous Coward | more than 11 years ago | (#5874102)

Yes, RAM, processing power, and hard drive space are cheap. You got ripped off horribly if you paid that much for those components.

Besides, you missed the point. When he says RAM is cheap, he is saying that it is okay for an application to waste a few hundred kilobytes if it is necessary for it to work properly. RAM, processing power, and hard drive space all cost next to nothing in these tiny amounts, though the components themselves may be more expensive.

Re:No no no! (1)

FooBarWidget (556006) | more than 11 years ago | (#5874124)

"Yes, RAM, processing power, and hard drive space are cheap. You got ripped off horribly if you paid that much for those components."

No I didn't. I checked out several stores. I read lots of advertisements in computer magazines. Nowhere can I find DIMM modules that are compatible with my VIA motherboard and are cheaper.

Re:No no no! (1)

vrt3 (62368) | more than 11 years ago | (#5874211)

I don't know about the memory since I don't know exactly what kind you need, but your HD is extremely expensive. Checking the website of a Flemish computer shop, I can get an 80 GB HD for 95 euro. The most expensive HD in that particular shop is a Seagate Barracuda V 120 GB, which costs 164 euro.

Re:No no no! (1)

10Ghz (453478) | more than 11 years ago | (#5874220)

First of all, RAM and disk space are NOT cheap. I spent 60 euros for 256 MB RAM, that's is not cheap (it's more than 120 Dutch guilders for goodness's sake!). A 60 GB harddisk still costs more than 200 euros. Again: not cheap. Until I can buy 256 MB RAM for 10 euros or less, and 60 GB harddisks for less than 90 euros, I call them everything but cheap.

Whoa! That IS expensive! Let me quote some prices:

256MB DDR266 CL2: 44e (could be had for 34e if you want generic brand)

Western Digital Caviar SE 120GB EIDE 8MB cache: 175e

These are the prices in Finland. So I think it's fair to say that you are getting ripped off.

Re:No no no! (1)

FooBarWidget (556006) | more than 11 years ago | (#5874289)

Again, I'm not getting ripped off. I've checked out several stores as well as lots of computer advertisements. All the prices are more or less the same: too expensive.

Petreley should do it (2, Funny)

Anonymous Coward | more than 11 years ago | (#5874031)

He's got time, motivation, money and computer. Sounds like the right guy for the job!

It is not just the ease but the language... (5, Insightful)

terraformer (617565) | more than 11 years ago | (#5874043)

It is not just the ease of installation but also the language used during that installation that is foreign to many users. Having a nice point-and-click interface on Linux installs is a major leap forward, but these still reference things like firewalls, kernels, services, protocols, etc. Most people, when faced with new terms, become disoriented and their frustration level rises. These setup routines have to ask users what they are looking to accomplish with their brand spanking new Linux install.

  • Would you like to serve web pages? (Yes or No where the answer installs and configures Apache)
  • Would you like to share files with other users in your home/office/school? (Yes or No where the answer installs and configures Samba)
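
A question-driven front end like that is easy to prototype; this toy sketch just prints what it would do (the package names are assumptions):

```shell
# Ask a task-oriented question and map "yes" to a package install.
ask_and_install() {
  question=$1
  pkg=$2
  printf '%s [y/N] ' "$question"
  read answer
  case "$answer" in
    [Yy]*) echo "would install: $pkg" ;;
    *)     echo "skipped: $pkg" ;;
  esac
}

# Feed answers in for the demo; a real installer would read the terminal.
echo y | ask_and_install "Would you like to serve web pages?" apache2
echo n | ask_and_install "Would you like to share files?" samba
```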


Re:It is not just the ease but the language... (0)

Anonymous Coward | more than 11 years ago | (#5874108)

True, as long as the installation tells more advanced users what is actually going to be done.

Re:It is not just the ease but the language... (1)

groomed (202061) | more than 11 years ago | (#5874291)

The argument could be made that somebody who doesn't know what Apache is really shouldn't be serving webpages.

Apart from that, there are other problems with the dumbing-down approach; for instance, it promises things that it cannot make good on. When you instruct the computer to "share your files", but then you can't share files with the AppleTalk Macs, that's confusing. When you instruct the computer to "serve web pages" but instead your server gets hacked, that's a disaster.

Language need to be consistent, and terms need to be explained. But never dumbed down. People aren't idiots. They don't buy a "program to manipulate images". They buy "Photoshop".

Re:It is not just the ease but the language... (1)

TeknoHog (164938) | more than 11 years ago | (#5874321)

Users who need easy point n' click installations should not be installing servers.

Moreover, your scenario is the complete antithesis of choice, which is a major driving force for using Linux. People choose Linux or another OS for a variety of reasons, and they choose their applications accordingly. For example, when choosing Linux because it performs well on slower hardware, you'll also want to choose leaner applications. If we didn't have choice we could just as well consider this:

Would you like to use a computer? (Yes or No for installing the one and only OS)

I'm Glad It Isn't Easy (1, Funny)

fire-eyes (522894) | more than 11 years ago | (#5874064)

I'm glad it isn't easy. I like a challenge, even if it takes longer, even at work.

I'm glad it isn't brainless. I'm glad it's different across distros. Then I can pick and choose what I like.

The less each distro is like any other, the happier I am.

This is why I enjoy Linux.

Not too big an issue (1)

rsilvergun (571051) | more than 11 years ago | (#5874084)

I was just talking with my brother about this. My take is, since a Linux install comes with just about anything you need to use a computer (networking and communication software, office software, media players, etc.), it really doesn't matter if there's tons of old software you can install. Especially since you're probably not paying too much (if anything) for your licenses. Backwards compatibility was a lot more important when your word processor cost $500 and you don't want to buy the new version :).

Not that it wouldn't be nice to find a 5 year old config tool somebody wrote that'll make my job easier and be able to run it.

Apple has it right (4, Interesting)

wowbagger (69688) | more than 11 years ago | (#5874188)

From what I am given to understand of the way Mac OS 10.* handles such things, Apple got it closer to right.

As I see it, the following things need to happen to really make application installation be very clean under any Unix like operating system:
  1. All apps install in their own directory under /usr/[vendor name]/[app name] - the reason for including the vendor name is so that when two vendors release different apps with the same name (Phoenix comes to mind) you can still disambiguate it. Also allow apps to install into ~/apps/[vendor name]/[app name] to allow for non-root installation.
  2. Under an app's directory, create the following subdirs:
    • [arch]/bin - any binaries that are OS/CPU dependent.
    • bin - shell scripts to correctly pick the right [arch]/bin file.
    • man - man pages for the app
    • html - help files in HTML, suitable for browsing
    • [arch]/lib - any shared libraries specific to the app.
    • system - desktop icons and description files, preferably in a WM-agnostic format, MIME type files, magic files (for the file command), and a description of each program in the app, giving the type(s) of application for each binary (e.g. Application/Mapping; Application/Route Planning).

  3. Shells and WMs are extended to search under /usr/*/*/bin for programs, /usr/*/*/man for man pages, etc.
  4. Programs shall look for ~/.[vendor]/[appname] for their per-user storage area, and will create this as needed.
  5. The system must provide an API for asking if a given library/application/whatever is installed.
  6. The system must provide an API for installing a missing component - this API should be able to *somehow* locate an appropriate package. The requesting app will provide a list of acceptable items (e.g. need,,
  7. This is the biggest item, so I'm really going to stress it:

    Too damn many times I've tried to install FOO, only to be told by the packaging system "FOO needs BAR". But FOO doesn't *need* BAR, it just works "better" if BAR is present (e.g. the XFree packages from RedHat requiring kernel-drm to install, but working just fine (minus accelerated OpenGL) without it).

Were vendors to do this, then a program install could be handled by a simple shell script - untar to /tmp, run script to install needed pre-reqs, move files to final location.

The system could provide a means to access the HTML (a simple, stupid server bound to a local port, maybe?) so that you could browse all installed apps' help files online.

As a final fanciness, you could have an automatic process to symlink apps into a /usr/apps/[application class] directory, so that if you wanted to find all word processing apps you could
ls /usr/apps/WordProcessors
and see them.
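
As a concrete (and entirely hypothetical) sketch of that layout, with one architecture and a wrapper script doing the dispatch:

```shell
# Create the proposed per-vendor tree (all names are invented).
base=usr/ExampleVendor/ExampleApp
mkdir -p "$base/x86/bin" "$base/bin" "$base/man" "$base/html" "$base/x86/lib" "$base/system"

# The architecture-specific binary (a stub here).
printf '#!/bin/sh\necho hello from the x86 build\n' > "$base/x86/bin/exampleapp"
chmod +x "$base/x86/bin/exampleapp"

# bin/ holds a wrapper that picks the right [arch]/bin file.
cat > "$base/bin/exampleapp" <<'EOF'
#!/bin/sh
arch=x86   # a real wrapper would map `uname -m` to an installed arch dir
exec "$(dirname "$0")/../$arch/bin/exampleapp" "$@"
EOF
chmod +x "$base/bin/exampleapp"

# Shells would search usr/*/*/bin; here we just call the wrapper directly.
"$base/bin/exampleapp"
```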

Installation is only the first step (1)

HidingMyName (669183) | more than 11 years ago | (#5874197)

I don't install that often once I pick a distro/version combination that meets my needs. However, the real problem lies in keeping control of installed packages during upgrades and doing routine systems administration. Unfortunately, this is the place where most fracturing of Linux distributions has occurred. I really wish that there were some tool that the distributions would support and standardize on (e.g. linuxconf, although any good tool would do). We use RedHat 7.3 in my lab (mainly because of its popularity) but the admin tools leave a bit to be desired.

Instances don't really matter for static linking (3, Informative)

Skapare (16644) | more than 11 years ago | (#5874204)

Nicholas Petreley writes:

The following numbers are hypothetical and do not represent the true tradeoff, but they should serve well enough to make the point. If libthingy is 5K, and your application launches a maximum of 10 instances, all of which are statically linked with libthingy, you would only save about 45K by linking to libthingy dynamically. In normal environments, that is hardly worth the risk of having your application break because some other build or package overwrites the shared version of libthingy.

Linking libthingy statically into application foo does not preclude the sharing. Each of the instances of application foo will still share all the code of that executable. So if libthingy takes up 5K, and you launch 10 instances, that does not mean the other 9 will take up separate memory. Even statically linked, as long as the executable is in a shared linking format like ELF, which generally will be the case, each process VM will be mapped from the same file. So we're still looking at around 5K of real memory occupancy for even 1000 instances of application foo. The exact details will depend on how many pages get hit by the run-time linker when it has to make some address relocations. With static linking there is less of that, anyway. Of course, if libthingy has its own static buffer space that it modifies (bad programming practice in the best case, a disaster waiting to happen in multithreading), then the affected pages will be copied-on-write and no longer be shared (so don't do that when developing any library code).

Where a shared library gives an advantage is when there are many different applications all using the same library. So the "shared" part of "shared library" means sharing between completely different executable files. Sharing between multiple instances of the same executable file is already done by the virtual memory system (less any CoW).

The author's next point about sharing between other applications is where the size of libthingy becomes relevant. His point being that if libthingy is only 5K, you're only saving 45K by making it a shared (between different executables) library. So that's 45K more disk space used up and 45K more RAM used up when loading those 10 different applications in memory. The idea is the hassle savings trumps the disk and memory savings. The situation favors the author's position to use static linking for smaller less universal libraries even more than he realized (or at least wrote about).

For a desktop computer, you're going to see more applications, and fewer instances of each, loaded. So here, the issue really is sharing between applications. But the point remains valid regarding small specialty libraries that get used by only a few (such as 10) applications. However, on a server computer, there may well be hundreds of instances of the same application, and perhaps very few applications. It might be a mail server running 1000 instances of the SMTP daemon trying to sift through a spam attack. Even if the SMTP code is built statically, those 1000 instances still share unmodified memory mapped from the executable file.

Re:Instances don't really matter for static linkin (1)

FooBarWidget (556006) | more than 11 years ago | (#5874320)

Yes, if you start the same app twice, the instances will share memory. But that isn't the problem.
If your entire GNOME desktop is statically linked to GTK+, and you launch panel, nautilus and metacity, then you're loading 3 separate copies of GTK+ into memory that don't share any memory at all!

This is the price to pay.. (3, Interesting)

DuSTman31 (578936) | more than 11 years ago | (#5874209)

One of the greatest strengths of the UNIX platform is its diversity..

Package installation is a simple prospect on the Windows platform for the simple reason that the platform has little diversity.

Windows supports a very limited set of processors.. So there's one factor that windows packaging doesn't have to worry about.

Windows doesn't generally provide separately compiled binaries for slightly different processors ("fat binaries" are used instead, wasting space).. So the packaging system doesn't have to worry about that. On Linux, on the other hand, you can get separate packages for an Athlon Thunderbird version and an original Athlon version.

On an MS system, the installers contain all the libraries the package needs that have the potential to not be on the system already. This could make the packages rather large, but ensures the user doesn't have to deal with dependencies. Personally, I'd rather deal with dependencies myself than super-size every installer that relies on a shared object..

Furthermore, on Windows there aren't several different distributions to worry about, so the installers don't have to deal with that either.

All of these points confer more flexibility on the UNIX system, but have the inevitable consequence that package management can get to be rather a complex art. We could simplify package management a great deal, but it'd mean giving up the above advantages.
