
AutoPackaging for Linux

Zonk posted more than 9 years ago | from the automagically dept.

Announcements 623

Isak Savo writes "The next-generation packaging format for Linux has reached 1.0. With Autopackage officially declared stable, there is now an easy way for developers to create up-to-date, easily installable packages. There are lots of screenshots available, including a Flash demo of a package installation."

Mirror (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12061080)


MIRROR IS HERE (2, Insightful)

IamTheRealMike (537420) | more than 9 years ago | (#12061207)

Re:MIRROR IS HERE (2, Informative)

sirReal.83. (671912) | more than 9 years ago | (#12061226)

That's not a complete mirror :(

This is missing. []

nextgen already here: emerge (-1, Troll)

ZeekWatson (188017) | more than 9 years ago | (#12061081)

The nextgen packaging system is already in Gentoo.

emerge -- use it, live it, love it!

Re:nextgen already here: emerge (0)

Anonymous Coward | more than 9 years ago | (#12061127)

Why is this "informative"? Next time I see a gentoo user in real life I'm going to trip him.

Re:nextgen already here: emerge (-1, Troll)

frankm_slashdot (614772) | more than 9 years ago | (#12061130)

i 110% agree. i have a gentoo server colocated in cali (im in south jersey) and i only have to say one thing about it --- anyone can promise you complete control over your operating system... but only gentoo can deliver on it and make it easy at the same time.

Re:nextgen already here: emerge (4, Interesting)

ZephyrXero (750822) | more than 9 years ago | (#12061220)

I'm a gentoo user as well, but I'm very excited by Autopackage. The whole reason many people use gentoo is b/c it is so easy to install software. The main problem with that system is that someone has to add it to the portage tree, and if it's not popular enough, it won't get in... with Autopackage you put the installation in the developers hands and you no longer have to rely on your distro to do it for you. I say use Portage/Apt/Whatever for your system/low-level programs and use autopackage for your higher level ones (Firefox, Gaim, GIMP, etc)...Autopackage could finally be the answer many Windows users have been waiting on to make the switch!

Re:nextgen already here: emerge (1)

St. Arbirix (218306) | more than 9 years ago | (#12061296)

Plenty of people publish their own ebuilds separately from the portage tree. That's what PORTDIR_OVERLAY is for.

It's similar to how many Fedora users get mp3 support. They use an unofficial update site.
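For anyone unfamiliar with it, an overlay is just a directory mirroring the portage tree's category/package layout. A minimal sketch (the paths and ebuild name here are made up for illustration):

```shell
# Create a local overlay directory with the tree's category/package layout.
OVERLAY="${TMPDIR:-/tmp}/local-overlay"
mkdir -p "$OVERLAY/app-misc/myprog"

# Drop your ebuild in place (an empty placeholder here; real ebuilds
# obviously need content):
touch "$OVERLAY/app-misc/myprog/myprog-1.0.ebuild"

# Then point Portage at it in /etc/make.conf:
#   PORTDIR_OVERLAY="/path/to/local-overlay"
# and install as usual: emerge myprog
```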

Re:nextgen already here: emerge (1, Interesting)

karmaflux (148909) | more than 9 years ago | (#12061131)

Not everyone wants to tie themselves to a huge complex packaging system. Some of us are perfectly happy with checkinstall. Thanks anyway.

Re:nextgen already here: emerge (0)

Anonymous Coward | more than 9 years ago | (#12061196)

I tried checkinstall back with Red Hat 7.1 when I was desperately trying to make RPM distributions sane to administer. I gave up eventually.

Let us know when you're ready to join us in 2005.

Re:nextgen already here: emerge (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12061237)

hahahahah you use red hat no wonder

try using a distribution that doesn't suck

Re:nextgen already here: emerge (1, Insightful)

Abcd1234 (188840) | more than 9 years ago | (#12061135)

Riiight, Gentoo's package system, which is basically a re-implementation of the BSD ports system, is "nextgen"... oh, you fanboys, you're so cuuute!

Re:nextgen already here: emerge (5, Insightful)

ArbitraryConstant (763964) | more than 9 years ago | (#12061141)

Developers want to be able to release packages that work on all the Linuxes, not just Gentoo. Not everyone wants to make the fast updates/reliability tradeoff necessary to use Gentoo.

Re:nextgen already here: emerge (1)

ZephyrXero (750822) | more than 9 years ago | (#12061240)

I think that by giving developers the freedom to make one installation package for all distros, Autopackage is going to make Linux really take off this year. The huge number of incompatible distros scares off many companies and developers, so maybe this will be the answer we've been looking for...

Re:nextgen already here: emerge (1)

St. Arbirix (218306) | more than 9 years ago | (#12061262)

That still doesn't discount emerge with Gentoo's portage. For Gentoo specifically, emerge will retrieve the files it needs (often as source) and then run the various commands needed to get the program correctly inserted into your distribution (often this requires compiling).

There's nothing stopping anyone from using portage to install an entire system based on binaries. Besides, if a developer is releasing a package that truly works on all Linuxes then they're packaging compiled binaries for every architecture, using some sort of interpretation, or compiling upon installation. Portage can already retrieve the correct binary for your architecture based solely upon the ebuild file. Unless the AutoPackaging system requires you to download compiles for all architectures at once, they're going to be doing what portage already does with less dependence upon user compilation.

Re:nextgen already here: emerge (0, Offtopic)

jay-be-em (664602) | more than 9 years ago | (#12061149)

Yeah, because everyone wants to wait 8 hours for gnome to compile.

Give me a fucking break.

Re:nextgen already here: emerge (1)

Screaming Lunatic (526975) | more than 9 years ago | (#12061194)

Yeah, because everyone wants to wait 8 hours for gnome to compile.

Portage can work with binaries. The package to be installed can be source, binary, or your grand-daddy's p0rn. Portage is a bunch of python scripts that make dependency checking and package distribution easy. Your package being source is not a strict precondition.
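For the record, the binary side of Portage looks roughly like this (flags per `man emerge`; not runnable without a Gentoo box):

```shell
# Build a binary package (.tbz2) as a side effect of a normal install:
FEATURES="buildpkg" emerge gnome-terminal

# Later, or on another machine sharing the same PKGDIR:
emerge --usepkg gnome-terminal      # -k: prefer a binary if one exists
emerge --usepkgonly gnome-terminal  # -K: fail rather than compile
```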


Re:nextgen already here: emerge (2, Insightful)

bman08 (239376) | more than 9 years ago | (#12061224)

Portage doesn't work as well on Redhat or Debian systems as it does on Gentoo. The beautiful magic of Autopackage, as I understand it, is that one package works for all distributions. The theory is that developers will then only have to release one autopackage instead of making ebuilds, debs, rpms and whatever other packages the seventeen thousand faces of Linux are asking for these days.

Re:nextgen already here: emerge (0)

Anonymous Coward | more than 9 years ago | (#12061213)

PKGDIR="/mnt/cdrom" emerge -vgk gnome

Next complaint, please.


Anonymous Coward | more than 9 years ago | (#12061171)

Official Gentoo-Linux-Zealot translator-o-matic

Gentoo Linux is an interesting new distribution with some great features. Unfortunately, it has attracted a large number of clueless wannabes who absolutely MUST advocate Gentoo at every opportunity. Let's look at the language of these zealots, and find out what it really means...

"Gentoo makes me so much more productive."
"Although I can't use the box at the moment because it's compiling something, as it will be for the next five days, it gives me more time to check out the latest USE flags and potentially unstable optimisation settings."

"Gentoo is more in the spirit of open source!"
"Apart from Hello World in Pascal at school, I've never written a single program in my life or contributed to an open source project, yet staring at endless streams of GCC output whizzing by somehow helps me contribute to international freedom."

"I use Gentoo because it's more like the BSDs."
"Last month I tried to install FreeBSD on a well-supported machine, but the text-based installer scared me off. I've never used a BSD, but the guys on Slashdot say that it's l33t, so surely I must be l33t too for using Gentoo."

"Heh, my system is soooo much faster after installing Gentoo."
"I've spent hours recompiling Fetchmail, X-Chat, gEdit and thousands of other programs which spend 99% of their time waiting for user input. Even though only the kernel and glibc make a significant difference with optimisations, and RPMs and .debs can be rebuilt with a handful of commands, my box MUST be faster. It's nothing to do with the fact that I've disabled all startup services and I'm running BlackBox instead of GNOME or KDE."

" Gentoo Linux workstation..."
" overclocked AMD eMachines box from PC World, and apart from the third-grade made-to-break components and dodgy fan..."

"You Red Hat guys must get sick of dependency hell..."
"I'm too stupid to understand that circular dependencies can be resolved by specifying BOTH .rpms together on the command line, and that problems hardly ever occur if one uses proper Red Hat packages instead of mixing SuSE, Mandrake and Joe's Linux packages together (which the system wasn't designed for)."
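Satire aside, the tip in that last "translation" is real: on RPM systems, mutually dependent packages install fine when handed to rpm in one transaction (the package names below are invented):

```shell
# Both packages go into a single rpm transaction, so the circular
# dependency between them never blocks the install:
rpm -Uvh foo-1.0-1.i386.rpm bar-2.3-1.i386.rpm
```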

"All the other distros are soooo out of date."
"Constantly upgrading to the latest bleeding-edge untested software makes me more productive. Never mind the extensive testing and patching that Debian and Red Hat perform on their packages; I've just emerged the latest GNOME beta snapshot and compiled with -09 -fomit-instructions, and it only crashes once every few hours."

"Let's face it, Gentoo is the future."
"OK, so no serious business is going to even consider Gentoo in the near future, and even with proper support and QA in place, it'll still eat up far too much of a company's valuable time. But this guy I met on #animepr0n is now using it, so it must be growing!"


Anonymous Coward | more than 9 years ago | (#12061205)


so true

It does not scale. (2, Insightful)

JPriest (547211) | more than 9 years ago | (#12061186)

The idea of storing all software on repositories does make dependencies easier to manage but could you imagine doing it that way in Windows? You have all the overhead of having to centrally locate ALL software on a mirror somewhere. Anytime you as a software developer want to release software, you have to try to get it pushed out to all the mirrors (which you have no control over) in order for people to access it. This idea is also not very friendly to closed source projects.

Re:It does not scale. (1)

ZephyrXero (750822) | more than 9 years ago | (#12061281)

It's not friendly to any project, closed or open...that's why Autopackage is such a great idea :)

Re:nextgen already here: emerge (5, Insightful)

ArbitraryConstant (763964) | more than 9 years ago | (#12061282)

Jesus Gentoo fanbois can be annoying. For some reason, unlike the users of every other distro, some Gentoo users think everyone would be happier with the decision they've made for themselves.

Some people like Gentoo, but some people have serious issues with it. emerge is a decent package manager, but it's attached to a distro that conservative users aren't going to touch. The more conservative distros have package managers that their users are already perfectly happy with, so it's unlikely to be used anywhere else.

Linux (-1, Troll)

PepeGSay (847429) | more than 9 years ago | (#12061086)

((Handshake)) Welcome to 1990.

Re:Linux (0)

Anonymous Coward | more than 9 years ago | (#12061210)

No kidding... Windows has had this as long as I can remember (since 3.1)...

That said, one of the major barriers to Linux on the desktop finally fell. Rather than having to understand packages, people can simply download a file and have everything automagically set up.

Fix that, and either get GNOME and KDE to work together nicely, or standardise around one, and Linux on the desktop will become truly viable...

Re:Linux (0, Troll)

Anonymous Coward | more than 9 years ago | (#12061214)

Yeah, I'll say the same thing about Microsoft Windows when it can actually use all of my system memory and not resort to some BIOS memory hole hack and preemptively multitask. I nearly laughed out loud when my officemate told me that it's normal to only have 3 GB of 4 available under Windows. Please.

Windows user gripes about linux: hard to use.
Unix gripes about Windows: performs poorly.

Re:Linux (-1)

Anonymous Coward | more than 9 years ago | (#12061252)

Uh... yeah...

I've dealt with hundreds of high-memory Windows computers, and never ONCE have I seen that happen.

Fucking troll.

Re:Linux (-1)

Anonymous Coward | more than 9 years ago | (#12061279)

Fine. Come to my office and fix mine (Windows XP pro) or tell me how, and I'll shut up happily.

In the meantime, fuck yourself.

Re:Linux (2, Interesting)

Doc Ruby (173196) | more than 9 years ago | (#12061270)

Which platform had packages installable from the Net, with dependencies and versioning, by clicking a single GUI button, in 1990? Or typing "installer install package"? One of the reasons I prefer Debian to Windows is precisely because of the package method of SW distribution. The closest MS has come is its WindowsUpdate abominations.

Re:Linux (1)

PepeGSay (847429) | more than 9 years ago | (#12061307)

The issue is that it is the first real solution that makes it "click" installable with Average Joe knowledge about the system. The fact that it also uses more modern delivery methods is certainly in its favor. Otherwise I would have said, "((Handshake)) Welcome to 1990. And congrats on only doing it at 1990 level even though it is 2005."

Making something "Internet aware" is just not an earth shattering big deal, it really never was. It is more important to talk about what you do with that internet awareness. Otherwise e-toys would have made billions.

Oops... (0, Offtopic)

whysanity (231556) | more than 9 years ago | (#12061091)

Error: 408 Request Time-out

Just a comment... (0, Redundant)

pro-mpd (412123) | more than 9 years ago | (#12061096)

Aww, already /.'ed... and is saying timeout... :(

Mirrordot (4, Insightful)

Hachey (809077) | more than 9 years ago | (#12061120)

It's about time there was a system to automatically put submitted links through MirrorDot [] . This is a prime example: /.ed before 10 comments were posted. Sheesh.


That's cool, but... (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#12061097)

Linux is cool for a free OS, but let's be honest.

If you want to be productive, you gotta go Microsoft. End of story.

They ought to autopackage themselves a mirror (1)

MsGeek (162936) | more than 9 years ago | (#12061099)

Their server is now in smoking ruins.

Coralized link (0)

Anonymous Coward | more than 9 years ago | (#12061100)

Aren't there enough (2, Insightful)

dg41 (743918) | more than 9 years ago | (#12061106)

Aren't there enough package/software installation formats for Linux that aren't being used enough as it is?

Re:Aren't there enough (4, Informative)

FooBarWidget (556006) | more than 9 years ago | (#12061177)

I knew someone would say that. Read our FAQ. Mike posted a part of it below. Please read our website for the full FAQ once the Slashdotting is over.
And we'll be available in #autopackage if you have questions.

Re:Aren't there enough (4, Informative) (653730) | more than 9 years ago | (#12061241)

Yes, but that's not the main problem IMO. Package formats are often thought of as the number 1 offender when it comes to inter-distro compatibility, but they're not the main problem. The main problem is that package maintenance happens at distro level instead of developer level. What we need is to move as much maintenance work as possible to the developers. It's the number 1 cause of problems: a package in Debian may require "libfoo", but in Fedora it may depend on "foo" or "foo+randomstring". If all those things were done at developer level, they'd be more coherent, and inter-distro compatibility would be greater. Package format would be irrelevant.

And this is why autopackage is great. It doesn't try to replace apt/rpm/gentoo; it just tries to be a good way of distributing software, and encourages developers to create their own "autopackage packages" so they work in every distro.

The purpose of autopackage (5, Informative)

FooBarWidget (556006) | more than 9 years ago | (#12061107)

No doubt lots of people will have all kinds of questions about autopackage, such as:
- "What idiots!! Another packaging format is the last thing we need!"
- "What's wrong with apt-get?"
- "Everybody should use Gentoo!"

Slashdotters are highly encouraged to read the autopackage FAQ [] ! Our project has existed for over 2 years now, and many people have asked us those questions. In short: autopackage is not meant to replace RPM/DEB/apt-get/etc.

If you have more questions, feel free to come over to #autopackage on
We'll be glad to answer your questions

Re:The purpose of autopackage (1)

Dave2 Wickham (600202) | more than 9 years ago | (#12061122)

#autopackage probably being the best bet ATM, with sunsite being slashdotted.

Re:The purpose of autopackage (1)

FooBarWidget (556006) | more than 9 years ago | (#12061129)

It seems our servers can't withstand slashdotting. Mike posted the FAQ below. Scroll down to read it.

Please let non-root people install (5, Insightful)

Anonymous Coward | more than 9 years ago | (#12061175)

The only thing I'd like to see in a package manager is to allow non-root users to install software (perhaps under $HOME; perhaps under /usr/local if they're members of the group local).

It's absurd that you need to enter a root password to do something as simple as install a user-space program - and it's absurd that package managers only support dependency checking for stuff installed in the main system directories.

At work, the main directories (/usr, /bin, etc.) can only be accessed by the IT guys; but every department has a directory ("/usr/department/engineering", for example) that members of that group can install software in. We have a newer version of Perl in ours. It really sucks that package managers can't help deal with the dependencies in an environment like this.
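The traditional workaround for the $HOME case is a private prefix; a minimal sketch for autotools-style software (the `.local` location matches what autopackage's demo uses, but any directory works):

```shell
# A user-level install prefix: no root required.
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin" "$PREFIX/lib" "$PREFIX/share"

# A typical source install into that prefix would be:
#   ./configure --prefix="$PREFIX" && make && make install

# Make the results usable from your shell:
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

The dependency-checking half of the wish is exactly what a prefix alone can't give you, which is the poster's point.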

Re:Please let non-root people install (1)

Dave2 Wickham (600202) | more than 9 years ago | (#12061184)

Non-root installs are easily done in autopackage.

Re:Please let non-root people install (4, Informative)

stefanlasiewski (63134) | more than 9 years ago | (#12061219)

The only thing I'd like to see in a package manager is to allow non-root users to install software

Check out the Flash demo. They actually demonstrate that capability: the application installs packages under $HOME/.local/lib, $HOME/.local/share, etc. I dislike cluttering $HOME with $HOME/lib, $HOME/share, $HOME/man, etc. -- installing these pieces under a dot directory to keep them hidden and organized is a great idea.

But I am also curious about allowing groups to install software under /usr/local or /usr/department .
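The group case is usually handled with a setgid, group-writable directory; a sketch using a scratch path as a stand-in for the real /usr/department location:

```shell
# A shared install prefix for a department group (the path here is a
# temporary stand-in; in practice it would be /usr/department/engineering
# and you would also chgrp it to the department's group).
DEPT_DIR="${TMPDIR:-/tmp}/department-engineering"
mkdir -p "$DEPT_DIR"

# 2775 = group-writable, plus the setgid bit: files created inside
# inherit the directory's group, so everyone in the group can manage them.
chmod 2775 "$DEPT_DIR"
```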

Mirror of the autopackage website (4, Informative)

FooBarWidget (556006) | more than 9 years ago | (#12061197)

Mike set up a mirror of the autopackage website! It's here [] . The FAQ is also available on that mirror.

Re:The purpose of autopackage (5, Funny)

Screaming Lunatic (526975) | more than 9 years ago | (#12061225)

All Your Packages Are Belong To Us []

All Your Bandwidth Are Belong To Us.

apt-get? (0, Redundant)

FlashBuster3000 (319616) | more than 9 years ago | (#12061108)

Nice idea, but how is this different from apt-get and its various front-ends?
Seems to me like the exact same thing, except that apt-get is proven and widely used.
They don't expect someone to switch to autopackage when so many distros didn't change from rpm to apt, do they?

Re:apt-get? (1)

Dave2 Wickham (600202) | more than 9 years ago | (#12061152)

Autopackage is targeted at end-user installation, not core distro packages; the intention is that the core is still managed with RPM/Deb et al. Read the FAQ posted elsewhere in the topic for more info.

Re:apt-get? (1)

FooBarWidget (556006) | more than 9 years ago | (#12061154)

autopackage is definitely not the same as apt-get. Please read our FAQ. Mike posted it below. You may also want to read the website when it's up.

Kinda sluggish (0)

Anonymous Coward | more than 9 years ago | (#12061112)

I'm not going to wait that long to install a simple package.

Some FAQ entries (4, Informative)

IamTheRealMike (537420) | more than 9 years ago | (#12061115)

So much for sunsite having plenty of bandwidth and fast servers! Well, here are some select pieces from the FAQ:

# What is autopackage?

For users: it makes software installation on Linux easier. If a project provides an autopackage, you know it can work on your distribution. You know it'll integrate nicely with your desktop and you know it'll be up to date, because it's provided by the software developers themselves. You don't have to choose which distro you run based on how many packages are available.

For developers: it's software that lets you create binary packages for Linux that will install on any distribution, can automatically resolve dependencies and can be installed using multiple front ends, for instance from the command line or from a graphical interface. It lets you get your software to your users quicker, easier and more reliably. It immediately increases your userbase by allowing people with no native package to run your software within seconds.
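For the command-line front end, installing an autopackage is meant to be about this simple (the filename is illustrative; .package files are self-running installer scripts):

```shell
# Fetch a .package from the project's site, mark it executable, run it.
# Run as a normal user it installs under $HOME; run as root, system-wide.
chmod +x inkscape-0.41.package
./inkscape-0.41.package
```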

# Is autopackage meant to replace RPM?

No. RPM is good at managing the core software of a distro. It's fast, well understood and supports features like prepatching of sources. What RPM is not good at is non-core packages, i.e. programs available from the net, from commercial vendors, magazine coverdisks and so on. This is the area that autopackage tackles. Although in theory it'd be possible to build a distro based around it, in reality such a solution would be very suboptimal, as we sacrifice speed for flexibility and distro neutrality. For instance, it can take several seconds to verify the presence of all required dependencies, something that RPM can do far quicker.

# Why a new format? Why do the RPMs I find on the net today only work on one distro?

There are a number of reasons, some obvious, some not so obvious. Let's take them one at a time:

  • Dependency metadata: RPMs can have several types of dependencies, the most common being file deps and package deps. In file deps, the package depends on some other package providing that file. Depending on /bin/bash for a shell script is easy, as that file is in the same location with the same name on all systems. Other dependencies are not so simple: there is no file that reliably expresses the dependency, or the file could be in multiple locations. That means package dependencies are sometimes preferred. Unfortunately, there is no standard for naming packages, and distros give them different names, as well as splitting them into different-sized pieces. Because of that, dependency information often has to be expressed in a distro-dependent way.

  • RPM features: because RPM is, at the end of the day, a tool to help distro makers, they sometimes add new macros and features to it and then use them in their specfiles. People want proper integration of course, so they use Mandrake specific macros or whatever, and then that RPM won't work properly on other distros.

  • Binary portability: This one affects all binary packaging systems. A more detailed explanation of the problems faced can be found in Chapter 7 [] of the developer guide.

  • Bad interactions with source code: because the current versions of RPM don't check the system directly, they only check a database, it makes it hard to install them on distros like Gentoo even when they only use file deps. Similar problems arise if you compile things from source.
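The naming problem in the first bullet is easy to demonstrate: the same glib dependency goes by different names per distro (the package names below are typical examples, not a complete map):

```shell
# Debian-style name for the glib 2.x runtime:
apt-get install libglib2.0-0

# Red Hat / Fedora name for the same library:
rpm -q glib2

# File deps sidestep naming, when a suitable file exists to depend on:
rpm -q --whatprovides /usr/lib/
```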

There are various reasons why a new format was created rather than changing RPM. The first is that in order to do dependency checking in a distribution-neutral fashion, a completely new dependency model was required. RPM expresses dependencies only in terms of other RPMs (and the same is true of all current package managers). An expressed dependency is usually given by a name or file location, but both of these things vary - usually pointlessly - between distributions. Worse, a dependency does not encode any information on where to find it or how to install it if it's missing: to get these features, databases with all possible package names must be provided externally. If a name is found in a package but not in a database, you get the familiar "E: Broken Packages" error from apt. It is said that the package universe lacks closure. But this is a natural and common occurrence that any robust installer should be able to deal with cleanly.

Another reason was that when autopackage was designed, distributions and desktops differed greatly in how they dealt with things like menus and file associations. It's been three years now, and this is still the case. While the situation is looking up, and we should soon have robust and well designed standards for these facilities, it will be years until they have been fully rolled out. Also, there are still many things that are not yet standardised, like init scripts, documentation/help systems and so on.

To deal with this, it was felt that an API based approach rather than a table based approach was the way forward. New APIs could be added easily to deal with cases like installing a menu item, which can be highly complex. Quirks in the distributions could be easily dealt with as the full power of bash scripting was available at every stage, so decisions like "should this file be installed here or there" can be answered on the fly.
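A sketch of what that API-based idea means in practice: install logic is a small bash script calling helper functions, so layout decisions happen at run time. The function names below are hypothetical stand-ins, NOT autopackage's real API, and are stubbed so the sketch actually runs:

```shell
# Hypothetical install-script API, stubbed with echo so this is runnable:
copyFiles()       { echo "copy: $*"; }   # stub: would copy files into place
installMenuItem() { echo "menu: $*"; }   # stub: would register a menu entry

PREFIX="${PREFIX:-$HOME/.local}"

# Full bash is available, so the "package" decides where things go
# on the fly rather than from a static table:
if [ -d /usr/share/applications ] && [ -w /usr/share/applications ]; then
    MENU_DIR=/usr/share/applications
else
    MENU_DIR="$HOME/.local/share/applications"
fi

copyFiles bin/myapp "$PREFIX/bin"
installMenuItem myapp.desktop "$MENU_DIR"
```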

Finally, there was a psychological reason. While RPMs can theoretically be multi-distro, in practice they never are. In practice, people do not know how to build multi-distro RPMs so on virtually all open source project websites you have different RPMs for different versions of each distribution. Worse, much software is not designed with binary portability in mind. Features are enabled/disabled at compile time instead of runtime, paths are hard coded into the binaries.

Creating a totally new format forces people to learn about it from us, and along the way we can teach them how to build robust and portable binaries using the tools we have produced, like apbuild which deals with the baroque glibc/gcc combination, relaytool which makes it trivial to adapt to missing libraries/library features at runtime, and binreloc which lets you easily make a typical autotools based project binary relocatable (installable to any prefix) at runtime. By using a new format, we ram home the point that this is different to what has come before, and requires modifications to the source, thereby making autopackages even more reliable and useful.
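Using those tools with a typical autotools project looks something like this (apgcc is apbuild's gcc wrapper; exact flags may differ by version, so treat this as a sketch):

```shell
# Build with apbuild's compiler wrappers so the resulting binary avoids
# unnecessary glibc/gcc version dependencies:
CC=apgcc CXX=apg++ ./configure --prefix=/usr
make
```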

Re:Some FAQ entries (5, Informative)

IamTheRealMike (537420) | more than 9 years ago | (#12061167)

Why bother?

# What's wrong with centralized repositories, apt style?

The system of attempting to package everything the user of the distro might ever want is not scalable. By not scalable, we mean the way in which packages are created and stored in a central location, usually by separate people to those who made the software in the first place. There are several problems with this approach:

  • Centralisation introduces lag between upstream releases and actually being able to install them, sometimes measured in months or years.
  • Packaging as separate from development tends to introduce obscure bugs caused by packagers not always fully understanding what it is they're packaging. It makes no more sense than UI design or artwork being done by the distribution.
  • Distro developers end up duplicating effort on a massive scale. 20 distros == the same software packaged 20 times == 20 times the chance a user will receive a buggy package. Broken packages are not rare: see Wine, which has a huge number of incorrect packages in circulation, or Mono, which suffers from undesired distro packaging.
  • apt et al require extremely well controlled repos, otherwise they can get confused and ask users to provide solutions manually: this requires an understanding of the technology that we can't expect users to have.
  • It's very hard to avoid the "shopping mall" type of user interface, at which point choice becomes unmanageably large: see Synaptic for a pathological example of this. Better UIs are possible, but you're covering up a programmatic model which doesn't match the user model, especially for migrants.
  • It pushes the "appliance" line of thinking, where a distro is not a platform on which third parties can build with a strong commitment to stability, but merely an appliance: a collection of bits that happen to work together but may not tomorrow. You can use what's on the CDs, but extend or modify it and you void the warranty. Appliance distros have their place: live demo CDs, router distros, maybe even server distros, but not desktops. To compete with Windows for mindshare and acceptance we must be a platform.

# What's wrong with NeXT/MacOSX style appfolders?

One of the more memorable features of NeXT based systems like MacOS X or GNUstep is that applications do not have installers, but are contained within a single "appfolder", a special type of directory that contains everything the application needs. To install apps, you just drag them into a special Applications folder. To uninstall, drag them to the trash can. This is a beguilingly easy way of managing software, and it's a common conception that Linux should also adopt this mechanism. I'd like to explain why this isn't the approach that autopackage takes to software management.

The first reason is the lack of dependency management. Because you are simply moving folders around, there is no logic involved, so you cannot check for your app's dependencies. Most operating systems are made up of many different components that work together to make the computer work. Linux is no different, but due to the way in which it was developed, Linux has far more components and is far more "pluggable" than most other platforms. As such, the number of discrete components that must be managed is huge. Linux differs from what came before not only in this respect, but also because virtually all the components are freely available on the internet. Because of this, software often has large numbers of dependencies which must be satisfied for it to work correctly. Even simple programs often make use of many shared libraries in order to make them more efficient, and to make the developers' lives easier.

See the question "Why not statically link everything?" for more discussion about dependencies. Appfolders have a number of other disadvantages as well, even if you ignore the loss of efficiency they entail. One obvious one is that there is no uninstall logic either, so the app never gets a chance to remove any config files it placed on the system. Apps can't modify the system in any way, except when they are first run - but what if you need to modify the system in order to run the app? Another is that the app menus are largely determined by filing system structures. This means that it's hard to have separate menus for each user/different desktop environments without huge numbers of (manually maintained) symlinks.

Finally, the most straightforward implementation of things like file associations has serious security problems, as the broken-by-design DMG exploits on MacOS X have shown. Unfortunately, solving these kinds of problems will always be difficult because they run to the core of the system's design.

# Why not statically link everything?

A commonly proposed solution to things being hard to install on Linux is to eliminate the biggest problem - dependencies - by not having any dependencies at all. This solution is typically not viable:

  • Disk space may be cheap, but memory pressure is still a significant bottleneck on most desktop systems. Static linking prevents page-level sharing which is easily the biggest win in terms of memory usage. Without extensive use of dynamic linking, desktop Linux would slow to a crawl.

  • Large numbers of internet users (about 50% seems to be the best estimate) are still on dialup connections. For these people the difference between a 12 MB download and a 3 MB download can make the difference between trying an app and not trying it, or worse: downloading a security update or not.

  • Security updates are less effective because each app has a private copy of the same code.

That said, static linking is not necessarily evil. Dependencies do have a cost, and should be treated with care. For rare or very new libraries, static linking may be the best way to keep things simple for the user especially if the code is small. For very unstable libraries, you basically have no choice, as the version you need will often not be available. The apbuild tool makes statically linking much easier than it would otherwise be for occasions when it is deemed that the benefit outweighs the cost.

# What's a desktop Linux platform? Why do we need one?

Essentially, software is easy to install on Windows and MacOS not because of some fancy technology they have that we don't - arguably Linux is the most advanced OS ever seen with respect to package management technology - but rather because by depending on "Windows 2000 or above" developers get a huge chunk of functionality guaranteed to be present, and it's guaranteed to be stable.

In contrast, on Linux you cannot depend on anything apart from the kernel and glibc. Beyond that you must explicitly specify everything, which restricts what you can use in your app quite significantly. This is especially true because open source developers often depend on new versions of libraries that have been out for perhaps only a few months, putting Linux users on a constant upgrade treadmill.

Worse, sometimes the only way of easily upgrading a particular component like GTK+ is to upgrade your entire distribution which may be hard, especially for dialup users (the norm in third world countries). The problem is especially big because many distros don't install obsolete library versions by default, relegating them to compatibility packages the user must know about and request explicitly.

A desktop Linux platform would help, because instead of enumerating the dependencies of an app each time (dependencies that may be missing on the user's system), application authors can simply say that their program requires "Desktop Linux v1.0" - a distribution that provides platform v1.3 is guaranteed to satisfy that, because platform versions are backwards compatible. Support for a given platform version implies that a large number of libraries are installed.

The platform would update every year, meaning that for applications that were Desktop Linux compatible (or whatever branding was used), there would be a worst-case upgrade lifespan of 12 months.

A platform is more than just providing a bunch of packages. It implies a commitment to stability - that's why it's called a platform. Would you want to build your house on sand? No? Neither do application developers, and this is why a platform is so important.

A fair amount of work is required to make this happen, and it's also quite political. You can read more thoughts on this in the NOTES file [] .

I don't know about this (1, Insightful)

theantix (466036) | more than 9 years ago | (#12061195)

To me it seems like anything that makes it easy for users to install random software off the internet is a REALLY BAD THING. Using Linux right now is a pleasure because if you're using Gentoo, Ubuntu, Fedora, Mandrake, etc... you get all your software packaged for you by your distribution. No spyware, no viruses, no adware, no shareware. It all generally works, dependencies are sorted out and managed, it's all a really good system.

Encouraging users to install Comet Cursors for Linux seems to me like a huge step backwards for Linux. I sincerely hope that distributions do not support this or any other system like this one to promote good computing practises and avoid the sorts of problems that plague Windows users. Why do we want to emulate what has been proven to be a terrible way of distributing and using software?

Re:I don't know about this (1)

TheSunborn (68004) | more than 9 years ago | (#12061231)

But what do you do if the software you need is not included with your distro?

testing, unstable, experimental and (2, Informative)

free2 (851653) | more than 9 years ago | (#12061299)

But what do you do if the software you need is not included with your distro?
I usually don't have this problem with Debian (especially with the huge testing and unstable distros), and even the Debian experimental repository is signed.

But if I did, I would look for some known Debian maintainer's unofficial packages in .

Where does everything get autopackaged to? (2, Insightful)

Wavicle (181176) | more than 9 years ago | (#12061119)

The biggest problem with Linux distributions is that different distributions have different ideas about where things should go: does this file go in /sbin, /usr/sbin, /usr/bin/, /usr/local/bin, or somewhere else? Where do the configuration settings go? /home/*/.config? /etc/profile?

So, does this address the problem? Most software makers would really like to be able to release ONE package for their software and know that it will end up somewhere sensible.

I know we all love to bash Microsoft, BUT, I have rarely seen an installation problem with software written for Windows.

Installation for Windows (2, Insightful)

karmaflux (148909) | more than 9 years ago | (#12061145)

Right, installing shit is great on Windows. The suicide hotlines start overloading when you try to remove software.

Re:Where does everything get autopackaged to? (5, Insightful)

Abcd1234 (188840) | more than 9 years ago | (#12061150)

Umm, that's what the Linux Standard Base [] is for. Blame the distro makers and packagers for not following it. After all, the LSB has been out for a *long* time...

Re:Where does everything get autopackaged to? (1)

IamTheRealMike (537420) | more than 9 years ago | (#12061251)

The LSB really isn't big enough for any non-trivial desktop app, and it shows no signs of growing to meet that challenge. There'll probably have to be a different "standard" (yeah yeah, I know) base set, one that builds on LSB but extends it.

Re:Where does everything get autopackaged to? (1)

evvk (247017) | more than 9 years ago | (#12061165)

Who cares about M$? Installing software on Windows is a lot of work, having to manually leech (or *shudder* buy) the software and click through dozens of dialogs. apt-get is easy, and I have seldom had any problems with it.

Which brings me to the question: why would any free software author bother with autopackage when all decent distros provide mechanisms such as apt-get? And what about non-x86 architectures? In my view autopackage support is _wasted time_. Any self-respecting user of my software already uses Debian, something based on it, or another distro with a good package collection. Why would they want to use something less convenient? Non-free software OTOH is irrelevant.

Re:Where does everything get autopackaged to? (2, Interesting)

mp3phish (747341) | more than 9 years ago | (#12061206)

Because maybe your package is a small project not yet picked up by the distributions. Are you as the maintainer going to package it for debian, mandrake, redhat, and suse? Or would you rather convert your tarballs into autopackages?

I think if I were an upstart package, and nobody were packaging me for their distro, I would want to be converted to an autopackage.

Re:Where does everything get autopackaged to? (4, Informative)

FooBarWidget (556006) | more than 9 years ago | (#12061215)

By default, autopackage either installs to /usr or to $HOME (depending on what the user wants). If you really want it to use another prefix, you still can. The reason we use /usr instead of /usr/local is because there are many broken distributions that don't set up paths for /usr/local correctly. And /usr is the standard for many/most distributions.

There's a mirror of the autopackage website's information here [] .

Re:Where does everything get autopackaged to? (1)

mp3phish (747341) | more than 9 years ago | (#12061244)

"does this file go in /sbin, /usr/sbin, /usr/bin/, /usr/local/bin, or somewhere else?"

Well, if it is a tool root might need and might be needed in some sort of "recovery" console, then you put it in /sbin. If it is a tool that might be needed by users then you put it in /usr/bin. However, this all assumes these are distribtuion supplied files. 3rd party files not provided by your distributer's package management should be installed into /usr/local/bin. The distribution package managers stay out of /usr/local. It is reserved for the administrator to install programs into.

Config files is another story. But typically the program shouldn't install files into people's /home until that user uses that program (IMO) Otherwise keep the default files to be copied in /etc.

Where do the configuration settings go? /home/*/.config? /etc/profile?"
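The prefix split described here can be sketched with a staged install. This is a hypothetical illustration — "mytool" is made up, and a temporary staging root stands in for / so nothing touches the real system — but it shows how the prefix chosen at build time decides which tree a program lands in:

```shell
prefix=/usr/local                      # conventional home for third-party builds
destdir=$(mktemp -d)                   # staging root standing in for /
mkdir -p "$destdir$prefix/bin"
printf '#!/bin/sh\necho "hi from %s"\n' "$prefix/bin" > "$destdir$prefix/bin/mytool"
chmod +x "$destdir$prefix/bin/mytool"
"$destdir$prefix/bin/mytool"           # runs from the staged /usr/local/bin
```

A distro package would use prefix=/usr instead; the point of the convention is that the two never collide.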

Way to Slashddos the server already (0)

Anonymous Coward | more than 9 years ago | (#12061128)

Anyway, time to get outside and smell the rain clouds.

Mirror of screenshots link (-1)

Anonymous Coward | more than 9 years ago | (#12061132)

Since the site is already slowing down, here's a mirror of the content...
Everybody loves screenshots, and we're no exception, so here we are! Enjoy...

Flash movie demo
2.7 Mb flash demo of an install session []

Commandline frontend: [Screenshot]

GTK+ frontend: [Screenshot]

QT frontend: [Screenshot]

QT Manager: [Screenshot]

GTK+ Manager: [Screenshot]

There are some older screenshots here. []
You can thank me later.

Re:Mirror of screenshots link (0)

Anonymous Coward | more than 9 years ago | (#12061204)

yeah, thanks for nothing asshole.

Yes, we need this!! (4, Insightful)

rice_burners_suck (243660) | more than 9 years ago | (#12061134)

There aren't many replies to this story yet, but I can already see it: Lots of people are going to complain, "Why the fsck do we need yet another packaging solution?!?! We already have rpm, deb, tgz, blah blah blah..."

The reason is that most of these packaging solutions, while great for developers and those who want detailed knowledge of the inner workings of their systems, simply suck when given to mortal users.

And they don't handle a number of edge cases too well... What if you want different versions of some software to coexist on the same system? What if you want ten different versions of a library? Yes, these can all be handled by current stuff... but not very well. It's bad enough that when we install software here, we actually get the rpms or whatever and then re-package them ourselves to serve our needs.

A packaging solution that actually works is desperately needed. (0)

Anonymous Coward | more than 9 years ago | (#12061137)

pkgsrc is better

Filled with these fun extras (0)

Anonymous Coward | more than 9 years ago | (#12061138)

Yes, but does it come with stopwatch?

Wrong Paradigm (5, Insightful)

user9918277462 (834092) | more than 9 years ago | (#12061139)

I've said it before and I'll say it again: The Windows model of acquiring and running software from a large number of random third parties is broken. It is fundamentally unsafe and, frankly, archaic in 2005. We do not trade 5.25" floppy disks with BASIC games on them, and we certainly shouldn't be downloading self-extracting installers from sketchy websites anymore, regardless of OS.

The current Linux model of distros integrating and authenticating software from upstream authors helps ensure the security of the userbase as well as providing installation ease of use. This is something we should be proud of rather than trying to imitate the technically inferior competition.

Re:Wrong Paradigm (1)

Abcd1234 (188840) | more than 9 years ago | (#12061168)

Wow... now *that* is an interesting point I'd never considered before. No, I have nothing else to add to this conversation, other than, well put. :)

Re:Wrong Paradigm (1)

bersl2 (689221) | more than 9 years ago | (#12061174)

Without a bridge, how do we get even a portion of the change-fearing masses?

Re:Wrong Paradigm (4, Insightful)

Steven Edwards (677151) | more than 9 years ago | (#12061203)

If the Windows Paradigm was broken people would not use Windows. Yes there are some things about Windows that suck but MSI and InstallShield installers are not an example. Windows security in most regards does suck but packaging is one of the few things Windows does right. You do know you can sign a package in Windows right? Vendor certificates work, just install any packages from Microsoft or from any other major third party vendor.

I guess you would only be happy if we just pulled everything down from SVN/CVS and built from source.

Re:Wrong Paradigm (2, Insightful)

schon (31600) | more than 9 years ago | (#12061292)

If the Windows Paradigm was broken people would not use Windows.

Yeah, just like if the ActiveX-plugin paradigm was broken, nobody would use IE, right?

Most users have *no* clue if a piece of software is designed incorrectly or not, it has exactly zero bearing on whether the masses use a particular piece of software or not.

Re:Wrong Paradigm (1)

evvk (247017) | more than 9 years ago | (#12061294)

> Yes there are some things about Windows that suck but MSI and InstallShield installers are not a example.

Yes they are. apt-get install is infinitely easier and more convenient.

Re:Wrong Paradigm (1)

t_allardyce (48447) | more than 9 years ago | (#12061212)

Linux is all about choice and like it or not most Windows users are stuck in this method and if they are going to switch to linux they'll want it to be just as easy. When i want some sort of program ill want it now, i don't want to waste time reading man pages, finding dependencies, watching builds or copying and pasting errors into google. Sure i could be downloading any kind of trojan program from a random website, it could delete all my data or try and phone home with my passwords but hell, i live on the edge. Production servers are of course a different matter. Personally i love the OSX way of just dragging an icon out of an archive and into your apps directory. The ideal way is to have distros do all the work like you say, but distros will never have every program - theres always something im gonna want to install that doesn't come in the package or binary that i need, one big autopackaging system is ideal.

Re:Wrong Paradigm (5, Insightful)

karmaflux (148909) | more than 9 years ago | (#12061223)

Bittorrent calls you a liar, buddy. We trade 5.25" floppies in a metaphorical sense constantly. When I develop a program that takes random input and outputs Frank & Ernest cartoons, I don't want to have to wait for some Board of Linux Usage Oversight to give my 5k perl script the Stamp of Approval.

Nobody's trying to copy the Windows paradigm with autopackage. What they're trying to do is break down that barrier to cross-distribution software releasing. Your average desktop user does not want to compile software. Dropping to a terminal, cd pathtoapp, tar -zxvf whatever.tar.gz, cd newpath, ./configure; make; make install is too much shit for a user -- and then how to uninstall? Keep the source directory there forever?

"If they can't compile they should run Windows" is a stupid, backwards attitude, and autopackage is trying to fix it. Relying on upstream content providers is dangerous -- what happens when you disagree with your upstream provider? You have to switch distributions? Pat recently dropped Gnome support for Slackware -- I still run gnome. I do it with a third-party package from dropline. Is that broken? No.

The way to fix the problems you describe is to educate users, not to remove their usage privileges. Teach people not to install untrusted software -- and teach them how to tell what software to trust! Don't just slap their hand and yell NO.
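The source-install dance the parent describes, and the uninstall problem that follows it, can be sketched with a toy Makefile. Everything here is a throwaway demo (made-up tool name and a /tmp prefix); the point is that "make uninstall" only works while the source tree and its Makefile still exist — and only if the author bothered to write that target at all:

```shell
demo=$(mktemp -d) && cd "$demo"
printf '#!/bin/sh\necho "tool ran"\n' > tool.sh
# A minimal Makefile with install and uninstall targets (recipe lines need tabs).
printf 'prefix = /tmp/ap-demo-prefix\ninstall:\n\tmkdir -p $(prefix)/bin\n\tcp tool.sh $(prefix)/bin/tool\n\tchmod +x $(prefix)/bin/tool\nuninstall:\n\trm -f $(prefix)/bin/tool\n' > Makefile
make install >/dev/null
/tmp/ap-demo-prefix/bin/tool            # installed and working
make uninstall >/dev/null               # impossible once this source dir is gone
test ! -e /tmp/ap-demo-prefix/bin/tool && echo "removed"
```

A package manager solves this by recording the installed file list itself, so removal never depends on keeping the build directory around.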

Re:Wrong Paradigm (2, Interesting)

isaks (871187) | more than 9 years ago | (#12061239)

If the package is a self extracting installer or a rpm/deb/whatever doesn't make the slightest difference. It all boils down to if you trust the author of the package or not. A major difference with autopackage wrt rpm/deb/whatever is that the upstream software author creates the package, not random packager joe. If you don't trust the author, you shouldn't be using the software, regardless of package format!

Re:Wrong Paradigm (0)

Anonymous Coward | more than 9 years ago | (#12061297)

You miss the point completely.

Namely, that it is essentially impossible for the average computer user to trust even a small fraction of the third parties responsible for all of the software on her system. Even with modern cryptography thrown into the mix, the overhead associated with collecting public keys, authenticating them through a web of trust and then verifying each and every package is enormous. If you expect average Joe Office Worker or Aunt Tillie to go through all that you are not living in reality.

The only solution is to automate the whole process, which several distributions have either done or are in the process of completing. You seem to be arguing that you implicitly trust every person you believe you are downloading software from. That's fine, I'm not so cavalier with my machines.

Mod parent up! (0)

Anonymous Coward | more than 9 years ago | (#12061272)

>>The current Linux model of distros integrating and authenticating software from upstream authors helps ensure the security >>

Well said.

I can update and upgrade my whole Debian system without worrying about malware because I know the contributors have to be trusted in order to be given upload privileges.

Isn't this why sooooo many people use ALL Microsoft applications and hardware? What's the difference?

Re:Wrong Paradigm (1)

IamTheRealMike (537420) | more than 9 years ago | (#12061291)

I'm afraid your assumption that distributions "authenticate" software is flawed. Hiding a back door in software of any reasonable complexity is not hard, if you're so determined, and no distribution can protect you from that. Think about it: who in their right mind is going to wade through all 100,000 lines of configure script looking for back doors? Not going to happen.

Now what you may mean is spyware, malware etc. But this thinking is also flawed. Spyware and such is designed to make money by piggybacking on proprietary software. Open source software has no need of revenue to keep it alive, therefore no need of spyware - but proprietary software cannot enter most distro repositories because most distros do not distribute commercial software. Therefore they would never get a chance to filter things out anyway.

If you want to filter out malicious software in the decentralised model, you need to do several things:

  • Define a policy for what is and is not allowed
  • Create a certificate authority hierarchy SSL-style. It should be easy to get a package signing certificate: however, it should also be easy to get kicked off if you violate network policy.
  • Have an auto-update mechanism that downloads whitelists of signing keys: if you aren't in the whitelist, the system will give you big honking warnings.

The danger here is that people try to audit code before giving out certificates. That's clearly bong: you can't do that reliably or scalably, and anyway, waiting until somebody has violated the network policy and then kicking them off rapidly has much the same effect: the OS stays free of "bad" software.
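The whitelist check at the heart of this scheme is a one-liner. A minimal sketch, assuming the whitelist has already been fetched — the fingerprints here are made up, and a real implementation would verify the package's signature cryptographically before looking up the key:

```shell
whitelist="AAAA1111 BBBB2222"          # would be auto-downloaded and kept fresh
package_key="BBBB2222"                 # extracted from the package's signature
case " $whitelist " in
  *" $package_key "*) echo "signer OK" ;;
  *)                  echo "WARNING: unknown signer" ;;
esac
```

Note the policy choice: an unknown signer produces a big honking warning, not a hard refusal, which matches the decentralised model described above.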

Mirror (1, Informative)

anandpur (303114) | more than 9 years ago | (#12061144)

Autopackage []

For more information on autopackage... (5, Insightful)

mp3phish (747341) | more than 9 years ago | (#12061148)

I have been following autopackage for a while now. It looks promising. This release will be the test to see if anybody will take it seriously (I hope so). Autopackage brings some really cool features to the table:
  • Frontends to different windowing and desktop systems.
  • Able to resolve dependencies even if you installed other software from source, or with RPM or DEB
  • You will be able to download one package and install it on several different distributions.
Essentially, this will be as flexible as tarballs, only they will install easily, and have clean upgrade paths and uninstall paths. With clean dependency resolution. It sounds too good to be true, but you can only know it if you try it.

Here is the sourceforge link [] with some more info and downloading.

This is a great thing for Linux (2, Insightful)

TheMadPenguin (662390) | more than 9 years ago | (#12061155)

Seriously. I had envisioned something similar [] last year but this really takes the cake, so to speak. I have yet to try Autopackage, but after seeing this, I'm sold. Especially if it works as intended. Cross-distro package management is what we need. Sure, DEB, RPM, TGZ, etc etc are all excellent in their own right, but being able to install packages across multiple distros is what we really need. I for one am impressed. Of course there are a few technical details that I need to learn about as far as cross-distro packaging goes, but it's a step in the right direction in any event.

Mirrordot got it! (1)

djsmiley (752149) | more than 9 years ago | (#12061170) e4822a1ed01/index.html

Autopackage Rocks (0)

Anonymous Coward | more than 9 years ago | (#12061176)

This project is essential for Linux's adoption and that's why I hope many distributions will ship with it. It's fantastic!

Be like OSX (0, Insightful)

Anonymous Coward | more than 9 years ago | (#12061187)

Maybe this packaging system works this way (I can't rtfm so I don't know), but one of the things that is easier in OSX than in Linux (especially for the newbie) is that installing software is usually as simple as dragging the application (which is actually a directory containing all of the application's files) to your "Applications" folder (or where-ever else you want to put it).

I do like apt as well, but I've also had some apt-nightmares trying to sort out messed-up dependencies on my debian box.

Re:Be like OSX (3, Insightful)

MsGeek (162936) | more than 9 years ago | (#12061267)

Here's how you fix dependencies in Debian:

#apt-get update
#apt-get dist-upgrade

Badabingbadabangbadaboom. It's done. Happy days are here again.

It would be nice... (1)

haeger (85819) | more than 9 years ago | (#12061222)

...if one could just make one package and have it install on any distribution. It seems that the page is slashdotted at the moment so I can't see anything but the frontpage. :-/


Choice is Important! (1)

zerojoker (812874) | more than 9 years ago | (#12061227)

I always thought that having the choice is one of the big advantages of open source: Competition creates better software! Now they want to create one package format for all distros? Time to make a fork! ;-) Seriously, I hope that developers start to pick it up asap...

mirrordot mirrors of article and images (2, Informative)

Nallep (700113) | more than 9 years ago | (#12061236)

Mirrordot of Flash (5, Informative)

vectorian798 (792613) | more than 9 years ago | (#12061245)

Flash Demo [] Screenshots []

I have to say this is like a godsend for Linux. Most lay users will want some easy installation like this instead of using something like Yum (even if it is a GUI front-end to yum like GYUM). This is one giant step towards a viable desktop Linux - and I believe that it isn't a replacement for apt/yum/[INSERT YOUR FLAVOR HERE] but uses them under the hood.

Before everyone starts bashing it and saying that apt or emerge or whatever they use is the way to go, seriously think about it - one-click installation, from a FRIENDLY user interface, and an easy-to-manage system for installing and uninstalling programs. Now if this were part of the base install on many distributions and some sort of standard were established (seriously, we need standards) I could probably convince my scared-of-Linux-because-it-is-hardcore friends to actually try Linux out.

One Autopackage... (3, Funny)

dotslashdot (694478) | more than 9 years ago | (#12061253)

One format to rule them all, One format to find them One format to bring them all and in the package bind them...

Apt-get is not bad either. (0)

jack_canada (786305) | more than 9 years ago | (#12061259)

Apt-get the download, installation and resolves your dependencies for you, I think it's the best package management system fro Linux so far. The graphical front end "Synaptic" is also really good, it's intuitive and easy to use. Emerge is not bad either but you wouldn't want to go through the long hours of compiling from source.

Also, Autopackage has an O.K. interface. I'm saying this because, looking at the screenshots, it doesn't let you mark multiple items, search the database or manipulate repositories, which Synaptic lets you do. Another thing is, I don't think an interface like that would be effective when managing the 2000+ packages that I have on my Linux system.

Re:Apt-get is not bad either. (1)

jack_canada (786305) | more than 9 years ago | (#12061277)

the first line should be "Apt-get HANDLES the download, installation and resolves your dependencies for you, I think it's the best package management system fro Linux so far." I got carried away when I was typing :)./

Very nice (1)

ptarjan (593901) | more than 9 years ago | (#12061265)

How many times have you wanted to uninstall a package and then done this: rpm -qa ... and a thousand things scroll by... then you have to guess the name of it. Or: man rpm ... trying to look for the command to find out the package name that contains a specific file. Then, after finding the name, you try rpm -e "package name", and it yells at you for some crazy dependency, like gaim depending on http or something like that. No, I must say that I have wanted a nice package manager for all my extraneous packages. I prefer packages to source because they make uninstalling much easier, but autopackage seems to fill some of the voids with regular rpm packages.

Conflicts with existing package managers (3, Interesting)

jesterzog (189797) | more than 9 years ago | (#12061300)

I'm presently running Debian. I've briefly played with making my own .deb files so I'd be able to install some of my own things without necessarily completely losing track of everywhere they were scattered. With all of the extra meta files that need editing, the source packages versus binary packages, and everything else, though, the whole process of designing a .deb package looked a bit too structured and complicated for me to bother learning about... at least within the time I was prepared to spend.

If AutoPackage has a straightforward way to generate a simple package, such as from a tar.gz file, I might find it very helpful. What I'm wondering about, though, is how bad does it get when two package managers conflict? E.g. Apt and AutoPackage might end up trying to control the same file, get mixed up about which manager is providing a particular dependency (particularly if Apt tries to grab a package that AutoPackage has already installed), or whatever else.

It also sounds like a bit of extra work having two sets of commands to manage packages on one system, so ideally (in my world), I guess AutoPackage would wrap around Apt or whatever other manager, and invoke it when appropriate. Does AutoPackage just fight for control when there are conflicts, or does it integrate with other package managers nicely?

The server seems to be very slashdotted right now, so I can't do much reading up on it. Does this sort of conflict thing turn out to be much of a problem?
