
Ryan Gordon Wants To Bring Universal Binaries To Linux

timothy posted more than 4 years ago | from the grossly-obese-binaries dept.


wisesifu writes "One of the interesting features of Mac OS X is its 'universal binaries' feature, which allows a single binary file to run natively on both PowerPC and Intel x86 platforms. While this comes at the cost of a larger binary file, it's convenient for end users and for software vendors distributing their applications. Linux has lacked such support for fat binaries, but Ryan Gordon has decided this should change."

487 comments

Yippy!! (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#29863643)

Yahoo!!

Gee, just 14 years (4, Funny)

Anonymous Coward | more than 4 years ago | (#29863645)

after the demise of NeXTStep!

(c)Innovation!!(tm)(R)

Re:Gee, just 14 years (5, Informative)

TheRaven64 (641858) | more than 4 years ago | (#29863763)

GNUstep has supported cross-platform app bundles for a long time. You can include Linux binaries for various architectures, FreeBSD, Windows, and even OS X-with-Cocoa binaries in the same .app, then drag it to your platform of choice and have it work. The down side of this approach is that it consumes a bit more disk space because you have a copy of all of the data (not just the code) in every binary. The advantage is that the same bundle will work on platforms that use ELF (Linux, *BSD, Solaris), Mach-O (OS X) and PE (Windows) binaries. Given how cheap disk space is, and how trivial it is to thin a bundle like this (NeXT's ditto tool could do it, but all you really need is to delete the folders for targets other than the one you want from the bundle) it's not really a big disadvantage. Fat binaries on Linux would mean you could run the same binary on Linux/x86 and Linux/ARM, for example, but that's not exactly a massive advantage.
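A minimal sketch of that thinning step, assuming a hypothetical bundle that keeps one subdirectory per platform (real GNUstep bundles differ in their exact layout, and the "linux-$(uname -m)" naming is made up for illustration):

#!/bin/sh
# Keep only the platform directory we actually run on; delete the rest.
cd Foo.app || exit 1
keep="linux-$(uname -m)"
for d in linux-* freebsd-* windows-* macosx-*; do
    [ -d "$d" ] && [ "$d" != "$keep" ] && rm -rf -- "$d"
done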

Re:Gee, just 14 years (3, Informative)

BuR4N (512430) | more than 4 years ago | (#29863777)

NeXTSTEP isn't dead, it just got a new name when NeXT told Apple to buy them....

Re:Gee, just 14 years (2, Interesting)

Hal_Porter (817932) | more than 4 years ago | (#29863901)

Nextstep isn't really gone, it just possessed MacOS and now it walks around in its body, a bit like VMS did to Windows.

Linking problems (1)

kdawgud (915237) | more than 4 years ago | (#29863659)

Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?

Re:Linking problems (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29863681)

Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?

Not really, it'll just make bigger binaries that'll run on a system you'll never use.

Re:Linking problems (2, Interesting)

WaywardGeek (1480513) | more than 4 years ago | (#29863773)

While your reply sounds a bit like flame-bait, I basically have to agree. The format isn't a universal binary that gets translated to each machine architecture when installed. Instead, it's basically an archive of pre-compiled binaries for each platform you support. So, for example, my stupid Qt application has to be compiled separately for Fedora and Ubuntu. This technology would in theory allow me to merge the binaries into a single FatELF binary. Personally, I'd rather just provide separate .deb and .rpm files.

However, the idea of a universal binary is cool. We could do something like the old p-Code, where we compile to a virtual architecture, and then translate it to the machine during installation. I liked the idea when they had it way back in the early 80's (late 70's?), and I was sad to see we didn't have the compute power back then to make it fly. I bet we do now.

Re:Linking problems (1)

Hal_Porter (817932) | more than 4 years ago | (#29863989)

You could use Mono. Of course if you built C for CLR that would not hide differences in libraries as far as I can tell.

Or ANDF [wikipedia.org]. Then again ANDF has never been used commercially and is owned by the OSF.

Re:Linking problems (5, Insightful)

dkf (304284) | more than 4 years ago | (#29863701)

Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?

Probably not. Or not without getting headaches like you get with assemblies on Vista. Keying off the system architecture (32-bit x86 vs. 64-bit ia64) is much simpler than keying off library versions.

The fix with standard libraries is for their makers to stop screwing around and stick with ABI compatibility for a good number of years. OK, this does tend to codify some poor decisions, but it is enormously more supportive of application programmers. Note that I differentiate this from API compatibility; rebuilding against a later version of the API can result in a different - later - part of the ABI being used, and it's definitely possible to extend the ABI if structure and offset versioning is done right. But overall, it takes a lot of discipline (i.e., commitment to being a foundational library) on the part of the authors of the standard libs, and some languages make that hard (it's easier in C than in C++, for example).

Re:Linking problems (0)

Anonymous Coward | more than 4 years ago | (#29863703)

I saw this discussed somewhere else and the answer is no. The author didn't write it for that purpose.

My guess here is that this is something a lot of people really want: a way to offer a single binary package that works on every (or most) Linux setups. But it will be rejected by the people in control of Linux, because they don't need it.

Re:Linking problems (3, Funny)

martin-boundary (547041) | more than 4 years ago | (#29863719)

Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?

I think FatELF is too skinny for that. You want SantaELF, which links all those libraries statically in each binary...

We really care (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#29863671)

Seriously, we care. I mean, really. This would so make my life easier, because I run the same binaries on everything, like, uh, well, shit. Why do I care again? If it could take care of the library problem instead, this would be a "good thing"

Re:We really care (3, Informative)

argent (18001) | more than 4 years ago | (#29863789)

Read the fine website:

Benefits:

[...]
        * You no longer need to have separate /lib, /lib32, and /lib64 trees.
        * Third party packagers no longer have to publish multiple .deb/.rpm/etc for different architectures. Installers like MojoSetup benefit, too.
[...]
        * Ship a single shared library that provides bindings for a scripting language and not have to worry about whether the scripting language itself is built for the same architecture as your bindings.
        * Ship web browser plugins that work out of the box with multiple platforms.
[...]
        * No more ia32 compatibility libraries! Even if your distro doesn't make a complete set of FatELF binaries available, they can still provide it for the handful of packages you need for 99% of 32-bit apps you want to run on a 64-bit system.
[...]

Apple dropped it (0)

Anonymous Coward | more than 4 years ago | (#29863673)

Ask PPC owners that want to get the latest version of OS X.

Apple is still using it (2, Informative)

argent (18001) | more than 4 years ago | (#29863759)

Apple is still using it for x86/x86_64 fat binaries in Snow Leopard.

Re:Apple dropped it (5, Informative)

PenguSven (988769) | more than 4 years ago | (#29863809)

Ask PPC owners that want to get the latest version of OS X.

No, Apple didn't drop support for Universal Binaries. Most apps available for Mac today are universal binaries and work on PPC or Intel macs, and in some cases support PPC 32, PPC 64, Intel 32 and Intel 64. Just because a new OS doesn't support an older CPU architecture doesn't mean the functionality for Universal or "Fat" binaries is not supported.

Re:Apple dropped it (1)

Bert64 (520050) | more than 4 years ago | (#29863857)

It's theoretically possible to include ARM support too, to make a binary that would also run on an iPhone...
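For reference, the stitching on OS X is done with Apple's lipo tool; a rough sketch, assuming you already have per-architecture builds (the ARM/iPhone toolchain details are glossed over):

# Glue separately compiled binaries into one fat Mach-O file...
lipo -create myapp-i386 myapp-x86_64 myapp-ppc myapp-armv6 -output myapp
# ...and verify what ended up inside.
lipo -info myapp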

Only useful for non-free applications (5, Insightful)

dingen (958134) | more than 4 years ago | (#29863679)

If you have access to the source, you can always compile a version for your platform. The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.

Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac. Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.

Re:Only useful for non-free applications (4, Insightful)

betterunixthanunix (980855) | more than 4 years ago | (#29863725)

Not everyone is skilled enough to compile the source on their own, especially for packages that must be patched to run on certain architectures. Personally, I would think this might be useful for distro maintainers who do not want to maintain separate packages across multiple architectures, although the benefits may not outweigh the costs.

Re:Only useful for non-free applications (3, Interesting)

Sique (173459) | more than 4 years ago | (#29863771)

But... "compiling for your platform" is just another way to install software. You could wrap this in a little application (call it "setup"), where you click "Next >" several times, and as a result you have a binary for your platform.
For those who know what they are doing, there is always the "expert configuration" button.
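A bare-bones sketch of such a "setup" wrapper for a typical autotools source tree (no GUI, no "expert configuration", no error reporting - just the happy path):

#!/bin/sh
# Naive "click Next a few times" installer: build from source for whatever
# platform this happens to run on.
set -e
./configure --prefix="${PREFIX:-/usr/local}"
make
make install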

Re:Only useful for non-free applications (1)

Yvan256 (722131) | more than 4 years ago | (#29863883)

But then you need a fat binary for your little installation program.

Re:Only useful for non-free applications (1)

laederkeps (976361) | more than 4 years ago | (#29863999)

But then you need a fat binary for your little installation program.

Or just have it use some already cross-platform scripting engine which can run the same script on all supported architectures. Python comes to mind, although the GUI portion would still require some care to be widely available.

Re:Only useful for non-free applications (1)

Sique (173459) | more than 4 years ago | (#29864033)

... which is commonly known as a "shell" and normally comes precompiled for your platform.

Re:Only useful for non-free applications (0)

Anonymous Coward | more than 4 years ago | (#29863929)

"Why is this little program taking an hour to install?"

Re:Only useful for non-free applications (4, Insightful)

turbidostato (878842) | more than 4 years ago | (#29863835)

"Not everyone is skilled enough to compile the source on their own"

By "end user" we can understand here "distribution maintainer" which already has the skills to compile the source (and that's not but a part in the lot of things that have to be done in order to integrate some software in a distribution).

"I would think this might be useful for distro maintainers who do not want to maintain separate packages across multiple architectures"

But they have to: they still must build and integrate for their supported platforms, then rebuild when bugs are found or the software is upgraded, then test... It's just the last step (producing the actual binary packages) that changes, so instead of multiple packages you'd end up with a single multi-platform package. The distributor still needs (almost) as much disk space and infrastructure as before, but then each and every user ends up spending much more space on their hard disk (imagine the fat binary for, say, Debian, supporting eleven platforms).

And then, please note that this will allow single binaries for different hardware platforms but not for different compilation variants (so it won't be useful for obtaining binaries for, say, amd64 for Debian, Red Hat and SUSE).

It seems it will only benefit those that want to publish their software in binary-only form outside the framework of established distributions, and that means closed source software. Of course they can run their business however they feel is best; it's just that they don't get my sympathy, so I don't give a damn about them.

Re:Only useful for non-free applications (0, Flamebait)

bazaarsoft (911025) | more than 4 years ago | (#29863919)

Congratulations - it's this thinking that's kept Linux on the server all these years.

Re:Only useful for non-free applications (4, Insightful)

tyldis (712367) | more than 4 years ago | (#29864097)

Please elaborate.

I too agree that this is pointless for the end user on Linux, at least when it comes to free software. Only closed binary blobs will benefit, which IMHO is not something worth putting effort towards helping. They made their design choices and accepted the reality in doing so.

As for the end user, she should just use the package manager of her distro and find whatever she needs, without worrying about either compiling or platforms.
For example, in Debian/Ubuntu you could more easily package your installer to simply drop a file in /etc/apt/sources.list.d (see the sketch below). Not only will the user be able to use the package manager to install your app like any other, she will also get the security updates you publish.

Let the package system handle these things; it does them well and doesn't bloat your boat.
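Something along these lines, where the repository URL, suite and package name are all placeholders and signing-key handling is skipped:

# Hypothetical vendor repository for a hypothetical package "exampleapp".
echo "deb http://packages.example.com/debian stable main" \
  | sudo tee /etc/apt/sources.list.d/exampleapp.list
sudo apt-get update
sudo apt-get install exampleapp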

Re:Only useful for non-free applications (0, Flamebait)

MartinSchou (1360093) | more than 4 years ago | (#29864023)

It seems it will only benefit those that want to publish their software in binary-only form outside the framework of established distributions, and that means closed source software.

Really? You cannot possibly imagine that someone making a tiny niche product outside of the purview of the established distributions would want to make binaries available to people?

Well, fuck you and your narrow minded obtuseness.

If I want to build in support for x86 64, i386, Power PC and a range of other platforms to make it easy for new users to get started, why the fuck would that mean PROPRIETARY CLOSED SOURCE SOFTWARE? Or did ease of use suddenly become a closed source model only?

Re:Only useful for non-free applications (3, Interesting)

slim (1652) | more than 4 years ago | (#29864135)

I agree that fat binaries are not appropriate for applications in the distribution's archive. And I agree that the first port of call for any user should be apt-get / up2date / etc.

However, there are many kinds of app that might not get into the distro archive, for all kinds of reasons. Maybe it's of really niche interest, maybe it's too new, maybe the distro maintainer just isn't interested in it. Or maybe it's proprietary. Some people are willing to compromise on freedom.

The last application I had trouble installing on Linux, due to glibc versioning problems, was a profiler for WebMethods Integration Server. Something like that is never going to get into the APT repository.

Re:Only useful for non-free applications (1, Informative)

Anonymous Coward | more than 4 years ago | (#29863979)

Not everyone is skilled enough to compile the source on their own, especially for packages that must be patched to run on certain architectures.

That's ok as not everyone needs to compile the source on their own. If a distro supports a certain architecture then the user only needs to install the package and he doesn't even need to know what architecture he is running. This isn't 1996.

Personally, I would think this might be useful for distro maintainers who do not want to maintain separate packages across multiple architectures, although the benefits may not outweigh the costs.

Is that even a problem? I don't remember seeing anyone complain about it. I've heard complaints about the packaging process being complicated for newbies to tackle, but not about multiple platform support.

Re:Only useful for non-free applications (3, Interesting)

Monkey-Man2000 (603495) | more than 4 years ago | (#29863737)

While this is true, a lot of free software runs on OS X as well. Compiling it there is nearly as easy as on Linux, but it's still quite useful just to download a universal binary of the full application if one is available. Smaller apps aren't a big problem, but for bigger ones it can become an unnecessary hassle. For example, I just had to compile Inkscape from scratch on Snow Leopard and spent an afternoon tracking down and compiling all the dependencies, because the universal binary doesn't currently run on 10.6. I really would have benefited from the universal binary if I weren't so bleeding edge.

Re:Only useful for non-free applications (5, Informative)

eldavojohn (898314) | more than 4 years ago | (#29863741)

Well, that's an important point but the author of this defends himself:

  • Distributions no longer need to have separate downloads for various platforms. Given enough disc space, there's no reason you couldn't have one DVD .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system, doing the right thing at boot time. You can remove all the confusing text from your website about "which installer is right for me?"
  • You no longer need to have separate /lib, /lib32, and /lib64 trees.
  • Third party packagers no longer have to publish multiple .deb/.rpm/etc for different architectures. Installers like MojoSetup [icculus.org] benefit, too.
  • A download that is largely data and not executable code, such as a large video game [icculus.org], doesn't need to use disproportionate amounts of disk space and bandwidth to supply builds for multiple architectures. Just supply one, with a slightly larger binary with the otherwise unchanged hundreds of megabytes of data.
  • You no longer need to use shell scripts and flakey logic to pick the right binary and libraries to load. Just run it, the system chooses the best one to run.
  • The ELF OSABI for your system changes someday? You can still support your legacy users.
  • Ship a single shared library that provides bindings for a scripting language and not have to worry about whether the scripting language itself is built for the same architecture as your bindings.
  • Ship web browser plugins that work out of the box with multiple platforms.
  • Ship kernel drivers for multiple processors in one file.
  • Transition to a new architecture in incremental steps.
  • Support 64-bit and 32-bit compatibility binaries in one file.
  • No more ia32 compatibility libraries! Even if your distro doesn't make a complete set of FatELF binaries available, they can still provide it for the handful of packages you need for 99% of 32-bit apps you want to run on a 64-bit system.
  • Have a CPU that can handle different byte orders? Ship one binary that satisfies all configurations!
  • Ship one file that works across Linux and FreeBSD (without a platform compatibility layer on either of them).
  • One hard drive partition can be booted on different machines with different CPU architectures, for development and experimentation. Same root file system, different kernel and CPU architecture.
  • Prepare your app on a USB stick for sneakernet, know it'll work on whatever Linux box you are likely to plug it into.

While you may be able to claim none of those points are overly compelling and target a very small part of the population, you have to recognize there's more than just satisfying non-free applications. Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).

Re:Only useful for non-free applications (2, Insightful)

recoiledsnake (879048) | more than 4 years ago | (#29863841)

Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).

He means free as in Stallman-free, not free as in cost. That's what I don't like about this 'free' as in 'freedom' thing: it's needlessly confusing, trying to change the meaning that first comes to mind for people. They could've just gone with libre or something.

Re:Only useful for non-free applications (0)

Anonymous Coward | more than 4 years ago | (#29863907)

Yeah, well, thank English for that. Real languages distinguish between the two.

Re:Only useful for non-free applications (0)

Anonymous Coward | more than 4 years ago | (#29863859)

And he even left another useful one off the list:

You can have one application binary sitting on a network drive somewhere and start it up on any client machine, regardless of architecture.

Re:Only useful for non-free applications (3, Insightful)

Digana (1018720) | more than 4 years ago | (#29863887)

They clearly meant free (as in freedom), not gratis. Gratis is such a weak feature of software that I don't think it deserves to share a meaning with free.

Re:Only useful for non-free applications (1)

MartinSchou (1360093) | more than 4 years ago | (#29864039)

Well, kick whomever decided to call it "free software" instead of "liberated software" in the nuts.

Free has meant 'no charge' for a lot longer than it has meant 'free (as in liberated) software'.

Re:Only useful for non-free applications (5, Funny)

meringuoid (568297) | more than 4 years ago | (#29864103)

Free has meant 'no charge' for a lot longer than it has meant 'free (as in liberated) software'.

Yes, that's right. That's why a 'freeman' was someone you didn't have to pay for his work, whereas a 'slave' was, er...

Re:Only useful for non-free applications (2, Informative)

dingen (958134) | more than 4 years ago | (#29863911)

Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).

I'm sorry, I should have been more clear. I mean free as in freedom. MS Visual Studio Express isn't free, it just doesn't cost any money to purchase.

Re:Only useful for non-free applications (1)

selven (1556643) | more than 4 years ago | (#29864047)

Non-open-source? That's a pretty convoluted way to say "closed-source".

Re:Only useful for non-free applications (1)

mrmeval (662166) | more than 4 years ago | (#29864079)

We already have all of this without a bloated blob that has irrelevant crap in it. Adobe set up a binary server for its product for various flavors of Linux, and if they do the code right it works. All I had to do was add them into my repository list.

If they need some means of collecting money or locking it to only my PC, that may be difficult, but free is easy. Non-free would probably need a physical or internet dongle if they needed that level of paranoia.

Re:Only useful for non-free applications (1)

dkf (304284) | more than 4 years ago | (#29863757)

The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.

On the other hand, it's a poor platform if it is too hostile to third party software (some of which will be sufficiently specialist to be effectively commercial-only, because you're really paying for detailed support). The big benefit is being able to say "this is a Linux program" as opposed to "this is a 32-bit x86 Linux program"; for most end users this is just a much easier statement to handle because they don't (and won't ever want to) understand the technical parts. (There's a smaller benefit to people serving up applications over networked filesystems to an enterprise's heterogeneous Linux systems, but that's a less common scenario.)

Marketing. It sucks, but sometimes you need it anyway.

Re:Only useful for non-free applications (1)

should_be_linear (779431) | more than 4 years ago | (#29863781)

Actually, this can be good even for Ubuntu. Think of having i7, Atom and Phenom versions of an executable alongside the classic 586 version. Also, having one repository instead of several can streamline a few things. The downside is more data transferred on software updates, unless someone creates a really smart updater that transfers only the part of the fat binary that is actually used on the client.

Re:Only useful for non-free applications (1)

Bert64 (520050) | more than 4 years ago | (#29863899)

There are already smart update systems: there is a single source package which is compiled into multiple binary packages, and your smart client only transfers the binaries which are appropriate for the architecture it uses. Because different architecture versions have different filenames, the packages themselves can already sit alongside each other inside a distribution repository... There is nothing currently stopping you from creating a multi-architecture install DVD. The reason it's not done is that it would be wasteful to download a 4GB DVD instead of a 700MB CD; lots of people have bandwidth caps these days, and downloading gigs of alien-architecture binaries would be a complete waste of your usage cap.

Re:Only useful for non-free applications (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29864027)

>>> which transfers only the part of the fat binary that is actually used on the client

Brilliant. First people try hard to pack executables for different platforms together in a single file, then (other?) people try harder to separate them back, and everyone looks busy pushing the cart of progress...

Re:Only useful for non-free applications (1)

TheRaven64 (641858) | more than 4 years ago | (#29863801)

Actually, there's one case where I can see this being useful. I was talking a while ago to some of the OpenBSD developers about the planned Dell laptops that had both ARM and x86 chips. Their idea was to have a /home partition shared between the two and let users boot OpenBSD on either. If you had fat binaries, you could share everything.

The canonical use for fat binaries with NeXT was for applications on a file server. You would install the .app bundle on a central file server and then run it from workstations with different OpenStep implementations, such as NeXT cubes or other workstations running OPENSTEP, Windows NT with OSE, or Solaris machines. These would then run the correct version of the binary when you double clicked on the .app. If you wanted to copy it locally, you would use the ditto tool, which had the option of only copying the parts relevant to your architecture.

I'm not really a fan of how Apple implement multi-arch binaries, putting them in the same file rather than in different directories in the bundle. It saves a bit of disk space from not having to duplicate the data segments, but it removes the ability for you to trivially strip out the irrelevant bits. You can't, for example, download a universal binary and then remove the irrelevant architectures without a lot of effort. With NeXT, you could do this as part of the install process (just copy it to the install location with ditto) without the copy program needing to be able to parse the binary format.
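For what it's worth, Apple does ship tools that can do the thinning, but only because they understand Mach-O; a rough sketch, with made-up bundle and file names:

# Strip a fat Mach-O executable down to one architecture.
lipo MyApp.app/Contents/MacOS/MyApp -thin x86_64 -output MyApp-x86_64
# Or copy a whole bundle while dropping the other architectures.
ditto --arch x86_64 MyApp.app /tmp/MyApp-thin.app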

Re:Only useful for non-free applications (1)

Bert64 (520050) | more than 4 years ago | (#29863945)

You could take that a step further, actually:
Boot the core OS on the ARM CPU and use that for all your day-to-day tasks, but power up the x86 on demand for heavy computing workloads. Think of the early PPC Amiga add-on cards: the core system was still 68k based, but you could use the PPC chip for certain power-hungry apps or games.

Not sure how hard it would be to engineer; at the very least you could boot the x86 system headless and have a virtual network between the two so you could access it remotely (X11, rdesktop etc), or have the capability to switch which of the two systems the screen/keyboard are connected to.

I would certainly buy such a system. I use a laptop for everything these days and the vast majority of what I do doesn't require much power... In fact, if the ARM side of things had hardware h.264 decoding I doubt I'd use the x86 side more than once or twice a week.

Re:Only useful for non-free applications (1)

Zobeid (314469) | more than 4 years ago | (#29863863)

Maybe "most apps for Linux are free and the source is available" partly due to difficulties of distributing and installing binaries?

The whole Linux distribution and installation system (such as, with apt-get) is great for setting up a server, but it's very awkward and unnatural for desktop apps. Apple is far ahead in that respect, and I see no reason why Linux shouldn't follow their lead.

I read an opinion somewhere, and it made sense to me, that Linux treats all software as system software -- as part of the OS installation, effectively. System software and desktop apps ought to be handled differently.

Re:Only useful for non-free applications (4, Insightful)

koiransuklaa (1502579) | more than 4 years ago | (#29863959)

The whole Linux distribution and installation system (such as, with apt-get) is great for setting up a server, but it's very awkward and unnatural for desktop apps. Apple is far ahead in that respect, and I see no reason why Linux shouldn't follow their lead.

You've got to be kidding? Super-easy installation and automatic security updates for all applications is 'awkward'?

If I understood you correctly, your suggestion is that desktop software should be hard to find, it should be installed from whatever website I happen to ultimately find and it shouldn't automatically get security updates. Sounds fabulous.

Don't get me wrong, I agree that package management systems have their flaws (even inherent ones) but you just aren't making a good case against them... You could start with explaining what's unnatural about "Open 'Add applications', check what you want, click Install", and then continue with explaining what's awkward about totally automatic security updates.

Re:Only useful for non-free applications (1)

MartinSchou (1360093) | more than 4 years ago | (#29863971)

If you have access to the source, you can always compile a version for your platform.

Yes ... you can.

Let's use OpenOffice.org as an example, if for no other reason than I was looking into building it optimized specifically for my computer (Windows).

Step 1) Getting the source [openoffice.org]

The source tarballs linked here contain a snapshot from SVN:
core source package
system source package
binfilter source package
l10n source package
extensions source package
testautomation source package

Okay, I probably don't need testautomation. Might be able to do without extensions. Wtf is l10n [wikipedia.org] ? Okay, I probably don't need that either. Unless it can't build without it - after all, English is technically a localisation.

Screw it - I'll get them all.

2) Compile the damn thing
Build instructions [openoffice.org]. Gotcha, I'm on Windows, I have Microsoft Visual C++ 2005 Express Compiler (well better). Instructions [openoffice.org]:

This page is moved into the Building Guide. Please make sure to add new information there and make this page a redirect if it only contains duplicate information.

Well, at least the build instructions are completely updated and the wiki-editors have made sure that all the pages are up to date. I mean, it's not like that particular page has remained unchanged for a full quarter of a year.

Anyway, onwards and upwards. Build guide for Windows [openoffice.org]. Software requirements:

Cygwin, C/C++ Compiler (VC++ 2008 Express), Windows SDK Server 2008, GDI+ Redistributable, unicows.dll from Microsoft Layer for Unicode, dbghelp.dll, instsmiw.exe and instmsia.exe, various extra dll files, Apache Ant, Mozilla binary distribution, and yet more dll files.

Now, once you have Cygwin installed, you need to configure that properly. Breaking links to executables, ensuring filemode is unix, installing yet more perl modules and possibly fixing a few issues you might come across.

So, yeah, sure, you could build it yourself, if you have the source. And it comes with build instructions for your platform. And you know what you're doing. Good luck getting Joe Sixpack to compile OpenOffice.org from scratch.

Now, I'm sure some people will argue that OpenOffice.org isn't free in the proper sense, but it's licensed [openoffice.org] under LGPL v3.

No, my example wasn't a Linux one. Who cares. The main point is that it's not just that easy to build from source, especially if you're talking about products that didn't have that platform as their first intended target.

Besides, what is so horrible about having fat binaries on Linux? Wouldn't it be kinda cool if you could simply pop in your installation DVD of RAGE [wikipedia.org] and have support for a Linux installation out of the box, instead of being required to buy the game and then download the installer from their website? Or having to hunt down a limited Linux edition somewhere?

Or are you such a frothing-at-the-mouth fanatic that the very idea of having ANY kind of proprietary software taint your hard drive makes you go into anaphylactic shock [wikipedia.org]? If you are, you need to go find your epinephrine pen, because the hardware platform you're using is very likely closed off to mere mortals by the sheer number of patents covering it.

Some of us would like to use our computers as tools. If the best tool at my disposal (economical as well as platform availability) is a proprietary one, then that's what I will use.

And again, how will Linux be hurt by this? It would make it simpler to have a single fat binary to download, than having a multitude of targeted distributions and platform choices (i386, x86 64, Power PC, ARM etc). Sure, you could compile it yourself, but you'd still need to figure out how. And if it's a niche application that isn't in your particular distribution's repository, then you're screwed.

Fat binaries could easily be a big advantage for distributions as well as developers.

No thanks! (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#29863685)

Seriously, "bundles" on OSX are the biggest pain in the ass ever. Actually, development in general on OSX is a pain in the ass. There is so much metadata crap associated with files, locations, etc.

Re:No thanks! (0)

Anonymous Coward | more than 4 years ago | (#29863997)

How are they a pain in the ass? How is it a pain in the ass to contain all the required files of an application inside a single directory, instead of spreading the files all across the system? ("installing" the application, as people say in the Windows world, and to quite some degree in the Linux world, too)

The app bundle system is nothing short of brilliant. There is no "metadata crap associated with files, locations, etc." in it. You CAN add a lot of stuff that fills a function, but you don't have to if you don't want to. You're just misinformed.

However, this article has nothing to do with OS X' App Bundle system; it's about universal executables - executables containing code for multiple architectures. App Bundles and Universal Binaries are two different things. They are not tied to each other in any way at all, like you seem to think. OS X enjoyed the bliss of self-contained app bundles long before Apple extended their executable file format to accommodate multiple architectures, and you don't need to put an executable inside an app bundle to run it.

Does Linux even need them? (5, Insightful)

GreatBunzinni (642500) | more than 4 years ago | (#29863691)

Some people may claim that Linux has some shortcomings, but the way that distributions handle support for multiple platforms, and the availability of binaries targeted at a certain platform, surely isn't one of them. Linux already runs on a long list of platforms, and software distributions already handle themselves quite nicely by building platform-specific packages, which also include all sorts of platform-specific binaries the applications will ever need. So, besides the empty "but Apple has them" rationale, exactly what drives the need for universal binaries on Linux?

Re:Does Linux even need them? (1)

buchner.johannes (1139593) | more than 4 years ago | (#29863897)

Well, the effort of packaging an application (a) for different platforms and (b) for different distributions involves a lot of duplication, and a lot of people (and time).

Re:Does Linux even need them? (1)

GreatBunzinni (642500) | more than 4 years ago | (#29864013)

I believe the people behind autopackage [autopackage.org] will disagree with you. Nonetheless, I don't see how universal binaries would get rid of those non-issues. So, besides universal binaries being a solution looking for a problem, exactly what problems do they actually solve?

Re:Does Linux even need them? (1)

Carewolf (581105) | more than 4 years ago | (#29864011)

No, Linux does not need them. Even if you want to distribute multi-arch "binaries", Linux already has a universal "binary" format: shell scripts.

Replace the installed binary with a symbolic link to a simple shell script and you magically have a multi-arch "binary":

#!/bin/bash
# Wrapper: pick the binary and libraries matching the current architecture.
ARCH=`uname -m`
export LD_LIBRARY_PATH=/usr/lib/$ARCH
# exec replaces the wrapper with the real binary; basename handles symlinked or full-path invocation.
exec "/usr/bin/$ARCH/$(basename "$0")" "$@"

Re:Does Linux even need them? (0)

Anonymous Coward | more than 4 years ago | (#29864087)

No, we do not, is my suggestion. Just because those Mac OS people have something doesn't mean we have to follow suit and dive headlong into yet more BLOAT that we can certainly do without, when distros are trying hard to fit everything onto a single DVD. Getting encumbered with yet another file system seems somewhat... hmmm, well, the word that comes to mind I won't type. And do we not have more than enough file systems on Linux as it is? We've got, what, three versions of the EXT stuff, XFS, ReiserFS, BTRFS - is that not enough? (Personally I would can EXTx; I don't use it at all, can't stand it, it caused me no end of problems.)

But we do not need another one to cause yet more interactions.

We need 1-file installs (3, Insightful)

Jim Hall (2985) | more than 4 years ago | (#29863695)

We don't need the universal binary, so much as we need the "1-file install" idea that MacOS has. This would greatly simplify installing a standalone application.

For those of you who don't know, if you download an app for Mac OS X (say, Firefox) you are presented with one icon to drag into your "Applications" folder. This is really a payload, a "Firefox.app" directory that contains the program and its [static?] libraries. But to the user, you have dragged a single "file" or "app" into your "Applications" folder - thus installing it.

It's dead simple. We need something like this in Linux.
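Roughly what such a "single icon" actually contains (simplified; exact contents vary by application):

Firefox.app/                 <- what the user sees and drags
  Contents/
    Info.plist               <- bundle metadata: executable name, version, etc.
    MacOS/firefox            <- the executable (possibly a fat/universal binary)
    Resources/               <- icons, locales and other data
    Frameworks/              <- bundled libraries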

Re:We need 1-file installs (5, Interesting)

John Hasler (414242) | more than 4 years ago | (#29863739)

> It's dead simple. We need something like this in Linux.

"aptitude install " (or the pointy-clicky equivalent) works for me.

Re:We need 1-file installs (0)

Anonymous Coward | more than 4 years ago | (#29863837)

# aptitude install firefox

# aptitude install truecrypt

# aptitude install cplex

Strange, none of the above seems to work.

Re:We need 1-file installs (1)

RegularFry (137639) | more than 4 years ago | (#29863909)

# aptitude install firefox

...seems to work for me. What distro are you on, and what (if any) error do you see?

Re:We need 1-file installs (1)

Peaker (72084) | more than 4 years ago | (#29863921)

Which ancient OS are you using?

In Ubuntu at least the first command works fine (Haven't bothered to try the others)

Re:We need 1-file installs (1, Informative)

Anonymous Coward | more than 4 years ago | (#29863843)

I think he meant a general solution to handle software that isn't in the repositories.

Re:We need 1-file installs (1)

teg (97890) | more than 4 years ago | (#29863937)

"aptitude install " (or the pointy-clicky equivalent) works for me.

No, it doesn't. "yum install firefox" and similar things like apt install something called firefox. In many cases this will be ok, but in many others it won't. E.g. when new releases of openoffice.org or firefox arrives. This often won't show up in normal repositories for a while, if it shows up at all before a later release of the distribution they are running.

Re:We need 1-file installs (3, Insightful)

jonbryce (703250) | more than 4 years ago | (#29864123)

That is great for software supplied by your distro's repository, and most distros have lots of software available in their "contrib" or equivalent repository. Firefox of course usually comes installed out of the box, so it isn't an issue.

Where this could be beneficial is for software that isn't popular enough for the distros to package. At the moment, you have to publish different packages for each distro and for each architecture, and you probably won't bother about much beyond i386 and amd64.

Re:We need 1-file installs (1)

dna_(c)(tm)(r) (618003) | more than 4 years ago | (#29863805)

apt-get install firefox

or synaptic (click) firefox (check) apply (click)

A lot easier than finding/going to the website, clicking the download icon, waiting, dragging an icon... Mozilla does this wonderfully (the agent string is used to present the download for your OS/language); most other downloads require you to look for the download you need.

Re:We need 1-file installs (0)

Anonymous Coward | more than 4 years ago | (#29864119)

A lot easier than finding/going to the website, clicking the download icon, waiting, draging an icon...

No it isn't.

Also, you're hinting by that "waiting," part that you don't have to download anything if installing your way.

Re:We need 1-file installs (3, Informative)

TheRaven64 (641858) | more than 4 years ago | (#29863817)

As I get tired of repeating, GNUstep has had this on Linux (and *BSD, and Solaris, and Windows) for many years. It supports NeXT-style bundles with different binaries (and, optionally, different resources) for different systems, so you can easily store Linux, Mac, FreeBSD, and Windows binaries in the same bundle.

Re:We need 1-file installs (1)

buchner.johannes (1139593) | more than 4 years ago | (#29863943)

Linux' way of saying "NO, NEVER NEVER DO THIS! Also, it doesn't work." to installing stuff by downloading it yourself has beauty, whereas Apple's concept is malware-prone.

The only way I can see your suggestion implemented is by providing a file format that just contains what package you would like to install (for each possible distribution).

e.g. for Firefox.app:
Gentoo: ensure_installed(www-client/mozilla-firefox)
Debian: ensure_installed(firefox)
Fedora: ensure_installed(firefox)
... etc

Declarative, that is. Then the package manager takes care of the rest.

Re:We need 1-file installs (1)

Carewolf (581105) | more than 4 years ago | (#29864041)

It's dead simple. We need something like this in Linux.

You may want to check out something called RPM and DEB files. They are single-file packages used to install applications. The rest is just a matter of user interface: create an io-slave (KIO, GVFS) that shows installed packages and executes an install when files are dropped there, and you have everything you ask for.

convenient for _closed source_ software vendors (3, Insightful)

koiransuklaa (1502579) | more than 4 years ago | (#29863715)

...it's convenient for end users and for software vendors distributing their applications.

Software vendors? This only makes life easier for _closed source_ software makers. For everyone else this is a solution looking for a problem, as package management and repositories don't really have a problem with different arches and versions.

I'm not saying this is useless (people do want to run closed source software), but the kernel, glibc and other patches had better be good and non-invasive if this guy wants them to land...

Re:convenient for _closed source_ software vendors (2, Insightful)

betterunixthanunix (980855) | more than 4 years ago | (#29863769)

"For everyone else this is a solution looking for a problem as package management and repositories don't really have a problem with different arches and versions."

Actually, having to maintain packages across several architectures can be tricky at times. Some packages need to be patched to run correctly on different architectures, and the upstream maintainers can accidentally break those patches (e.g. if they are not personally testing on a given architecture). It could even be the case that different architectures have different versions of the same packages, because the distro maintainers are busy trying to get everything to work.

I am not saying that this "universal binary" solution is the answer, but it might help streamline the build process at the distro level. It might help.

Re:convenient for _closed source_ software vendors (1)

koiransuklaa (1502579) | more than 4 years ago | (#29863861)

Oh yes, maintaining packages for several archs is real work, I'm not claiming otherwise. I just don't see how universal binaries make things easier. Coding, compiling, testing, patching -- all of those need to be done with all supported archs in mind in any case.

Re:convenient for _closed source_ software vendors (5, Insightful)

turbidostato (878842) | more than 4 years ago | (#29863869)

"Actually, having to maintain packages across several architectures can be tricky at times."

Of course yes. But let's see if the single fat binary reduces complexity.

"Some packages need to be patched to run correctly on different architectures"

And they still will need that. Or do you think that the ability to produce a single binary will magically make those incompatibilities disappear?

"the upstream maintainers can accidentally break those patches (e.g. if they are not personally testing on a given architecture)"

That can happen too with a single binary exactly the same way.

"It could even be the case that different architectures have different versions of the same packages, because the distro maintainers are busy trying to get everything to work."

Probably with a reason (like the new version needing to be patched to work on this or that platform). How do you think going with a single binary will avoid that problem? It's arguable that in this situation you would end up worse off. At least with different binaries you can take the decision of staying with foo 1.1 on arm but promoting foo 1.2 on amd64 in the meantime; with a single binary it would mean foo 1.1 for everybody.

"I am not saying that this "universal binary" solution is the answer, but it might help streamline the build process at the distro level."

Still you didn't produce any argument about *how* it could help.

Re:convenient for _closed source_ software vendors (1)

Bert64 (520050) | more than 4 years ago | (#29863981)

Another option is for distributors to have compile farms, whereby they have an example of each architecture available to them, and it's a simple case of submitting a source package and having it built automatically for each available architecture.
Also, most patch breakage is caused by changes which prevent the patch from applying; most build systems will apply arch-specific patches even if you aren't building for that arch, so breakage will be quickly noticed... A well-written patch to fix a specific arch should not have detrimental effects on other architectures, and can quite easily be submitted upstream. I used to maintain a lot of Alpha and SPARC based Linux systems and would often submit patches upstream... A lot of those Alpha patches fixed up generic 64-bit issues, which meant that when amd64 came along those packages usually compiled cleanly right away.

Apple Universal Binary is kinda of a joke. (0)

jellomizer (103300) | more than 4 years ago | (#29863717)

The binary is compiled twice. The way OS X packages its applications is that the application icon you click on isn't a file but a folder with a predefined structure. So there is a PPC and an Intel port of the executable.

Linux doesn't handle applications that way. That means you will need to alter the kernel and create new files that will no longer be upward compatible with the old version, or just do something really simple. However, the simple solution is just as tricky, as there are no standardized installers for Linux.

The file system (/usr/bin, /usr/local/bin, /usr/lib, etc.) will have subdirectories for each platform there is a compiled binary for, e.g. /usr/bin/x86, /usr/bin/amd64, /usr/bin/sparc, etc.
When the installer installs the software it puts the platform-specific binary there, and a script installed in the root directory checks the platform and runs its platform's version.

Re:Apple Universal Binary is kinda of a joke. (0)

Anonymous Coward | more than 4 years ago | (#29863765)

There is actually only one real binary inside of the application package. (i.e. My Cool App.app/Contents/MacOS contains only one binary that is launched by Mac OS X in most cases, OO.o being an exception.) Thanks to the capabilities of Mach-O, it has support for both PowerPC and Intel, as the article mentions.

Re:Apple Universal Binary is kinda of a joke. (5, Informative)

TheRaven64 (641858) | more than 4 years ago | (#29863825)

You are confusing NeXT and Apple's approaches, I think. Apple puts all of the different architectures in the same file. Your code is compiled twice, but it's only linked once. The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files. To support this on Linux would not require any changes to the kernel, only to the loader (which is a GNU project, and not actually part of Linux).

Re:Apple Universal Binary is kinda of a joke. (1, Informative)

Anonymous Coward | more than 4 years ago | (#29863827)

OS X' universal applications are ONE SINGLE application, but the executable file itself inside the app - and there is only ONE executable, not two or more - contains code for all architectures. Let me repeat: ONE executable file with all architectures, not several executables. So you're wrong. It's a quite smart solution.

That said, OS X' universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC. As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.

Re:Apple Universal Binary is kinda of a joke. (1)

mcfedr (1081629) | more than 4 years ago | (#29863915)

That said, OS X' universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC. As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.

On the Mac it seems x86 is already a dead horse. Snow Leopard still uses universal binaries, but they contain 32- and 64-bit code rather than the x86 and PowerPC code that Leopard had. Some apps now contain up to 4 versions of the code: x86, x86_64, powerpc-32, powerpc-64.

Is this really necessary? Or even advantageous? (1)

Tanuki64 (989726) | more than 4 years ago | (#29863729)

Most package managers can automatically create a binary package out of a source package. In many cases this even resolves problems with otherwise incompatible libraries. So for whom is such a fat binary advantageous? I'd assume mostly for closed source vendors. I have nothing against closed source in general, but if I pay for software I expect at least a minimum of support. Such a fat binary does not look too user-friendly to me, even if I can strip it down to my architecture, and I suppose it does not solve the problem of incompatible libraries. I will follow the responses to this article; maybe I am overlooking something and will be convinced otherwise, but at the moment I would say: superfluous.

It does not (0)

shitzu (931108) | more than 4 years ago | (#29863783)

It does not allow a "single binary file to run natively" on several platforms. All a Universal Binary is, is a bunch of precompiled binaries that each run on their particular platform, in a folder with a .app extension. Very convenient for the end user, but it takes a lot of room on the hard disk.

Re:It does not (0)

Anonymous Coward | more than 4 years ago | (#29863895)

You're misinformed, just like most people are when it comes to the universal executable file format, and the app bundles of OS X, so here goes:

The "Application Bundle" system of OS X is a directory named with the suffix .app. It's a way to make an application self-contained: no more spreading files across the system, but instead keep things neat and tidy in a single place - inside that app itself. Move the app anywhere you want, run it from anywhere you want. No more "installing software". This method has existed in OS X for ages, and is the recommended and standard way of delivering an application. It was not introduced with the concept of "Universal Binaries".

The "Universal Binary" format has nothing to do with the App Bundle. It's an extension of the executable file format of OS X, to allow code for multiple architectures to reside inside one and the same file:

UniBooky:MacOS marcus$ pwd
/Applications/Dashboard.app/Contents/MacOS

UniBooky:MacOS marcus$ ls -l
total 104
-rwxr-xr-x 1 root wheel 50608 1 Sep 16:30 Dashboard

UniBooky:MacOS marcus$ file Dashboard
Dashboard: Mach-O universal binary with 3 architectures
Dashboard (for architecture x86_64): Mach-O 64-bit executable x86_64
Dashboard (for architecture i386): Mach-O executable i386
Dashboard (for architecture ppc7400): Mach-O executable ppc

So, you are wrong, on both counts. The Universal Binary format DOES allow a single executable file to run natively on several architectures. It is NOT a bunch of MULTIPLE binaries for each architecture.

oh boy, just pack all archs on a .deb (5, Interesting)

C0vardeAn0nim0 (232451) | more than 4 years ago | (#29863811)

You know, just trick the good ol' .DEB package format into including several archs, then let dpkg decide which binaries to extract.

It's not as if on Linux the binaries are one big blob with executables, libs, images, videos, help files, etc. all distributed as a single "file" which is actually a directory with metadata that the Finder hides as being a "program file".

Being able to copy an ELF binary from one box to another doesn't guarantee it'll work, especially for GUI apps that may require other support files, so fat binaries in Linux would be simply a useless gimmick. Either distribute fat .DEBs, or just do the Right Thing(tm): distribute the source.
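dpkg itself won't pick an architecture-specific payload out of a package, so a "fat .deb" would have to fake it in its maintainer scripts; an entirely hypothetical sketch (the package name and paths are made up):

#!/bin/sh
# postinst for a hypothetical fat package shipping /usr/lib/exampleapp/<arch>/exampleapp
set -e
arch="$(dpkg --print-architecture)"   # e.g. amd64, i386, armel
ln -sf "/usr/lib/exampleapp/$arch/exampleapp" /usr/bin/exampleapp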

Not scalable (5, Insightful)

gdshaw (1015745) | more than 4 years ago | (#29863815)

To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.

This is something you might decide to ignore if you are only supporting two architectures. Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to. I really don't think this idea makes sense.

(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)

Re:Not scalable (1)

evanbd (210358) | more than 4 years ago | (#29864045)

To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.

This is something you might decide to ignore if you are only supporting two architectures. Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to. I really don't think this idea makes sense.

(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)

As mentioned by the other poster, data portions of the program are shared. In some cases, that means that data files are shared directly; multiple binaries, one data file. In other cases (libraries, etc) where the data is embedded in the binary, it simply means that the FatELF binary will compress to produce a combined file that's smaller than n * single architecture size. (The same is true for packing multiple binaries into one tarball, of course. Though in the case of several different files in one tarball, the FatELF version may have (slightly) better compression because it puts all the versions of one file right next to each other, rather than grouping all of one architecture together; that makes it more likely that the copies of the same data are within the same encoding block.)

The only thing wrong with shipping multiple binaries in one package or tarball is that, afaik, none of the major package managers support it. Sure, you could add support, but he decided this was a better approach.

In the case of something like Debian, it obviously doesn't make sense to have the package repositories use FatELF binaries, nor to include all possible architectures on one install CD / DVD. However, it might make sense to include a couple common architectures on a single iso that would work for most people, and have the obscure architectures get their own isos (or use jigdo) like they do now.
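
The adjacency argument is easy enough to try at home; a rough sketch with made-up file names (whether the duplicate data actually lands in the same encoding block depends on the sizes involved, so treat it as an experiment, not a guarantee):

$ cat app.x86_64 app.i386 | gzip | wc -c   # FatELF-style: both slices of one program sit adjacent
$ tar c x86_64/ i386/ | gzip | wc -c       # per-arch trees: the two copies end up far apart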

Not a particularly terrible idea... (1)

cfriedt (1189527) | more than 4 years ago | (#29863893)

This isn't a particularly bad idea, and community-driven distros (or maybe the community of community-driven distros) like Ubuntu would probably benefit from it quite significantly. You can even strip unnecessary binary portions out of most programs (at least with the Mach-O binary format), although that would really only affect disk usage. With today's disk capacities, the disk-usage savings are pretty negligible anyway.

In terms of the Linux kernel, this would mean a major overhaul for a large portion of the kernel and I can't see it being adopted very widely outside of the Desktop market.

Not a particularly useful one. (1, Redundant)

WindBourne (631190) | more than 4 years ago | (#29864105)

As somebody pointed out, this does not scale. In the end, if somebody really wants to target multiple systems, they can simply create a DVD with multiple builds on it. There is another solution, though. Back in the late 80s/early 90s, the Unix world was concerned about how companies could create binaries for multiple systems. The best solution, though not implemented, was to compile to a universal intermediate form and then trans-compile that to each architecture. In light of all the work that has gone into Java and now Parrot, it may make far more sense to do that. With that approach, the various compilers would all target a single form, and a new back-end would then take it to the various archs.
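
The modern version of that idea would be something along the lines of LLVM bitcode as the "universal" form, with the code generator run once per target. A hedged sketch - note that bitcode is not truly architecture-neutral, since ABI and word-size details get baked in by the front end:

$ clang -O2 -emit-llvm -c hello.c -o hello.bc                   # front end runs once, emits bitcode
$ llc -mtriple=x86_64-linux-gnu hello.bc -o hello.x86_64.s      # back end runs per target...
$ llc -mtriple=armv7-linux-gnueabihf hello.bc -o hello.armv7.s  # ...producing native assembly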

Linux is fine, but how about other platforms (0)

Anonymous Coward | more than 4 years ago | (#29863923)

I would very much like to know whether this could also support building fat binaries for different operating systems, if such support were added in the future. I would like to build libraries that work on Linux/*BSD/Solaris/OSX/Windows and distribute the binaries even though they are open source. I would also like to load them automatically from a C#/Mono application with P/Invoke.

Up to now, handling Windows and OSX libraries has been easy, because the naming conventions differ: I can include a Windows binary with a .dll extension and an OSX binary with a .dylib extension, and Mono handles everything automatically just fine. Problems come with the other unices, because they all use ELF and all use the .so extension. I have to resort to the ugly hack of having a version of the library with a different name for each platform and adding a Mono .config file to load the correct version depending on the platform.

What I would like is a single FatELF .so library that includes 32-bit and 64-bit versions for all of the platforms mentioned, with the correct one loaded by the operating system automatically. The .dylib OSX version I use is already built this way, and it's not a big deal to distribute 32-bit and 64-bit Windows versions as separate files if that ever becomes necessary; for now a 32-bit version should work well enough on Windows. All the other platforms, however, result in a million versions, each in a separate file, and that just makes me feel dirty.

I know the model I'm suggesting would result in really big files, but it would be easy to automatically strip the unneeded platforms out of them if necessary. It would still make binary distribution a lot easier.
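
For what it's worth, the per-platform .config hack mentioned above usually looks something like the sketch below; the library names are invented, and the exact set of dllmap attributes (os, cpu, wordsize) should be double-checked against the Mono documentation:

$ cat > MyApp.exe.config <<'EOF'
<configuration>
  <dllmap dll="libfoo" os="linux"   target="libfoo-linux.so" />
  <dllmap dll="libfoo" os="freebsd" target="libfoo-freebsd.so" />
  <dllmap dll="libfoo" os="solaris" target="libfoo-solaris.so" />
</configuration>
EOF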

Re:Linux is fine, but how about other platforms (0)

Anonymous Coward | more than 4 years ago | (#29863983)

nobody wants mono on linux. if that's what's behind fatelf, nobody will use that shit.

Universal Source ? (1, Interesting)

obarthelemy (160321) | more than 4 years ago | (#29864001)

I'm already amazed we have a universal x86 binary. Given the architectural differences between an Atom and a Core i7 or i9... I dare not think of all the inefficiencies this creates.

Wouldn't it be better to shoot for a Universal Source, with the install step integrating a compile+link step? I know Gentoo does this, but Gentoo is marginal within the marginality that is Linux on the desktop.

I'm amazed you can do real-time x86 emulation on non-x86 CPUs, but still can't have a Universal Source.
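
At its crudest, install-time compilation doesn't need anything exotic - a source tarball plus an install hook will do. A minimal sketch, with the script and paths made up for illustration:

#!/bin/sh
# install.sh sketch: build for whatever machine this happens to be, at install time
set -e
cd "$(dirname "$0")/src"
./configure --prefix=/usr/local
make -j"$(nproc)"
sudo make install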

Better Solutions For This Problem Exist (0)

Anonymous Coward | more than 4 years ago | (#29864029)

It seems to me this problem of "a single binary which can run on multiple architectures" could be extended to "a single binary which can run on multiple platforms." For BOTH of these goals, rolling all the possible binaries into one larger executable seems to be a bit of a messy, sloppy approach.

On the other hand, what if we compiled programs to some kind of intermediate language, and ran it on a code interpreter or virtual machine? The virtual machine could have a version for every platform and architecture. We could call it something random, like... Java. Or .NET.

Oh wait.

Yet another unnecessary archive format (1)

bit01 (644603) | more than 4 years ago | (#29864049)

Hmmm, yet another unnecessary, incompatible, redundant archive format requiring yet more tools and libraries to deal with.

What's wrong with putting whatever flavors of binary you want in a tar.gz archive, zip archive or folder, and having the system be smart enough to pick the right one (using the existing, standard file IDs and tools) when you execute the archive or folder? Yes, I realize the system needs to map pages while executing, but for archives that is trivially dealt with by extracting before executing.

The amount of fuzzy, shallow, magical thinking that happens with software is just amazing. Please, if you insist on reinventing the wheel, at least have the good sense to think about what you are doing, and stop assuming that giving something a new name necessitates building an entire new software infrastructure that will needlessly create complexity and problems for large numbers of people.

---

For the copyright bargain to be valid all DRM'ed works should lose copyright.
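
The "pick the right one using existing file IDs and tools" part really is small; a throwaway sketch along those lines, with the directory layout and the machine-type mapping invented for illustration:

#!/bin/sh
# pick-and-run sketch: ask file(1) what each shipped binary is, and exec the first
# one that matches this machine's architecture (the mapping is deliberately minimal)
case "$(uname -m)" in
    x86_64) WANT="x86-64"      ;;
    i?86)   WANT="Intel 80386" ;;
    armv*)  WANT="ARM"         ;;
    *)      echo "unhandled machine type: $(uname -m)" >&2; exit 1 ;;
esac
for candidate in ./bin/*; do
    file -b "$candidate" | grep -q "$WANT" && exec "$candidate" "$@"
done
echo "no shipped binary matches $WANT" >&2
exit 1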

Unix (OSF) tried it with ANDF (4, Interesting)

Alain Williams (2972) | more than 4 years ago | (#29864121)

Architecture Neutral Distribution Format [wikipedia.org] was tried some 20 years ago. The idea was to have a binary that could be installed on any machine. From what I can remember, it involved compiling to some intermediate form, and at install time that was compiled down to the target machine code.

It never really flew.

If someone wants to do this then something like Java would be good enough for many types of software. There will always be some things for which a binary tied to the specific target is all that would work; I think that it would be better to adopt something that works for most software rather than trying to achieve 100%.
