
Linux Applications And "glibc Hell"?

Cliff posted more than 13 years ago | from the .dll's-.so's-and-.a's-oh-my dept.

cybrthng asks: "How do you stop glibc hell on Linux? I thought I'd long since left behind the ever-familiar DLL hell on Windows, but on Linux it breaks applications so badly it's not funny. Will Linux only be able to truly survive with applications available in source form? For instance, take Oracle Applications: it is nearly impossible to install it on RedHat 7.0 or any glibc 2.2 based distro, since the applications were built against 2.1.x. When you install this software it tries to relink itself with the correct libraries and fails miserably. You can, however, force it to use glibc-compat, but that isn't a solution for a production system. Do vendors have to recompile their applications for every kernel, every library, and every distro? How can I use Linux when the core libraries don't seem to be forwards or backwards compatible across different distributions?"

Sorry! There are no comments related to the filter you selected.

It's a Redhat only problem. Use Debian instead (1)

Anonymous Coward | more than 13 years ago | (#433872)

There's already been much talk about Red Hat 7.0's incompatible glibc. But it's the only distribution that ever chose to ship with a beta of glibc that is not compatible with previous versions.
Use Debian instead; see debian.org

Use the source (1)

XPulga (1242) | more than 13 years ago | (#433880)

So we have an OS where the only safe way to distribute software is source code (in fact, if it's not source code, it's barely software [gnu.org]).

You have two options to deal with Linux here: provide gigantic statically linked binaries, which will cripple the freedom of your users and make their systems inefficient, or provide source code they can compile on their platforms (which also enables your application to run on other architectures like PPC, Alpha, Sparc...).

Yet, if you are willing to cripple user freedom, then just make binaries for a reference distribution (Distribution ZZZ version X.Y) and put a neon-orange sign on the shrink-wrap box: "Requires ZZZ Linux X.Y". You're already taking the users' freedom with respect to the software they use; taking a little more and obligating them to use a specific version of a distro is just a step further.

If you are not willing to provide the source code for the users of your software, you'd better not be developing software, the world is better without you. Really.

Re:"glibc hell" (1)

E-Lad (1262) | more than 13 years ago | (#433882)

There's a problem with releasing products in binary form that are statically linked. Consider the following scenario:

Let's say that Oracle releases its database statically linked against glibc 2.1.3. A few days/weeks/months later, a security/performance bug is found, perhaps even in a function that the statically linked Oracle binary uses... a bug severe enough to warrant everyone with 2.1.3 systems considering an upgrade ASAP. As I'm sure you're aware, under a dynamic environment, all one would need to do is upgrade the glibc .so's to 2.1.4, reboot, and be done. All programs that were affected by the bug in 2.1.3 now link against 2.1.4 when they start, and all is well.

Except for our Oracle binary. It is then up to Oracle to re-release their product, now built against glibc 2.1.4... increasing the amount of time the problem hangs around, increasing downtime for the user (instead of rebooting once, they now need to shut down Oracle again to upgrade), and basically keeping up a release hell for vendor and user.
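
A minimal illustration of the difference described above, using a hypothetical binary called myapp (output abridged):

$ ldd ./myapp
        libc.so.6 => /lib/libc.so.6 (0x40019000)
$ file ./myapp-static
./myapp-static: ELF 32-bit LSB executable, statically linked

Upgrading to glibc 2.1.4 and re-pointing the libc.so.6 symlink fixes ./myapp at its next start; ./myapp-static keeps the buggy code until the vendor rebuilds it.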

I don't see the problem. (1)

tzanger (1575) | more than 13 years ago | (#433883)

Glibc 2.0 is NOT glibc 2.1, which is NOT glibc 2.2. The glibc guys fucked up, this is true. I would have called these libraries something different (glibc2/3/4?). This is easy to fix, though: I can install all three of these libraries and the software can link up with whatever lib it wants. You can't do that (properly) under the Win* systems. I wouldn't call this DLL hell at all. A nuisance, yes. A potential security risk (having old libs with old holes), yes, but I think that can be worked out (proper perms?). The point is this is a rather simple problem, and certainly workable if need be.

My next point is that you are really bitching without reason. When an app asks for glibc2.1, bloody well give it glibc2.1. If you either can't or won't, don't bitch at the software. Are you also the kind of person who demands his car be fixed and then insists on putting Toyota parts into a Honda?
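
A sketch of the side-by-side installation described above; the paths are illustrative, and the trick is to invoke the matching dynamic loader directly:

# the system default stays where it is
/lib/libc-2.1.3.so

# an extra glibc lives in its own prefix
/opt/glibc-2.2/lib/libc-2.2.1.so

# run one program against the alternative glibc
/opt/glibc-2.2/lib/ld-2.2.1.so --library-path /opt/glibc-2.2/lib ./someapp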

Re:Glibc hell (1)

tzanger (1575) | more than 13 years ago | (#433884)

Me, if it can't come from `apt-get -b source foo' then it doesn't get installed.

Exactly why I dislike package tools of any kind.

Why should I wait for Debian (or software writer X) to come out with packages? Grab the source tarball and install it. Hell, there are even a couple of really nice "install" replacements which log what got put where, and at what time, for nice and easy removal.

Requiring everything to be an RPM in order to keep the RPM database happy is what turned me off of RedHat (well that and the idiotic system config structure). The exact same thing keeps me off of Debian and its derivatives. Package management must be as low-level as possible or it is useless.

Re:application binary interface (1)

Malc (1751) | more than 13 years ago | (#433885)

"Another thing: BeOS uses the egcs compiler, and they somehow managed to have very high binary compatibility even with _object oriented_ libraries. For example, they add dummy virtual functions to classes so that the layout of the virtual method table does not change when they add new functions. Linux developers should take a look at this. "

That sounds an awful lot like one of the original design concepts behind COM.

Re:Wrong way around (1)

Garfunkel (3569) | more than 13 years ago | (#433888)

I agree totally, this happens with Windows OSes too. When was the last time you didn't see a big application require SP3 at least on NT4?

This is true for every operating system out there. You can try to force it to install with a bunch of hacks etc. on an "unsupported" system, but really, why? What does RH7 give you on an Oracle box that RH6.2 can't, besides less stability and more incompatibility problems?

Why not have the old ones installed as well? (1)

Mawbid (3993) | more than 13 years ago | (#433889)

You can however force it to use glibc-compat, but that isn't a solution for a production system.
Why not? (I ask because I don't know, not because I disagree)
--

Gee... (1)

sreilly (5153) | more than 13 years ago | (#433892)

...whatever happened to the days when the std in stdlib actually stood for standard? I wonder what it stands for now...

Oracle problems (1)

FORTYoz (5393) | more than 13 years ago | (#433893)

I ran into this oracle install problem on my Debian Sid/ kernel 2.4 system..
Here is a description: http://www.fortyoz.org/files/misc/oracle_trouble.txt

Anyone have bright ideas? I'm thinking about installing RedHat 6 on a spare drive, installing Oracle there, then copying it over...

Sigh, all this just for the client libraries and headers.

Careful with static linking (1)

markster (6135) | more than 13 years ago | (#433894)

Remember that static linking glibc into a commercial application violates the LGPL. The LGPL only allows dynamic linking without forcing you into LGPL/GPL.

Re:I've got an idea. Let's ommit the version numbe (1)

grahamm (8844) | more than 13 years ago | (#433898)

Why is an application attempting to link against glibc-2.1.x? Why is it not linking against libc.so.6? If it did this, then as long as your glibc is not earlier than the one against which the application was built, versioning within the library should take care of everything for you.
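
For anyone who hasn't seen it, that versioning is visible with objdump; the output here is abridged and illustrative:

$ objdump -T /lib/libc.so.6 | grep ' printf$'
00043430 g    DF .text  00000039  GLIBC_2.0   printf

$ objdump -T ./myapp | grep GLIBC      # hypothetical binary
00000000      DF *UND*  00000000  GLIBC_2.0   printf

The binary records the symbol versions it needs, so any newer glibc that still exports those versions will run it.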

Re:RTFM!!!!! (1)

grahamm (8844) | more than 13 years ago | (#433899)

Upgrading from glibc 2.1 to 2.2 should not require upgrading anything else. The upgrade from glibc 2.0 to 2.1 required rebuilding a few things - such as ncurses and gcc - but not many.

Re:RTFM!!!!! (1)

grahamm (8844) | more than 13 years ago | (#433900)

Oops. I forgot one incompatibility. It is not possible to build gcc 2.95.2 using glibc 2.2 without applying a patch, as there was an incompatible header file change. However, a gcc 2.95.2 built using glibc 2.1 will still run perfectly well under glibc 2.2.

I'm glad I'm not the only one... (1)

mtnbkr (8981) | more than 13 years ago | (#433901)

I too have gotten exasperated by this issue. I'm not a developer, nor do I want to be one. I'm not a unix sysadmin by profession either (wouldn't mind learning, but it's not a priority). Windows DLL hell was nothing compared to the problems of dealing with libraries under Linux. I've actually found it easier to "upgrade" the distro to a newer version than to update libraries. Granted, once it was working, it never broke, but it was a pain to get there. After nearly 4 years of use, I switched to W2K. It's not perfect, but it lets me do the things I want to do today.

Chris

Re:Glibc hell (1)

Me2v (12239) | more than 13 years ago | (#433908)

The "problem" arises when you expect to compile something "for linux" without releasing any source. In that case you have to support every distro, every version of kernel, every version of every dependent library... but that's your choice for being binary-only.

This is not strictly true. Most applications are well-written enough to work across glibc versions. Most glibc versions are written (or rather, left unpatched enough, depending on the distro) well enough that most applications will not break when glibc is upgraded. One glaring exception is StarOffice -- if you'll recall, at one time StarOffice was using private internals of the C library. That was bad coding on SO's part -- not compatibility-breaking by glibc. Another example I can think of (please someone tell me if you've fixed this) is IBM's JDK 1.3 -- it doesn't work with glibc 2.1.94, as far as I can tell. Sun's JDK, however, still works great.

The point is, I can write an application, for profit, closed source, and distribute it in binary-only form, and have a sure expectation that it will work on a majority of glibc-based systems. That's making the very reasonable assumption that I write my code in such a way that it is NOT relying on private internals of the library, or on known bugs in the library.

Someone else made a suggestion that glibc should be a skeleton, with the internals stored elsewhere. I submit that that is already the type of model glibc follows. Even were glibc to store its body somewhere else, and present only the skeleton, someone, somewhere, would figure out how to call functions in the hidden body -- and that someone would complain very loud and long about the body changing. Even though that someone KNOWS that the body functions are off-limits and subject to change without notice.

Write and link your application and pay attention to the very few caveats revealed by the GNU glibc team, and your app will run well on many different versions of glibc. Ignore the prophecies of the GNU glibc team, and you may be assured that your app will go down in flames.

Re:GCC 3.0 "stable ABI" is irrelevant... (1)

Dionysus (12737) | more than 13 years ago | (#433910)

Then these (badly written) applications that rely on bugs should die and be forgotten

Didn't earlier versions of the Linux kernel (I think pre-2.2) rely on bugs in GCC to compile?

That's one of the reasons you can't take the latest GCC and compile a pre-2.2 kernel with it.

Re:application binary interface (1)

HeghmoH (13204) | more than 13 years ago | (#433911)

These problems with virtual methods that Be so diligently works around are totally gone if you use a sane object-oriented language, like Objective-C or, better yet, a non-C-based one. Sure, with enough time and effort you could probably mold that hammer into a passable screwdriver, but why not just start out with a screwdriver in the first place? I think Be's work with C++ should be looked at as a massive, stupid effort in using the wrong tool for the job, rather than some amazing achievement to be commended.

Re:If the shoe fits, eat it. (1)

Taurine (15678) | more than 13 years ago | (#433917)

Oracle 8i does work on Slackware. But its installation routine relies on specific paths for some system utilities that only apply to Red Hat and its derivatives. I have successfully installed 8i on Slackware by following the advice here [slackware.com].

Re:RTFM!!!!! (1)

maroberts (15852) | more than 13 years ago | (#433918)

Strange - I put 'upgrading glibc' into Google and turned up loads of information. Could the parent post be 'Flamebait'?

Most of it said upgrade your entire system to be compatible with the new library, but others also said that glibc2.2 was backwards compatible with 2.1.

From your last post I guess the language in your requests just p**ses people off!

I thank god/curse _____ everyday for existing free (1)

JohnnyDoesLinux (19195) | more than 13 years ago | (#433922)


I have had to repair drivers/applications written by people with the best of intentions (but no clue), but I thanked them profusely for their contribution when sending them a patch.

Can you help the glibc effort? Otherwise, be constructive.

I understand the desperation in your voice, I feel it every day, but I also thank god that I am not a MicroSoft slave anymore -- freedom has a high price.

Re:I've got an idea. Let's ommit the version numbe (1)

_Lint_ (30522) | more than 13 years ago | (#433931)

I've always wondered that. How does one force linking against libc.so.6 instead of glibc-2.x.x? Thanks
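
A sketch of what normally happens: you never name a versioned file when linking; the soname and symlinks do the work (myapp/hello are hypothetical, paths illustrative):

$ gcc hello.c -o hello      # links against libc.so, which records the soname libc.so.6
$ ldd ./hello
        libc.so.6 => /lib/libc.so.6 (0x40019000)
$ ls -l /lib/libc.so.6
lrwxrwxrwx ... /lib/libc.so.6 -> libc-2.1.3.so   # the symlink picks the real version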

Yes, there's a solution (1)

dhuff (42785) | more than 13 years ago | (#433938)

Do vendors have to recompile their applications for every kernel, every library and every distro? How can I use Linux when the core libraries don't seem to be forwards or backwards compatible across different distributions?

What you need is an open-source Unix that treats the kernel, libs and "user-land" stuff as a whole, not as a mishmash that varies for each "distribution." In other words, you need Free/Net/OpenBSD :) There is no such thing as a "version of Linux" per se. Linux is just a kernel which, as you've discovered, can be distributed with an annoying variety of core libs and user-land applications.

Sorry if this sounds like trolling, but I consider this issue one of the biggest thorns in the side of Linux, and one of BSD's greatest advantages...

Re:Wrong way around (1)

divec (48748) | more than 13 years ago | (#433940)

When using real software like Oracle under linux ...
Aye, you're right in that example. But I think the article means generally all binary apps, not just those as big as Oracle. Say, a non-free decompression util.

I can't believe that nobody has pointed out.... (1)

MartinG (52587) | more than 13 years ago | (#433941)

... that this is almost a non-issue.

Have a look at this [redhat.com] page (written back in July 98), which compares libc with glibc.

In particular:
"Programs compiled to use libc 5 will not run with glibc, and vice versa. However, glibc includes support for "symbol versioning". This will virtually eliminate the need for another incompatible libc switchover, like the one from libc 5 to glibc, to ever happen again."

In other words, you can have multiple versions of libc installed alongside each other and applications that use either will work. I have libc2.1 and 2.2 applications both running right now on my mandrake linux desktop.

(Am I missing something here?)

Inheritance tied to static type checking? (1)

UnknownSoldier (67820) | more than 13 years ago | (#433945)

> Also, the fact that languages like C++ and Java tie inheritance hierarchies to static type checking is an unnecessary and idiosyncratic restriction.

Can you provide some examples of what you mean?

And can you provide some examples of a language that doesn't do this?

Thx

Re:Use the source (1)

CmdData (68013) | more than 13 years ago | (#433946)

This is why Windows NT/2000 is so much superior. I can run games that were written for DOS 5.x on Windows 2000 with no problems. I can also run games in Win 95 that also run in 2000. Same with my MSSQL 7.x: it runs on NT4 and it runs on W2K. Same with Oracle. Also, I work with an IT staff of 300 people and not one of them would want to take the time to recompile anything. That would make it not ready for production environments. We don't have any need for source code. Our programming departments (and there are many of them) will not distribute the source code of their custom apps to the IT department for recompilation. Why? BECAUSE WE DON'T NEED THAT CRAP TO GET OUR JOBS DONE.

Glibc is a mess (1)

akihabara (70553) | more than 13 years ago | (#433947)

and in bad need of a cleanup. The Makefiles are utterly incomprehensible for a start.

Re:Wrong way around (1)

JWW (79176) | more than 13 years ago | (#433951)

This is too true. For some time SP5 for NT broke Lotus Notes servers. On HP-UX Oracle more or less requires you to change your kernel settings and recompile.

Well, just install the correct glibc. (1)

vedge (97920) | more than 13 years ago | (#433960)

You can simply install the correct glibc, or why not just ask Oracle for their source code? Then you could build your own Oracle system to suit your needs.

Compiling (1)

JollyTX (103289) | more than 13 years ago | (#433962)

Compiling yourself isn't without its problems, either. Often you find yourself trying to compile a source (which *should* work on your distro, gcc version and so on) and it won't work. 'Course, with a little C++ knowledge, you can often fix the problem, but you shouldn't have to be a programmer to get Linux programs to work.

Re:Source (1)

(void*) (113680) | more than 13 years ago | (#433965)

The only way to make the problem go away is precisely as you described. Write a library using opaque types, and wean the binary developers off the old glibc.

Would those guys be willing to help in the effort to maintain such a library? I wonder.

Laptop hard drives cost more (1)

yerricde (125198) | more than 13 years ago | (#433967)

In that case, just go out and buy a 20GB IDE hard drive for $99.

20 GB ATA hard drives for laptop computers cost much more than $99. A laptop is all many students at my school [rose-hulman.edu] have, as they are issued one at the start of their freshman year.


Like Tetris? Like drugs? Ever try combining them? [pineight.com]

Glibc hell (1)

StevieT (127285) | more than 13 years ago | (#433970)

Hell, because it fails to link when it's not compiled for it?

I won't call this hell.

Hell, like the DLL problems on Windows that were cited, is when it can be linked but crashes on execution due to incompatibilities.

What we have since the ELF format is protection against these crashes.

The problem that Linux Distributions are so different is another. But that's not glibc's fault.

Re:GCC 3.0 "stable ABI" is irrelevant... (1)

sxpert (139117) | more than 13 years ago | (#433974)

applications that rely on bugs in the library in order to work may break when the library is updated.

Then these (badly written) applications that rely on bugs should die and be forgotten

Static linking and the LGPL (1)

nkarlsso (144578) | more than 13 years ago | (#433977)

Someone suggested using static linking for compatibility, but is this allowed if the application is closed-source? I seem to remember that the LGPL is picky and only allows dynamic linking.

Send in the lawyers.

Re:Wrong way around (1)

bluebomber (155733) | more than 13 years ago | (#433982)

This is exactly right. A system designer looking to implement oracle on linux will research the requirements to be sure that the setup will work properly when everything is installed. I can't imagine that Oracle really cares about Joe Blow trying to run their software on the latest & greatest distro. They care more about the professional systems engineers, who are more likely to be using older, more stable/more well known stuff.

-bluebomber

It's a Redhat only problem??? (1)

LoonXTall (169249) | more than 13 years ago | (#433988)

I've had RH7.0 since October, and the only thing I have ever had trouble compiling/running is stuff from their "sources" (xxxx.src.rpm). Everything I've downloaded (blade, LAME, kernels 2.4.0-test11 and 2.4.1, e2fsprogs, modutils 2.4.2, etc.) works perfectly fine. Even the pre-built (and statically linked) gpart worked, for which my partition table is eternally thankful....

Re:Wrong way around (1)

chompz (180011) | more than 13 years ago | (#433991)

Can Oracle not distribute object files and then link them on install? This doesn't involve any source code distribution, but allows the object files to be linked against any C library.
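
Roughly what that would look like, as a sketch (Oracle's actual relink step is more involved; the names here are hypothetical):

# shipped on the media: app.o, no source
# run at install time, linking against whatever glibc the box has
gcc app.o -o /opt/app/bin/app -lm -lpthread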

so sad (1)

ickyfreak (181280) | more than 13 years ago | (#433992)

thats why im using w2k atm... but only till i can afford a ppc running darwin or os x

Re:application binary interface (1)

JamesOfTheDesert (188356) | more than 13 years ago | (#433993)

I read that the upcoming gcc 3.0 will have an "application binary interface", that is a more stable and compatible binary format. If they don't change the signature of the functions in the libraries all the time, this should solve this problem very well.

That's not allowed. MSFT has been doing this for some time with COM, and if MSFT does it, it must be evil.

Re:Source (1)

RatFink100 (189508) | more than 13 years ago | (#433994)

You speak from the point of view of someone who has one Linux box running his application. You download, compile and run.

What about if you're a large corporation? If your app is going to run on all of your many servers - do you really want to have to compile for each? If you want to run it on most of your thousands of desktops - you absolutely don't want to re-compile for each.

My point is that even if you have the source - distributing binary code is extremely useful if not essential.

Re:static libraries (1)

ColdGrits (204506) | more than 13 years ago | (#434000)

"tell your vendor to link it static "

Smart!

On the one hand everyone here blasts M$ for "bloat", and on the other hand they then tell people to use bloat to get round needless bugs in the glibc fiasco! Smart one!

--

Chalk up another win (1)

fatmantis (218867) | more than 13 years ago | (#434004)


...for the Microsoft Marketing Machine. This story reads as 'Linux is a Broken Mess, film at 11'.

All you 'me too!' posters better watch what you say. You know Microsoft reads all this. If you complain too much about Linux, they'll use it against you, on your superiors, and sell them their product, which forces you to use it. If you praise it too much, you'll embarrass yourself in front of us, your peers, and we will mock you as a bigot.

I think it would be best to take the middle path here, if I were youse...

Re:static libraries (1)

Evil Grinn (223934) | more than 13 years ago | (#434005)

Harddrive space may be cheap, but not THAT cheap. There are only so many damn copies of the same library I can fit in a 1gig partition... I've never had a problem with backwards compatibility with the MFC etc. libraries.

Then how come one of Microsoft's biggest "innovations" in Windows XP is support for multiple versions of DLLs? Sounds very similar to what has been described here.
---

Re:I've got an idea. Let's spell omit correctly! (1)

jrockway (229604) | more than 13 years ago | (#434007)

Sorry, but I had to. Mod me down, I'm at -1 anyway. :-)

Re:Source (1)

jrockway (229604) | more than 13 years ago | (#434008)

I'm kind of tired of pointing people who can't use computers (is that a mouse or a CD-ROM?) towards Linux. If you can't type
tar xIvf cool-program-1.0.tar.bz2
cd cool-program-1.0
./configure
make && sudo make install
then use Windows. I can, and I like to compile everything from source. It's easy and compatible.

Re:Oracle problems (1)

Raver X (234351) | more than 13 years ago | (#434010)

I had problems with RH too. I switched to SuSE on a friend's advice. I loaded it and downloaded the IBM JDK. Then ftp'd the newest 8.1.7 from Oracle. NO PROBLEMS! Other than a syntax error in the root.sh script. I could live with that.

Re:Oracle problems (1)

HedCheez (236101) | more than 13 years ago | (#434011)

If you are using KDE, don't. I had the same problem, and when I tried it in Gnome it worked like a champ.

Don't bother with linux (1)

rsimmons (248005) | more than 13 years ago | (#434015)

Just support the application in FreeBSD. From one FreeBSD machine to the next within one version of FreeBSD, all the libraries are going to be the same. Exactly the same. Also, if there are a selection of different boxes that you want to install on and they are different versions then use cvsup to bring them all to the same point on the source tree and you'll have a set of identical boxen.

Opaque Types (1)

small_box_of_stuff (258902) | more than 13 years ago | (#434020)

I have read more than one post here that says the problem is that you can't write code in C that doesn't encourage tight coupling between application and library.

Well, it just ain't so. Opaque types are very easy to produce in C. With these, you can link your application dynamically with its library, and replace the library whenever you want, with no effect at all (as long as the interfaces and semantics of the library are the same). You can add fields to your heart's content, with almost no penalty, if you pay attention to what you're doing and think things through.

The problem is that the glibc library wasn't written this way, and it's way too ugly. It exposes internal structures, which is a very bad thing to do, as we all know. (Think objects...)

C++ isn't the only language you can hide implementation details in, though.

Just do a google search on opaque types. It's possible, and easy, to do in C.
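
A minimal sketch of the pattern, with hypothetical names:

/* widget.h -- public header: the struct is declared but never defined,
   so callers can only hold pointers and must go through the functions */
typedef struct widget widget;
widget *widget_new(const char *name);
int widget_size(const widget *w);
void widget_free(widget *w);

/* widget.c -- private: fields can be added in a later version of the
   library without changing the ABI seen by already-compiled callers */
#include <stdlib.h>
#include <string.h>
#include "widget.h"

struct widget {
    char name[64];
    int size;
    /* new fields go here in v2; existing binaries never notice */
};

widget *widget_new(const char *name) {
    widget *w = calloc(1, sizeof *w);   /* callers never see sizeof(struct widget) */
    if (w) strncpy(w->name, name, sizeof w->name - 1);
    return w;
}
int widget_size(const widget *w) { return w->size; }
void widget_free(widget *w) { free(w); }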

Re:Source (1)

Bobo the Space Chimp (304349) | more than 13 years ago | (#434024)

> First of all, a free system is not aimed
> primarily at making binary applications work, but
> at making free applications, which come with
> source, work

Yeah, that's gonna make Windows obsolete for Joe Sixpack really soon. To make any progress, you have to have one simple install icon to double-click.

Ahh, who cares. I switched from Mac to PC three years ago at home because of the games.

I've got an idea. Let's ommit the version numbers. (1)

asciimonster (305672) | more than 13 years ago | (#434025)

I tried this one before and it worked:
When a programme searches for glibc 2.1.x (where x = some integer) and you have glibc 2.2 installed, I make a symbolic link glibc2.1.x which points to glibc 2.2.

It works, sometimes...

So why don't we all make a symbolic link named glibc which points at glibc2.178.978 (whatever version you want) and have all programmes look for the file glibc. This will give some compatibility problems, but I'm sure you guys will come up with something...
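
Spelled out, the first trick is just this (a sketch; it can only work when the newer library really is ABI-compatible with what the programme expects):

# hypothetical: the app insists on a 2.1-era file, the system has 2.2
ln -s /lib/libc-2.2.1.so /lib/libc-2.1.3.so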

Re:Word *UP* (1)

Schnedt McWhatever (313008) | more than 13 years ago | (#434029)

[aabbccddeeffgghhiijjkkll] B.L.O.A.T.!!!

application binary interface (2)

Anonymous Coward | more than 13 years ago | (#434038)

Hi,

I read that the upcoming gcc 3.0 will have an "application binary interface", that is a more stable and compatible binary format. If they don't change the signature of the functions in the libraries all the time, this should solve this problem very well.

Another thing: BeOS uses the egcs compiler, and they somehow managed to have very high binary compatibility even with _object oriented_ libraries. For example, they add dummy virtual functions to classes so that the layout of the virtual method table does not change when they add new functions. Linux developers should take a look at this.

greetings,

AC

It could be the programs fault.. (2)

Tom Rini (680) | more than 13 years ago | (#434041)

Why do I say that? The glibc team did their best to maintain compatibility from glibc 2.0, to 2.1, to 2.2. The only time this isn't true, and isn't a bug, is if the application used some glibc-internal function. This is why things like StarOffice (5.0 or 5.1, I forget) broke from 2.0 to 2.1. It's also probably why Oracle broke. But, if that's not the case, file a bug report and maybe it can be fixed for glibc 2.2.2.

Yeah, so there (2)

hawk (1151) | more than 13 years ago | (#434042)

When you use emacs that way, it only requires 125% of system resources, rather than 250%.

Bah, damned EMACS-hating fans of the One True Editor . . .

:)

hawk

oh, and (2)

hawk (1151) | more than 13 years ago | (#434043)

> Everyone misunderstands Emacs.

See, there's the problem. Its mother didn't love it, its father abandoned them, and a cruel society led it to its life of crime.

:)

hawk, replying twice to the same message, bad form or not :)

No problem; Blaming the wrong people (2)

John Goerzen (2781) | more than 13 years ago | (#434051)

Why do you jump to the conclusion that the compatibility trouble is the fault of Linux or glibc? Did it ever occur to you that perhaps the application authors were causing the problems?

There is no glibc compatibility problem. Properly-written programs have no problem whatsoever. If, however, programmers use calls that are internal to libc -- that they are told NOT to use -- then what do you expect? They have violated the rules, and it's coming around to haunt them.

I routinely run binaries from pre-2.2 versions of glibc on glibc 2.2 systems, and not only on Intel platforms. I have experienced no difficulties thus far, with either binaries shipped with Debian or otherwise. I do not use Oracle, though.

Perhaps an analogy would be useful. In the days of DOS, applications could totally bypass the operating system for basically any purpose. (Granted, because the OS stank so badly, there was often justification for it.) But when people tried to start using systems like Windows, many of these apps broke -- assuming that they had total control of the machine when they didn't, because they violated the rules before.

In this case, there is no need to violate the rules and frankly if programmers are stupid enough to go out of their way to find undocumented calls liberally sprinkled with warnings, and use them instead of their documented public interfaces, I wouldn't buy their software anyway. PostgreSQL doesn't have this problem.

Re:Compiling (2)

logicTrAp (2864) | more than 13 years ago | (#434052)

Speaking as someone who's used Intel's compiler on Windows, I definitely wouldn't expect it to be any better than gcc.

Re:static libraries (2)

Oestergaard (3005) | more than 13 years ago | (#434054)

Ok, I should have formulated that better...

Shared libraries lose their big benefits when every binary ships its own version of that shared library. On Win2K that has become the rule now, to avoid library version conflicts.

On GNU/Linux (and most other systems), you *can* ship a separate version of that library with each binary that requires it. But the libraries will be installed at the standard locations, so if more apps require the same version, they will actually share the library. This is what RedHat used in the glibc-compat package, which provides a compatibility library so that RedHat 6.2 binaries will run flawlessly on RedHat 7.0, using the proper versions of their shared libraries, while the "native" RedHat 7.0 binaries run on the newer libraries. Simple, elegant.

Our Linux apps [sysorb.com] (currently on RedHat 6.2 only) ship with a special libstdc++, which we will probably be the only ones using. However, because of RedHat's approach, we will not do this in the next release; it is perfectly reasonable to simply use the compatibility libraries on 7.0 and the native libraries on 6.2. Once we start supporting a stable 7.X platform, we will of course run on the native library versions there.

On NT and Win2K we must ship specific versions of some DLLs in order to get anything running. There is no backwards compatibility, and there is no DLL versioning. It is of course very simple to just ship your own DLLs, and it works perfectly well, I am just arguing that I fail to see the problem with the GNU/Linux (and most UNIX like systems) approach. Especially given great vendor support such as what we see from RedHat (and probably others too).

Re:There really shouldn't be a problem (2)

Oestergaard (3005) | more than 13 years ago | (#434055)

Huh?
I'll check up on the DHTML thing tomorrow when I've gotten some sleep...

The network monitoring system is a client/server system. The server is a large program (it is a distributed database and a remote monitoring system), and the client is a very small easily portable program.

Thus, the server is available for RedHat 6.2 (and therefore also 7.0, the 6.2 version will work there), Debian 2.2, FreeBSD 4.0, NT 4.0 (and therefore also Win2K).

The client is available for the same platforms, plus, RedHat 5.2 (with a Linux 2.0 kernel), and FreeBSD 3.4.

As you will have noticed, the software is in beta, but we are *very* close to a release. There are bugs left, but we will have a release out fixing the last known ones, probably around the weekend.

Should anyone out there have opinions, suggestions, demands or "other", for a commercial program soon to ship for Linux among other platforms, I would welcome such feedback.

Please use this e-mail address [mailto] and check out the website [sysorb.com] .

And please accept my apologies for going slightly off-topic on the subject here.

It's Oracle in particular (2)

Ed Avis (5917) | more than 13 years ago | (#434064)

A while ago, as an experiment, I installed the glibc from RedHat 7.0 on a 6.2 system. Almost everything worked exactly as before; the only thing that broke was Emacs, due to some problem with Berkeley DB.

So I'd say it is Oracle that's being downright stupid. I don't know what is meant by 'tries to relink itself with the new library', but it sounds pretty unpleasant and unnecessary. Hundreds of other binary packages just kept on working when glibc was upgraded.

Oracle is well known for being a pig to install and get running, and this is just another thing to add to the list.

Glibc 2.2 is backward compatible (2)

grahamm (8844) | more than 13 years ago | (#434068)

Glibc 2.2 is supposed to be backward compatible with 2.1 (and 2.0). I am running 2.2 and have not had any problems with programs built against 2.1 (and am even running some built against 2.0.7). The library which seems to cause problems is not glibc but libstdc++.

Re:static libraries (2)

uradu (10768) | more than 13 years ago | (#434070)

> Linux/x86/glibc-2.2 - 310696 (76 pages)

Damn! And I thought Delphi Hello World! programs were pigs at 270K. Of course, it remains to be seen how bloated Kylix programs will be.

Re:static libraries (2)

uradu (10768) | more than 13 years ago | (#434071)

> Yet here you are advocating EXACTLY the same approach to get round the glibc fiasco.

I believe he's not thinking of static linking as actually "shipping your own shared libraries", which in essence it really is.

Re:Source (2)

Dionysus (12737) | more than 13 years ago | (#434073)

On my system, I have 558 different packages. To compile all of them from source would take days, if not weeks (I know compiling X would take 3+ hours).

It's nice that you have the time to recompile all your programs from source. I don't.

Re:This begs the question... (2)

GregWebb (26123) | more than 13 years ago | (#434085)

Surely C suffers from this problem, though? It's possible, IIRC (under some circumstances - I'm not a C expert), to write a value of one type to a variable of another directly, and C doesn't check the bounds on an array. Both produce strange errors which aren't that easy to spot, since the crash happens when the wrong data is read (which could be at pretty much any time) rather than when it's written, plain and obvious for all...

Re:Glibc hell (2)

PigleT (28894) | more than 13 years ago | (#434087)

"The problem that Linux Distributions are so different is another. But that's not glibc's fault."

Quite so.

So what if I have a better version of glibc as provided by debian unstable than yours provided by RH7.0? That's not an issue in the slightest, you can always upgrade, I can do so easier... ;)
The "problem" arises when you expect to compile something "for linux" without releasing any source. In that case you have to support every distro, every version of kernel, every version of every dependent library... but that's your choice for being binary-only.

Me, if it can't come from `apt-get -b source foo' then it doesn't get installed.
~Tim
--
.|` Clouds cross the black moonlight,

"glibc hell" (2)

Elbereth (58257) | more than 13 years ago | (#434094)

The easiest solution, one which I have recommended to the company for which I work, is to statically link commercial products. That way, you don't need to worry at all about what libraries are installed. It's much easier for the users and the tech support people. IMHO, more Linux apps should be distributed as both static and dynamic, so that more intelligent/experienced users have a choice.

Also, it's a fairly good idea for people to have older libraries installed on their system. I don't see why you wouldn't, except if you're out of disk space. In that case, just go out and buy a 20GB IDE hard drive for $99.

Re:Source (2)

dpilot (134227) | more than 13 years ago | (#434103)

So you want me to go do my chip design work on some form of Windows, then?

I'm trying to get the chip design job moved to commodity PC hardware, using Linux. Guess what, we may be able to program our way out of a paper bag, but that's not the job at hand. I need to spend my time on my primary job, not settling glibc incompatibilities.

Windows is starting to become practical for chip design, but the opportunity for Linux is still wide open - for a while. Last time I tried building the gEDA suite, I had to give up after a while - there just wasn't time to get all the library issues resolved.

Re:Emacs example is flawed. (2)

The Pim (140414) | more than 13 years ago | (#434104)

Everyone misunderstands Emacs. Emacs is NOT bloated. It is "extensible". There is a difference.

Yes, and I tend to agree that an editor should be extensible in this way. Extension languages are cool! But I don't think that having a small, extensible core makes the system fundamentally "small".

The problem is that the extension components are inextricably tied to Emacs. They don't do "one thing well" in the traditional sense, because they can't be used by arbitrary other programs, only Emacs. Even within Emacs, their interfaces make them much less reusable than Unix utilities that communicate mostly by command name, stdin, stdout, and exit status. Thus, I think it is only fair to call the Emacs core plus the set of extensions you are using a single whole. And that's "big".

If you startup a barebones version of Emacs using "emacs -q", you will get an editor that starts up instantaneously, consumes little memory and is lightning fast.

Isn't that emacs "binary" an undumped image with all the required lisp already byte-compiled?

Re:Source (2)

ichimunki (194887) | more than 13 years ago | (#434110)

So while we're whining about a binary incompatibility issue that is well known and has been discussed to death, can we add in pleas for venduhs of commercial proprietary everpresent software (like Shockwave/Flash and PDF) to please start recompiling every piece of software they make/release for each architecture as well as each version of Red Hat? I can't begin to recall all the times I've downloaded a "linux" version of something for my PPC that didn't work. And don't even get me started on source distributions that don't compile from source because the developers don't seem worried about portability in the least.

The age-old wisdom of "pick the software you want to run, then find a system that runs that software" applies here.

Oracle on RH7: (2)

hezron (209071) | more than 13 years ago | (#434111)

I'm running it, thanks to this page [valinux.com] .

Emacs example is flawed. (2)

Magnus Pym (237274) | more than 13 years ago | (#434116)

Everyone misunderstands Emacs. Emacs is NOT bloated. It is "extensible". There is a difference.

All the extra functionality in Emacs is implemented through the use of Lisp packages which are not loaded unless the user explicitly asks them to be loaded.

If you startup a barebones version of Emacs using "emacs -q", you will get an editor that starts up instantaneously, consumes little memory and is lightning fast.

Magnus.

Re:Source (2)

small_box_of_stuff (258902) | more than 13 years ago | (#434117)

This kind of elitist stance will do absolutely no good for Linux, open source, or any of the stuff that we seem to hold so dear here on Slashdot.

The customer is not just a dumb lump that needs to get out of your way. They are what makes your software viable. Without them, you're just a lone hacker hiding in your room, writing stuff no one will ever use.

Telling them they just shouldn't use the stuff that won't work with the newest Linux glibc is not the answer. Providing ways for vendors to provide glibc-neutral software is. Having Oracle for Linux was one of the biggest wins Linux has had in a long time, making it seem almost viable to the business world.

It is possible to avoid DLL hell (or .so hell, in this case), but it takes work and forethought. Hacking together a library just won't do it. Unfortunately, the C library was written a long time ago, and there is almost no chance of getting it rewritten using better opaque types, hiding its internal implementation.

But that is the only way to make this problem go away.

Cart Before The Horse (2)

ddillman (267710) | more than 13 years ago | (#434118)

In a home-user world, sure, you can pick your OS and then install whatever software you want/can get to work.

The business world is a different matter. The rule of thumb in business is that you specify and purchase the software you need to do your job FIRST, and then install the OS you need to run that software.

If Linux is to survive in the business world, there needs to be a shift in thinking away from the 'OS first' model. On the other hand, vendors need to be absolutely specific about what target they built their applications for, so you have some idea what you need before purchasing.

Hell? (3)

Ektanoor (9949) | more than 13 years ago | (#434120)

I really don't see the point. What Hell is happening with glibc 2.2? As far as I can see, it is the first glibc that smoothly installs over older versions without cramming the whole system. And not only that.

One development/testing system has been working here since July 2000. It has suffered more than 30 glibc upgrades, ranging from a late 2.1 version, through a whole series of pre-2.2 releases, to the 2.2.1 it runs now.

During these upgrades, apps suffered some serious crashes during two or three pre-2.2 versions. Not more. Some applications, based on older 2.1 and even 2.0, have kept working until now - for example, Netscape and Quake2. Besides, I didn't note serious problems with 2.1-based apps.

Due to the purpose of this machine, I got to see how most of these apps were rebuilt up to glibc 2.2. Here, some incongruences did appear, but I cannot say they are a "Hell". Most cases are the result of a few differences in variables. This can be a serious hassle for an average user, but it does not hamper his use of a Linux box upgraded to 2.2.

Most of the packages I used came from Mandrake Cooker project.

Source (3)

redhog (15207) | more than 13 years ago | (#434121)

First of all, a free system is not aimed primarily at making binary applications work, but at making free applications, which come with source, work.

Of course binary compatibility is nice - it means you, or your software vendor, don't have to recompile everything now and then. But it comes at a high price - unexpandability. You can not add a field to a data structure, since that makes the struct bigger, and breaks compatibility. In source, adding a field is never a problem, and compatibility amounts to preserving old fields that someone might expect, and putting values that they won't dislike into those fields.

Of course, you can do ugly tricks like a hash table of the extra fields for all objects of one type, indexed by the pointer to the original object. This is, for example, supported in glib. But it's terribly ugly, and is begging for problems (like memory management problems).

I agree, however, that glibc has had some problems - it hasn't always been 100% source-compatible...

And - try to search for 100% binary compatibility between, say, Windows 95 and Windows NT 4.0. Have fun!

Thank you! (3)

avdp (22065) | more than 13 years ago | (#434122)

You are 100% correct. People that have the money to run Oracle (and we are talking about LOTS of money here) go to Oracle and find out what it will run on and go with that. Oracle says RH6.2, then RH6.2 it is. You feel you must be using RH7? Great. Put RH7 on another machine and go play there.

Re:Why not have the old ones installed as well? (3)

jguthrie (57467) | more than 13 years ago | (#434123)

I disagree strongly with the statement that using glibc-compat isn't a solution for a production system. In fact, I'd rephrase the statement to be "You can, however, use the system that was put in place for just such a purpose, but that isn't a solution for a production system."

The people who came up with glibc-compat did so because they anticipated the difficulties associated with upgrading the systems as a whole to newer libraries. There's nothing wrong with installing the older libraries and they're not something that should be avoided for "production" systems.

To be sure, it would be nice if Oracle got with the program and updated their tools to run on a more recent glibc, but until that happens, you have an alternative to sitting around, scratching your head, and saying "Gee, it doesn't work." That makes more sense than the "you should statically link all major applications" crud that others have posted.

Re:Source (3)

dsplat (73054) | more than 13 years ago | (#434124)

The customer is not just a dumb lump that needs to get out of your way. They are what makes your software viable. Without them, you're just a lone hacker hiding in your room, writing stuff no one will ever use.


Even if every free software developer wanted to limit the scope of our market to other free software developers, there is the issue that each of us has a finite amount of time. I use more software than I will ever have the time to actually work on. Even rebuilding everything against each new release of glibc and gcc that I install takes time. Being able to install binary distributions of large amounts of free software saves me time to work on the projects I'm involved with.

Bundles (3)

Matthias Wiesmann (221411) | more than 13 years ago | (#434127)

Static linking is one solution, but it seems a little bit heavy-handed. Disk space is one problem, though indeed not a major one. Another problem is that the library cannot be shared. This means that two programs using the same library will each have to load it into their own memory space, which means more memory consumption and more loading time.

Another nice solution could be something like bundles under Mac OS X/Darwin. First, the library system knows the version number of each library, and can load the one the application needs - this alone would solve the problem described here. Second, the library can be installed inside the application's framework, so you have the benefit of static linking without having to build a monolithic program.

This means that you can solve such problems easily. Need a specific library? Move it into the bundle. Can use the normal library? Move it out of the bundle. Simple. The DLL-hell problem comes, IMHO, from the rather simplistic dynamic library handling system.

To have an idea about bundles, have a look at the article in on Ars Technica [arstechnica.com] .

If the shoe fits, eat it. (3)

Gendou (234091) | more than 13 years ago | (#434128)

Linux is an open source architecture that's geared towards users building their programs from source. Duh. This works great. However, there are a few specific cases where you have to bite the bullet and use whatever distro big programs like Oracle were built for. Here's why:

Oracle was originally built for specific operating systems, and in the non-Windows arena, specific versions of UNIX. It's not at all surprising that you'd need to run a specific version of Linux from a particular vendor in order to use it. Sad but true. It really can't be helped at this point, so focus on running your organization, not resisting some obvious limitations of the current architecture. (Oracle doesn't work on Debian or Slackware either - my shop tried, and as much as we hated doing it, we were forced to run it on RedHat.)

On another issue... Some people say, "companies should statically link libraries to their programs!" Well, this is only taking a bad situation and making it worse. If this is done, binary-only releases of software will be stuck with the flaws in whatever versions of the system libs they're linked against. Then you have to wait for said company to release a new version whenever the bugs in a system library are fixed. Eventually, we'll manage to do what Windows does, and that is have readily backwards-compatible libs that actually work properly.

For now, conform and produce working results.

GCC 3.0 "stable ABI" is irrelevant... (4)

Per Abrahamsen (1397) | more than 13 years ago | (#434129)

...or rather, it is only relevant for C++ libraries, the C ABI has been stable for a long time.

So has the glibc ABI, actually, except that it is not 100% bug-compatible. I.e., applications that rely on bugs in the library in order to work may break when the library is updated.

The flaw in GLibC (4)

jd (1658) | more than 13 years ago | (#434130)

Everything the FSF produces (with one notable exception) follows the philosophy that "small is beautiful" and that N reusable components will always beat 1 system with N features.

GLibC doesn't do this. Everything's crammed in. And that is bound to make for problems.

IMHO, what GLibC needs to be is a skeleton library with a well-defined API but no innards. The innards would be in separate, self-contained libraries, elsewhere on the system.

This would mean that upgrading the innards should not impact any application, because the application just sees the exoskeleton, and the API would be well-defined. The addition of new innard components would then -extend- the API, but all existing code would be guaranteed to still work, without relinking against some compat library.

Re:static libraries (4)

LinuxGeek (6139) | more than 13 years ago | (#434131)

Read the whole message that you responded to; he explains the problem *and* the fix that is possible on Unix-type systems. I can have five apps of various vintages that each require a different set of libraries that they were linked against.

Like so:
exec /usr/$sysname-glibc20-linux/lib/ld-linux.so.2 \
  --library-path /usr/$sysname-glibc20-linux/lib \
  $netscape $defs $cl_opt "$@"
(real-world example)

With names like:
libORBit.a
libORBit.la*
libORBit.so@
libORBit.so.0@
libORBit.so.0.5.1*
libORBit.so.0.5.6*

We can keep different versions of files for use, not like the different versions of mfc42.dll that all have the same name. If another version of a library is completely backwards compatible, then a simple symbolic link gives the complete name that the run-time linker is looking for.

static libraries (4)

Dr. Tom (23206) | more than 13 years ago | (#434133)

tell your vendor to link it static (using .a libraries instead of .so).

also remind them that a "Linux" version is meaningless; they should say "Linux/x86" or "Linux/Alpha" or whatever.

I hate it when a vendor supplies a "Linux" version that won't work on my hardware, and I can't tell until *after* I've downloaded it.
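
For reference, the static build is one switch (a sketch; note the LGPL caveat raised elsewhere in this discussion):

gcc -o myapp myapp.c            # default: dynamic, needs libc.so.6 at run time
gcc -static -o myapp myapp.c    # pulls in libc.a; larger, but self-contained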

Solaris Hell (4)

ajs (35943) | more than 13 years ago | (#434134)

Getting tired of getting a copy of Oracle for Solaris 2.3, iPlanet for SunOS 4.1.3 and Veritas for Solaris 7 and finding that none of them support my Solaris 8 system. Dammit, what is Sun doing wrong!?

You'd think that you would actually have to pick an OS revision based on the least-common denominator of the supported platforms for your application needs!

Someone needs to go write a Python-based OS and then never change anything. That'll solve it.

;-) for those who did not guess....

Vendors ? (4)

blakestah (91866) | more than 13 years ago | (#434136)

For instance take Oracle Applications, it is nearly impossible to install it on RedHat 7.0 or any glibc 2.2 based distro since the applications were built against 2.1.x. When you install this software it tries to relink itself with the correct libraries and fails miserably.

If there are substantial glibc 2.1 -> 2.2 problems, it is really poor coding on the part of the vendors. The use of private (but available) glibc functions was made impossible in the changeover.

There are a few models that will work in this case. First, the older version of glibc can be included with Oracle, and set LD_LIBRARY_PATH or LD_PRELOAD to load those libraries first. Then there is no problem.

Talk to your vendor. Ultimately, if you want to pay to use their software, they have a responsibility to ensure you can use it with some ease.
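
The first model amounts to a small wrapper script, something like this sketch (paths hypothetical):

#!/bin/sh
# prefer the glibc 2.1 copy bundled with the application
LD_LIBRARY_PATH=/opt/oracle/compat/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
exec /opt/oracle/bin/oracle "$@"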

Re:The flaw in GLibC (4)

The Pim (140414) | more than 13 years ago | (#434138)

Everything the FSF produces (with one notable exception) follows the philosophy that "small is beautiful"

Oh my God would any UNIX old-timer laugh at that! First, you seem to be claiming that the only exception to this "rule" is GNU libc. Ever heard of EMACS? It's the absolute antithesis of "small is beautiful"! Second, even though GNU has reproduced most of the tools that gave UNIX its minimalist slant, in almost all cases, they extended them to be much larger and more featureful than the originals. Go install FreeBSD sometime, take a sampling of programs, and compare binary sizes and manpages. tar(1) will provide an instructive example.

I'm not saying this is bad--I mostly like the GNU environment. But compared to real UNIX, it's heavy.

Wrong way around (4)

Alatar (227876) | more than 13 years ago | (#434139)

When using real software like Oracle under Linux, you find out what the requirements are for the application you're going to run, and install a compatible setup. You don't just run out to the ftp site, burn a copy of the latest distro of Mandrake, and expect every application you install onto the new system to work flawlessly. Maybe you can run something like Apache on every machine everywhere, but Big Important things like Oracle generally have pretty specific system requirements, even under other unices.

Re:This begs the question... (4)

q000921 (235076) | more than 13 years ago | (#434140)

Well, there are several related issues, and I probably didn't explain the differences well enough in such a short space. Dynamic languages avoid this problem, but I didn't mean to imply that statically typed languages can't also avoid it.

Java, for example, couples libraries and user code much less tightly, yet uses statically type checked interfaces. Java's type checking is actually unnecessarily strict: classes are considered incompatible on dynamic linking even though only some aspects of their implementation changed. ML implementations could easily do the same thing.

Also, the fact that languages like C++ and Java tie inheritance hierarchies to static type checking is an unnecessary and idiosyncratic restriction. You can have perfectly statically type-safe systems that do not have these kinds of inheritance constraints: as long as the compiler and/or linker determines that the aspects of the interfaces you are relying on are type-compatible, it can make the two ends fit together safely, no matter what other changes or additions have happened to the classes. The "signature" extension for GNU C++ did this at compile time, and something similar could be done by the dynamic linker when libraries are loaded.

The efficiency issue is not significant. Even for a completely dynamic object system like Objective-C, a good runtime will have hardly more overhead for a dynamic method call than a regular function call. Any of the systems based on static type checking I mentioned above would do even better. And Java, of course, can actually do better than C/C++ when it comes to libraries because the Java runtime can (and does) inline library code as native code at load/execution time.

Of course, sometimes, things just have to change incompatibly. But as far as I can tell, almost none of the changes in glibc (or most other C/C++ libraries I use regularly) should affect any user code. Almost any kind of library interface would be less problematic than what exists right now.

So, I agree: statically typed languages will not go away. But "DLL hell" is avoidable whether you use statically or dynamically typed languages. In fact, as I mentioned, you could even make it go away in C/C++ by introducing a special library calling convention that has a bit more information available at load time. However, why beat a dead horse?

There really shouldn't be a problem (5)

Oestergaard (3005) | more than 13 years ago | (#434141)

I work for a company building a network monitoring system available for FreeBSD, NT (and 2K), and both RedHat and Debian Linux. We're adding platforms as people request them.

Really, RedHat 7.0 includes the libraries that shipped with 6.2, so while we only support RedHat 6.2, we still work out of the box on RedHat 7.0. Why not use the compatibility libraries? That's what they're there for - they don't perform any worse; they are just older versions of the library.

On UNIX-like systems you actually have VERSIONING on your system libraries. So you can have a perfectly running system with ten different versions of the C library, and each application will use the version it requires.
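
A small sketch of what that versioning means in practice: the soname is part of a library's identity, so different major versions are simply different objects to the dynamic linker, and a process can even ask for one by name (libfoo is made up for the example; build with gcc -o versions versions.c -ldl):

    /* versions.c - a binary normally binds to the soname recorded in it
       at link time; here we pick one major version explicitly instead. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* /usr/lib may hold libfoo.so.1 and libfoo.so.2 side by side;
           loading by soname selects exactly one of them. */
        void *handle = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        dlclose(handle);
        return 0;
    }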

You're welcome to check out our beta versions, available from sysorb.com [sysorb.com], if you don't believe me :)

Re:static libraries (5)

Oestergaard (3005) | more than 13 years ago | (#434142)

It is not usually an option for a vendor to link statically because of license restrictions.

However, a vendor is allowed to ship a specific version of glibc and libstdc++ with the software, as long as they provide some reasonable access to the source code as well.

As posted somewhere else, that is what we ended up doing for the RedHat 6.2 port of our network monitoring software [sysorb.com] . We ship a version of libstdc++ that matches our binary, it is installed without interfering with the other versions of libstdc++ that may be installed on the system, and everyone's happy.

Really, I am surprised how well this stuff works, and I cannot understand why so many people keep complaining about how horrible the system is. I think it's brilliant. And programs can still share the shared libraries; it's not like the Win2K way of doing things, where each app ships its own set of so-called "shared libraries".

This begs the question... (5)

adubey (82183) | more than 13 years ago | (#434144)

You have some interesting viewpoints, but I think you're avoiding the question rather than dealing with it.

In the programming language research community, the feeling is that dynamic languages are very good for things like scripting and prototyping, but are not as good an idea for large software systems.

The problem is twofold. First, as you mention, dynamic languages always take a performance hit. But the second reason - which you miss - might be more important: fewer errors can be detected at compile time... they would only turn up at runtime, or worse, end up as hard-to-detect bugs. Moreover, the runtime may fail someplace other than where the error occurred. For example, say you have a bunch of "polygon" objects in a linked list, and you mistakenly put a "circle" object in that list as well. Much later, you're traversing the list and expect to find a polygon, but instead you find a circle. Type error! But the real error was where you put the circle into the linked list. In a dynamically typed language, you'd have to hunt down where the circle was inserted - and the bigger the software system, the harder that becomes. In a statically typed language, however, the compiler tells you right away: "hey buddy, you're putting a circle into a polygon list. Fix that, or you don't get object code!"
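
The same point can be made even in plain C, whose pointer types are checked statically; a toy sketch with hypothetical shape types:

    /* shapes.c - a statically typed compiler reports the bad insertion
       where it happens; gcc says "assignment from incompatible pointer
       type" on the marked line. */
    struct polygon { int nsides; };
    struct circle  { double radius; };

    int main(void)
    {
        struct polygon *list[8];
        struct circle c = { 1.0 };

        list[0] = &c;   /* diagnosed at compile time, right here */
        return 0;
    }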

I don't think statically typed languages are going to go away. As is often the case in software development, the real problem is psychological rather than technological. If backwards compatibility across ".x" releases were a priority for the glibc team, perhaps we wouldn't have this problem. As it is, they are probably more driven to add new features or fix really bad old problems in ways that break compatibility... if there are people willing to work on the project who have different goals, perhaps it is time to fork libc again?

glibc is incredibly compatible (5)

The Pim (140414) | more than 13 years ago | (#434145)

The glibc (and gcc) developers are so careful about binary backwards compatibility, it's not even funny. If you feel like getting thoroughly flamed by folks much smarter than the slashdot crowd, go suggest an incompatible change on the glibc mailing list (and if you're not such a masochist, read the list archives).

However, they offer clear conditions. First, they don't guarantee upwards compatibility, that is, code compiled against glibc 2.2 working with 2.1. Second, C++ is currently off limits (which will change with gcc 3.0). Third, the guarantee applies only to shared versions of the library. Fourth, private internal interfaces are off limits.

The Oracle problem is simple: they're using static libraries (i.e., ar archives of object files). This doesn't work because symbol versioning (the magic that enables compatibility in shared libraries) isn't implemented for object files. HJ Lu has a page [valinux.com] on this issue and possible resolutions.
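
As an aside, this is roughly what symbol versioning looks like from the source side; a sketch using the GNU .symver directive to pin a reference to one particular versioned symbol (the GLIBC_2.0 tag is only an example - objdump -T /lib/libc.so.6 lists the tags that actually exist on a given system; build with gcc -fno-builtin -o pinned pinned.c):

    /* pinned.c - bind our memcpy reference to one explicit symbol
       version rather than the default the linker would pick. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.0");

    int main(void)
    {
        char dst[4];
        memcpy(dst, "abc", 4);
        return 0;
    }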

90% of other compatibility problems result from using private interfaces. This happened to Star Office a while back.
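
A sketch of the kind of code that bites: glibc exports some double-underscore internals that happen to be callable (__libc_malloc really does exist at the time of writing, but nothing promises it survives an upgrade).

    /* private.c - what NOT to do: __libc_malloc is a glibc-internal
       alias, not a public interface. */
    #include <stddef.h>

    extern void *__libc_malloc(size_t size);   /* private interface! */

    int main(void)
    {
        void *p = __libc_malloc(16);   /* works today, until glibc changes */
        return p == NULL;
    }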

Re:Source (5)

ColdGrits (204506) | more than 13 years ago | (#434146)

"And - try to search for 100% binary compatibility between say Windows 95 and Windows NT 4.0"

Bzzzzzzt! Wrong answer, thanks for playing.

Here's a clue for you - Win95 and NT4 are two TOTALLY SEPARATE PRODUCTS from separate code bases, whereas glibc is glibc - the same (ha ha!) library, just different versions.

Of course, what you OUGHT to have written was "try to search for 100% binary compatibility between, say, Windows 95 and Windows 98" or "try to search for 100% binary compatibility between, say, Windows 2000 and Windows NT 4.0" - which is extremely easy to do. But then why let trivial things like facts get in the way of a good troll, eh? :(


it's the "c" in glibc (5)

q000921 (235076) | more than 13 years ago | (#434147)

When you pass arguments or structures across the C ABI, each side has a lot of detailed, intricate knowledge of the layouts and sizes of data structures and other details. That means that even fairly minor changes, like adding another field to a structure, may mean that everything needs to be recompiled. Having that kind of detailed knowledge has efficiency advantages, but you pay a serious price in terms of software configuration problems. In the days of the PDP-11 it may have been worth making that tradeoff for most function calls; in the days of 1GHz P4s, it probably isn't, except in rare cases.
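
To make that concrete, here is a toy sketch (struct names invented): adding one field shifts every offset a compiled caller has already baked in.

    /* layout.c - the offsets and sizes printed below are compiled into
       every caller, so the two versions are binary-incompatible. */
    #include <stdio.h>
    #include <stddef.h>

    struct info_v1 {           long size; long mtime; };
    struct info_v2 { long dev; long size; long mtime; };   /* one field added */

    int main(void)
    {
        printf("v1: size at offset %zu, %zu bytes total\n",
               offsetof(struct info_v1, size), sizeof(struct info_v1));
        printf("v2: size at offset %zu, %zu bytes total\n",
               offsetof(struct info_v2, size), sizeof(struct info_v2));
        return 0;
    }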

Are there alternatives? Plenty, actually:

  • COM was an attempt to address some of these issues in a C++ framework. Unfortunately, the road to hell is paved with good intentions. Trying to retrofit this infrastructure on top of C++ leaves you with a bad kludge on top of an already cumbersome object system.
  • Dynamic languages like Python, Common Lisp, Smalltalk, etc. generally don't suffer from this problem: as long as the objects you are passing around do roughly the right thing, it usually doesn't matter what you change behind the scenes: the code will still "link" and run.
  • This problem could have been addressed easily without straying much from traditional C if people had adopted Objective-C. Objective-C is a minimalistic extension of C that adds just these kinds of "flexible" and "loosely coupled" interfaces to the C language.
  • Java is halfway there: there are a lot more kinds of changes and upgrades you can do to libraries than in C, but it isn't quite as flexible as more dynamic languages.

You could probably invent a new calling convention for C together with some changes to the dynamic linker that would address this problem for C libraries. While you are at it, you should probably also define a new ABI for C++, something that avoids "vtbl hell" using an approach to method invocation similar to Objective-C. These new calling conventions would be optional, so that you can pick one or the other, depending on whether you are calling within the same module or between different modules. Perhaps that's worth it given how much C/C++ code is out there, but it sure would be a lot of work to try and retrofit those languages. Why not use one of the dozens of languages that fix not just this problem but others as well?

A related approach is to still write a lot of stuff in C/C++ but wrap it up in a dynamic language and handle most of the library interactions through that. That was the dream of Tcl/Tk (too bad that the language itself had some limitations).

Altogether, I think the time to fix this in C/C++ has passed, and COM-like approaches didn't work out. My recommendation would be: write your code in a language that suffers less from these problems - Python and Java are my preferences - and add C/C++ code when needed for efficiency or system calls.
