
GNU C Library 2.17 Announced, Includes Support For 64-bit ARM

timothy posted about 2 years ago | from the well-armed-society dept.

GNU is Not Unix 68

hypnosec writes "A new version of the GNU C Library (glibc) has been released, and with it comes support for the upcoming 64-bit ARM architecture, a.k.a. AArch64. Version 2.17 of glibc not only adds support for 64-bit ARM, it also brings better support for cross-compilation and testing; optimized versions of memcpy, memset, and memcmp for System z10 and zEnterprise z196; and optimized versions of string functions, on top of quite a few other performance improvements, states the mailing list release announcement. Glibc 2.17 requires a minimum Linux kernel version of 2.6.16."


LOLOLOLOL (-1)

Anonymous Coward | about 2 years ago | (#42406377)

GNU is for steers and queers. And I don't see any steers around here.

Re:LOLOLOLOL (0)

tobiasly (524456) | about 2 years ago | (#42406727)

GNU is for steers and queers. And I don't see any steers around here.

Good lord, you can't even troll right. "And programmers don't have horns" might have been slightly better. Please try harder next time.

Re:LOLOLOLOL (1)

larry bagina (561269) | about 2 years ago | (#42406963)

The GNU Mascot has horns.

Re:LOLOLOLOL (0)

Anonymous Coward | about 2 years ago | (#42415461)

Yes I would agree. Given the often frustrating lack of options for interactive carnal satisfaction, most GNUbies are indeed hornier than most. Is there an app for that?

Always changing... (-1)

Anonymous Coward | about 2 years ago | (#42406459)

Will it be compliant with existing applications ?

Video [youtube.com]

Re:Always changing... (0)

Anonymous Coward | about 2 years ago | (#42406677)

Nice way to drive traffic to your totally unrelated shitty youtube video. Well played good sir!

Christ... (-1, Offtopic)

Frosty Piss (770223) | about 2 years ago | (#42406465)

I was greatly anticipating this, and when the box arrived from RMS Labs, I eagerly opened the box. Immediately, I noticed everything was covered with what looked like beard hair, but might have been pubic hair. And the fleas. Jesus, the fleas...

Never again. At least when I open a Microsoft shrink-wrap there is no hair - bald, if you will - and it only smells faintly of flatulence.

I haven't yet tried Ubuntu, but I hear they ship with a small square of South African hash.

Re:Christ... (0, Offtopic)

Anonymous Coward | about 2 years ago | (#42406487)

The hash square gets smaller every release, but at least you get two of them with an LTS release.

Re:Christ... (-1)

Anonymous Coward | about 2 years ago | (#42406507)

I really don't understand why a few people here attack anything to do with Linux. Linux is different, not a threat. The nerd version of homophobia?

Re:Christ... (-1)

Anonymous Coward | about 2 years ago | (#42406531)

I really don't understand why a few people here attack anything to do with Linux.

I really don't understand what your post has to do with the parent comment, which is clearly a PARODY...

Re:Christ... (-1)

Anonymous Coward | about 2 years ago | (#42406569)

People here don't like Linux.
They don't like MacOS either.
They don't like FreeBSD either.
So maybe the better question is "What do they like?" Is it Windows?
To me, Windows looks and smells like the ubiquitous piss in a subway.
But when I tried to argue this a while ago, people said: oh no! Windows is a very capable system, you just don't understand!

Maybe people should just get it that Linux and MacOS and Windows are here to stay. To each its own.
They are all quite mature and capable.
So maybe it's a good time to shut the fuck up and accept reality.

This comment about Linux is totally uncalled for, even though I can't stand RMS either.
When I looked into the bison sources a while ago (written mostly by RMS), I saw what a stinking mess they are!
It just begs "C++, please!".
But RMS hates C++ and said "no C++ for you! No C++ for anybody!".

Re:Christ... (4, Interesting)

mcgrew (92797) | about 2 years ago | (#42406647)

I really don't understand why a few people here attack anything to do with Linux. Linux is different, not a threat.

It is a threat. A threat to MS developers, a threat to MS shareholders, a threat to those who earn their living cleaning malware off of folks' computers.

For the rest of us, though, it's a blessing.

Re:Christ... (3, Interesting)

BitZtream (692029) | about 2 years ago | (#42407145)

Linux is no threat to anyone, its just an excuse people like to use.

Programmers, even MS ones, can move elsewhere. Linux won't ever 'take over the world' if the $7/hour programmers can't write code for it. Most companies are unwilling to spend large sums of money on programmers who have their heads up RMS's ass and think they're worth ridiculous sums of money, so it's not a threat. Programming isn't all that difficult if you have the proper info. Not everyone can deal with the requirements of C, but then that's why you see companies like Google coming up with new languages to solve that problem, so they can waste less time with programmers who are meticulous about memory management and more time getting shit done.

MS shareholders, meh, that's debatable. MS writes apps for other OSes already and makes a fortune off of them; hell, Office is the best Office for OSX, and Apple doesn't even bother making a truly competing product. Pages/Numbers are like MS Works, not Office. If the world really did jump ship to Linux, they'd move Office over as well. They aren't going to go out of business over some political ideal, that's what Linux people do. MS will follow the money, everything they do is about the money. Right now it's more profitable to spend some of that money keeping people on Windows, but if that changes they won't roll over and die.

If Linux became mainstream, it would just get malware as well. You don't need to exploit a machine to get malware onto it, you need to exploit a user. It's been easier to exploit users than to exploit Windows since AT LEAST XP SP2, probably a little before that. Windows Vista dropping admin as the default pretty much ended the easy way to get an entire machine. But why do you need an entire machine? You don't. You just need to be able to run apps, and the less intrusive you are the longer you'll go without being detected. Windows gets targeted due to market share. Android sees some of the same issues due to its popularity, but let's not acknowledge that, and pretend it's perfect, shall we? Windows users don't notice malware until they've got 900 different variants installed that do something like change their home page. You are no different just because you run Linux. You wouldn't notice intelligent malware any faster than a Windows user would, you just think you are immune. You probably aren't stupid enough to download the wrong thing most of the time, but I'd bet a month's pay that you've only not been caught in a malware scam by dumb luck and obscurity. Eventually, something will come through that is so close to looking legit that it will get you (or already has), and the only thing that stopped it was that it wasn't targeting your OS. That's only because your OS isn't popular enough to be worth their time, not because it's saving you.

For the rest of us, though, it's a blessing.

While some people prefer to live in a world of obscurity (that's not exclusive to Linux fanboys btw, you aren't any more original than goths), it's rather silly to call it a blessing. Instead of removing malware you spend your time dicking with things that were solved 20 years ago but everyone thinks they need to reinvent and do differently without ever asking why it should be done differently in the first place. It's no more a blessing than any other mental illness, you just don't recognize it.

Re:Christ... (4, Informative)

aztracker1 (702135) | about 2 years ago | (#42407519)

you spend your time dicking with things that were solved 20 years ago but everyone thinks they need to reinvent and do differently without ever asking why it should be done differently in the first place. It's no more a blessing than any other mental illness, you just don't recognize it.

Well, wooden wheels came first... eventually metal with tires... as materials improve, sometimes re-inventing the wheel becomes a good idea...

Re:Christ... (1)

BitZtream (692029) | about 2 years ago | (#42409259)

There is a difference between improving and re-inventing.

I also didn't say that things never get re-invented, just that Linux fans tend to do it without asking why. Everyone thinks they're the new Linus who's going to invent the new way of doing everything that kicks everyone else's way to the curb. There are plenty of people that do make awesome improvements, but those people are drowned out by the sheer number of people that change things without asking why things should be changed.

Smartphones are a great example. Before the iPhone you had 3 or so 'smart phone' OSes. WinMobile was a crappy copy of the desktop UI onto a phone. Blackberry had a start, but just thought the world should stick with their 90s tech forever and that no improvements were needed because ... well, I don't know why, and neither do they. The ONLY thing it really had going for it on the software side was push email, and that is an absolutely shitty implementation in every way. The keyboard on the device was great for most people however, and that helped a lot. If anyone else had had ANY sort of push email, RIM wouldn't ever have mattered. Symbian was popular, not because it was great to use, but because the big guy on the street liked to use it. It wasn't horrible compared to other offerings, but that's about it. It was popular enough that it too stagnated due to lack of real competition. They all sucked like a donkey show. They all kind of stole each other's crappy ideas but did things differently.

Then Apple comes along and puts some REAL effort into figuring out how to make a smartphone not suck for MOST people. I'm excluding FOSS fanboys from "most people" as they are a tiny portion of the population and think the entire world should share their viewpoint and priorities; if you can't accept this there's no point in reading on. Made the UI kick ass. Gave you push email without the craptastic Blackberry software. Put A REAL WEB BROWSER on the phone, not some half-assed pile of shit like Mobile IE, or Pocket IE, or whatever the hell it was that I can't remember now. I worked hard to block that part of my memory out. Not that it was perfect; iOS has grown from its own learning experiences, such as going from "no apps - use HTML!" to having an app store. Multitasking that doesn't kill your battery (most of the time anyway, GPS apps are still great for killing it thanks to shitty things like Google Latitude). Each update is almost universally an incremental but welcome improvement for the majority of users. Sure, you can bring up the antenna thing, but you can only do so if you ignore the fact that even with a shitty signal it was still better than about 95% of the phones on the market. You can bring up Maps, but most people have never gotten the wrong directions you hear about so often; what they get for the most part is thin POI info. Of course, they also now get turn-by-turn, which they didn't have before, and even the new Google Maps for iOS is pretty shitty compared to the original for iOS: while it gives you a turn-by-turn UI, it's a pretty shitty one that makes me just want the original Google Maps for iOS back. Especially since it doesn't have an actual iPad version, only iPhone. But I'm off topic and fanboying for Apple myself now, I digress.

Android tried to do the same, but really didn't do it as well. It still tries too hard to fit everyone's wants and requirements, and as such it falls short. Not saying it's not popular, but outside of geeky fanboys, few people give a shit about Android other than that it has Google's name attached to it. It has taken the place of Symbian and has allowed a lot of people to market phones with little OS development. It also allows them to ... change shit for no reason other than to be different, so they aren't like every other phone sold by everyone else. It's a race to the bottom because of this. We'll have to see if Google ever gets their shit together, but being the owner of 2 Nexus 7's, I doubt they will. It's just a way to push all their services on you, just like all the other Android mods made by manufacturers. It's not trying to be a good phone/tablet OS, it's trying to be Google's method of getting more of your data. Google is trying new things on their own, not directly copying Apple, some of which they do well with, but they are still just too geeky to get it. They caved in to all the stupid complaints about the iPhone by techies. Remember when 'flash' was a selling point for Android? See how many complaints there are about battery life due to crappy apps and multitasking? I could go on, but it'll piss off the few fanboys left reading. Some of its issues are due to patents owned by others. No rubberbanding is the one that really pisses me off, but that's not something they can do anything about (well, they can now that the patent is invalidated). Android has permissions and lets you see them before you install the app, so you can decide whether you're okay letting it out of its sandbox, but it's all or nothing. You can't use the app without granting it silly permissions even if you want nothing to do with that sort of functionality.

Then there's Windows Phone since the iPhone came out ... where they switched the UI to be a half-assed copy of iOS. They just turned off multitasking at one point to copy iOS rather than understand and solve the problem. Windows Phone has spent the last 5 years getting it wrong because they've tried to copy iOS and Android without understanding why iOS and Android do what they do. Take everything I've mentioned above and apply it to Windows Phone, only done wrong.

This is pretty much exactly how every Linux distro other than Android behaves. And that's my point. You don't need to re-invent the wheel for everything every 6 months; you CAN make incremental updates, but you actually have to understand the problems you are encountering and why people complain about them, rather than just what they are saying.

Take the multitasking example. Android doesn't give a crap and lets you multitask, and battery life is a complaint. They ignore the problem, or try to give you an app to let you decide. My wife doesn't want to decide, she wants the computer to be intelligent and not fucking waste her battery. Apple initially didn't allow multitasking for user apps, not because it couldn't, but because shitty apps made by $7/hour programmers cause battery drain by doing something like while(1) { if(isinputready){} } rather than waiting for an event and suspending. It took several iterations before they figured out a way to resolve that issue and to limit the amount of draining most users would see. Angry Birds doesn't need to run in the background, yet before iOS, apps like that on phones DID do stupid shit in the background. So they came up with some rules for things that SHOULD be allowed to happen in the background, and then set CPU limits on those so that if they got out of control they would be terminated rather than kill the battery. Microsoft just disabled multitasking altogether rather than do something like set background CPU limits. They just copied what they saw Apple doing from a public viewpoint without ever asking why Apple was doing it.
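The difference between that while(1) busy-wait and waiting on an event is easy to show with plain poll() in C (a generic sketch, not any phone API; the fd here is just a placeholder for "some input source"):

    #include <poll.h>
    #include <unistd.h>

    /* Battery-hostile pattern: poll a flag in a tight loop, so the CPU
       never gets a chance to sleep. */
    void busy_wait(int fd) {
        char buf[256];
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, 0) > 0)    /* timeout 0: returns immediately */
                read(fd, buf, sizeof buf);
        }
    }

    /* Event-driven version: block until the kernel has input, which lets
       the OS suspend the process (and power down the CPU) in between. */
    void event_wait(int fd) {
        char buf[256];
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, -1) > 0)   /* timeout -1: sleep until ready */
                read(fd, buf, sizeof buf);
        }
    }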

Mod parent up... (1)

flargleblarg (685368) | about 2 years ago | (#42409617)

It's the best comment I've read all day.

Re:Christ... (0)

Anonymous Coward | about 2 years ago | (#42410087)

Fortunately Apple weren't around at the time ....

Apple lawyer (tautology noted): Hmmm, your wheel has rounded corners. That's our IP, buddy; pay your tithes to your round-corner-owning overlords.

Re:Christ... (0)

Anonymous Coward | about 2 years ago | (#42408143)

Only average programmers worry about reinventing things. Those with a deep enough understanding to push technology forward know that there are two parts, concepts and code. Ultimately the code is disposable and it is the concepts that matter. Sometimes moving the tech forward requires that you dispose of some of the code because it doesn't fit the technology, the platform, or it has grown inefficient or insecure.

Code is cheap and easy; the understanding of concepts, however, is invaluable.

Re:Christ... (2)

BitZtream (692029) | about 2 years ago | (#42409273)

When it comes to code, 99 times out of 100, starting over is a stupid idea. You don't throw out the idea of a toilet in space; you refine the toilet to work in space based on existing tech and new requirements. A space toilet works a lot differently, but it's not a re-invention.

Linux audio is a perfect example of needless reinvention. OSX and Windows both seem to have no problem with one sound system even with apps that require very low latency and supporting audio in general, yet Linux has how many different sound daemons? And none of them does more than one task worth a damn, if they even handle the one profile they were 'designed' for.

Re:Christ... (0)

Anonymous Coward | about 2 years ago | (#42409953)

OSX and Windows both seem to have no problem with one sound system even with apps that require very low latency and supporting audio in general

That is because you are doing exactly what the developers decided you should be doing. Good luck if you ever try to livestream or whatever.
Once you want to do things like bringing in a Skype call, adding some background music, and streaming it all to Twitch or whatever, it won't even be the latency that's the issue; OSX and Windows both lack very basic functionality.

Re:Christ... (1)

mcgrew (92797) | about 2 years ago | (#42410759)

If Linux became mainstream, it would just get malware as well.

That's true, any OS can be trojaned.

Windows users don't notice malware until they've got 900 different variants installed that do something like change their home page.

Even that doesn't faze them. I don't know how many computers I've seen with Yahoo! as the home page and so many damned toolbars (most of which are redundant) that there's hardly any room on the screen for content. And that's not even considered by most to be "malware" just because a "respectable" company like Yahoo or its affiliates puts it there.

It is a tiny bit harder to trojan a Linux machine (but not much). In Windows all you have to do is click a link and click "ok" a few times. In Linux you either have to add it to your trusted repositories or install it manually.

I'd bet a month's pay that you've only not been caught in a malware scam by dumb luck and obscurity.

You would win that bet. My PC was infected by Sony's XCP vandalism when my daughter played a CD she bought in the record store she worked in.

Of my 3 computers only one runs Linux, one runs XP and one runs W7. I'm not too worried about any of them (but I do run AV on the Windows boxes).

While some people prefer to live in a world of obscurity (that's not exclusive to Linux fanboys btw, you aren't any more original than goths), it's rather silly to call it a blessing. Instead of removing malware you spend your time dicking with things that were solved 20 years ago but everyone thinks they need to reinvent and do differently without ever asking why it should be done differently in the first place.

Actually, that's Windows and was one of the main reasons I went to Linux. Microsoft changes everything around with every "upgrade" and adds few or no real features, while a Linux upgrade usually makes the machine run faster and almost always adds useful features that Windows usually lacks.

When I update Linux, it's one click and it's done. With Windows it's several clicks and a reboot, then reopening all my apps and docs. When I reboot Linux (I only have it turned on when I'm using it), all the apps and docs that were open when I shut it down are reopened. When I boot Windows I have to enter the password; the Linux box enters it for me (you can choose this on installation or change it later in its version of the Control Panel).

Microsoft likes hiding things. I had a W7 notebook a few years ago with an annoying "tap to click" "feature", and nowhere in the Control Panel's mouse configuration was there anywhere to shut it off. They'd hidden it in an unlabeled icon on the taskbar that opened more unlabeled icons, one of which shut the "feature" off about ten screens in. It took me two months to find it. When I installed Linux dual-boot on it, it was in the mouse configuration right where you would expect it to be and took less than two minutes.

Windows simply isn't user-friendly (although there are some user unfriendly Linux distros out there; I hate Gnome). Microsoft is McDonalds, Linux is Burger King -- "have it your way."

Linux is also more robust. A flaky power supply will have Windows bluescreening, crashing, freezing, and/or rebooting (often with data loss or corruption), while Linux on the same machine will just slow down a little.

It isn't viruses that make me like Linux, it's the added features, functionality, user-friendliness, and robustness that make it far preferable to Windows.

So why is only one in three of my computers running Linux? Well, the notebook is newish (I've had a couple stolen) and I've simply not gotten around to installing Linux, and the other is running XP for a single piece of software I need that has no functional equivalent in Linux: an audio editor similar to Audacity that has features I need that Audacity lacks. That and storage are all I use that PC for.

Re:Christ... (1)

Eunuchswear (210685) | about 2 years ago | (#42411189)

I'm a GNU/Linux fanboy, but this:

Linux is also more robust. A flaky power supply will have Windows bluescreening, crashing, freezing, and/or rebooting (often with data loss or corruption), while Linux on the same machine will just slow down a little.

Is purest grade-A bullshit.

Broken hardware is broken hardware - neither Linux nor Windows has any means to work around a bad power supply.

Re:Christ... (1)

mcgrew (92797) | about 2 years ago | (#42419487)

I experienced it firsthand in a dual-boot computer with XP and Mandriva five or so years ago. Win got flakier and flakier while Lin went on seemingly unfazed until Windows wouldn't boot. Two days later nothing would boot.

Re:Christ... (1)

Eunuchswear (210685) | about 2 years ago | (#42422985)

Two days later nothing would boot.

So, frankly, by purest accident Linux hid a serious hardware problem from you.

This is nothing to do with Linux being more robust than Windows. It may mean that on your hardware Linux was using less (electrical) power than Windows, or it may have been simple chance.

Re:Christ... (1)

Eunuchswear (210685) | about 2 years ago | (#42411155)

you spend your time dicking with things that were solved 20 years ago but everyone thinks they need to reinvent and do differently without ever asking why it should be done differently in the first place

A better description of Windows has never been written.

Re:Christ... (0)

Anonymous Coward | about 2 years ago | (#42410727)

You're a retard if you think Microsoft considers Linux a credible threat to their bottom line, no matter how much you wish that were the case.

Re:Christ... (-1)

Anonymous Coward | about 2 years ago | (#42408551)

Obviously the freetards that downmodded this entire thread have no sense of humour whatsoever. Or at least not one identifiable outside of their inner circle.

Re:Christ... (0)

Anonymous Coward | about 2 years ago | (#42410769)

And how.

Why the Linux kernel limitation (1)

Anonymous Coward | about 2 years ago | (#42406555)

I read the mailing list post and they do mention the minimum Linux kernel version needed to work with this C library, but it doesn't say why. I'm curious as to what new features they are using that are not in early 2.6.x kernels. For that matter I'm curious as to whether Hurd works with this C library. The Hurd project isn't mentioned anywhere in the mailing list post.

Re:Why the Linux kernel limitation (3, Informative)

kthreadd (1558445) | about 2 years ago | (#42406585)

I read the mailing list post and they do mention the minimum Linux kernel version needed to work with this C library, but it doesn't say why. I'm curious as to what new features they are using that are not in early 2.6.x kernels. For that matter I'm curious as to whether Hurd works with this C library.

Because it relies on kernel interfaces that require 2.6.16 or later.

The Hurd project isn't mentioned anywhere in the mailing list post.

It's my understanding that the Hurd project uses a customized version of glibc.

Re:Why the Linux kernel limitation (1)

debiansid (881350) | about 2 years ago | (#42409911)

The Hurd project isn't mentioned anywhere in the mailing list post.

It's my understanding that the Hurd project uses a customized version of glibc.

That should be read as: "If you're compiling on Linux, we're assuming that you have 2.6.16 or later." This is because we assume the presence of certain kernel features in some of the Linux-specific code. Hurd is not mentioned anywhere because there weren't any noteworthy changes to the Hurd-specific requirements.

Re:Why the Linux kernel limitation (4, Informative)

broken_chaos (1188549) | about 2 years ago | (#42406635)

Best guess I have is that they removed their own implementations of the *at calls added to the kernel in 2.6.16 (since almost no one is using that old a kernel anyway, and presumably it made maintenance easier), and will now always directly use the kernel versions of those. There were a few other syscalls added in 2.6.16, so it might be related to one of those instead (but the *at ones look most likely to me).

Re:Why the Linux kernel limitation (4, Informative)

CODiNE (27417) | about 2 years ago | (#42407297)

For those of you trying to figure out what he's talking about, here's a list of *at syscalls.

http://linux.die.net/man/2/openat [die.net]

Notice at the bottom of the page the group of similar file-access system calls ending in "at". So for opening files by paths relative to a directory file descriptor, you use openat() instead of open().

I learned something.
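A minimal sketch of the pattern in C (the directory and file names here are just placeholders):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Grab a file descriptor for a directory first... */
        int dirfd = open("/tmp", O_RDONLY | O_DIRECTORY);
        if (dirfd < 0) {
            perror("open /tmp");
            return 1;
        }

        /* ...then resolve "example.txt" relative to that directory fd,
           not relative to the current working directory. */
        int fd = openat(dirfd, "example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("openat");
            close(dirfd);
            return 1;
        }

        write(fd, "hello\n", 6);
        close(fd);
        close(dirfd);
        return 0;
    }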

Re:Why the Linux kernel limitation (0)

Anonymous Coward | about 2 years ago | (#42406713)

I read the mailing list post... For that matter I'm curious as to whether Hurd works with this C library. The Hurd project isn't mentioned anywhere in the mailing list post.

The mailing list post says it is used '... in GNU systems', and is careful to also say '... most systems running the Linux kernel ...', from which I infer that they're two different things. If 'GNU systems' doesn't mean the Hurd, then what would it mean?

Re:Why the Linux kernel limitation (3, Informative)

dreamchaser (49529) | about 2 years ago | (#42407669)

The mailing list post says it is used '... in GNU systems', and is careful to also say '... most systems running the Linux kernel ...', from which I infer that they're two different things. If 'GNU systems' doesn't mean the Hurd, then what would it mean?

Besides Linux there is at least one variant of BSD [wikipedia.org] to add to the GNU family mix. Note on the same page there is an OpenSolaris flavor too.

optimized better than the builtins? (0)

Anonymous Coward | about 2 years ago | (#42406663)

> ... optimized versions of memcpy, memset, and memcmp for System z10 and zEnterprise z196; and optimized versions of string functions, on top of quite a few other performance improvements,

Since gcc usually gives me its builtins – unless I remember to use -fno-builtin, -fno-builtin-memcpy, etc. – I confess I'm a bit underwhelmed. While it's good to have choices, I'm left wondering whether, rather than having both libgcc and libc bloated with duplicates, it wouldn't be better to have libc provide decent default implementations and let gcc, clang, icc, etc. provide the optimized versions as builtins.

Re:optimized better than the builtins? (3, Funny)

VortexCortex (1117377) | about 2 years ago | (#42406751)

Do you even Assembly?!

Re:optimized better than the builtins? (1)

Anonymous Coward | about 2 years ago | (#42406767)

got verb?

Re:optimized better than the builtins? (2, Insightful)

Anonymous Coward | about 2 years ago | (#42406853)

He must have accidentally it. The whole thing!

Re:optimized better than the builtins? (1)

Anonymous Coward | about 2 years ago | (#42407173)

That will mean one group of programmers is left stuck with the boring maintenance and bug fixing while the other teams do the fun bits. In a company this might fly for a short while, until people start quitting their jobs. In open source this idea will never work.
This is why most FOSS projects are so short-lived. It's easy to have a good idea. Prototyping software is also considerably easier than prototyping, say, a combustion engine. The hard part is making it proper and maintaining it that way.

Also Why (0)

Anonymous Coward | about 2 years ago | (#42410395)

M$FT never fixed their incomplete and dog-slow implementation of hashtables in MFC. They were too busy randomly changing the GUI so that they could $ell a "radically new" version of NT.

Open Source is as good as it can theoretically be because there is lots of competition: cruft gets named, shamed, and replaced.

Re:optimized better than the builtins? (4, Informative)

paskie (539112) | about 2 years ago | (#42407689)

It's not so simple for two reasons:

(i) Builtins are used only when gcc wants to inline their code, which may not always be the case. Their usage may (I think) also depend on the nature of the arguments (e.g., are they constant? is their length known? etc.). And there are other weird cases (passing a memcpy() function pointer around, or whatever). So even if you don't explicitly disable builtins, your program may still end up calling the libc functions.

(ii) This specific part of the announcement concerns ifunc-related optimizations. This means that the version appropriate for (most optimized for) the processor the program is _currently running on_ is chosen at runtime. In the x86 world, for example, an SSE4-enabled function may be chosen at runtime (during the first call to the function) over the default function if the processor supports SSE4. And you still have just a single binary of your program to handle.
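Roughly what that mechanism looks like from C, using GCC's ifunc attribute on an ELF system (a simplified sketch, not glibc's actual code; the "SSE4" variant below is just a stand-in):

    #include <stddef.h>
    #include <string.h>

    /* Two candidate implementations; imagine a hand-tuned SSE4 loop in
       the second one instead of this placeholder. */
    static void *my_memcpy_generic(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n);
    }
    static void *my_memcpy_sse4(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n);
    }

    /* The resolver runs once, when the dynamic linker processes the
       STT_GNU_IFUNC relocation, and returns the implementation to bind. */
    static void *(*resolve_my_memcpy(void))(void *, const void *, size_t) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("sse4.2"))
            return my_memcpy_sse4;
        return my_memcpy_generic;
    }

    /* Every call to my_memcpy() goes to whichever function the resolver
       picked for this CPU. */
    void *my_memcpy(void *dst, const void *src, size_t n)
        __attribute__((ifunc("resolve_my_memcpy")));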

Re:optimized better than the builtins? (2)

TheRaven64 (641858) | about 2 years ago | (#42409919)

The non-builtin ones are typically used for large memcpy() arguments. I'm not familiar with the GCC implementation, but LLVM's memcpy() optimisation knows roughly the cost of a call to memcpy() and will use a sequence of inline loads and stores to avoid the call when that's cheaper, but will just call memcpy() otherwise. The reason for this is better instruction cache usage.

A call to memcpy() requires saving any caller-save registers (this can be cheap, as the register allocator will try to put temporaries that are not used after the call in these registers so that they don't need to be saved), putting the arguments in registers (also quite cheap, as they've typically just been used and the register allocator will try to ensure that they're in the correct registers already) or on the stack on x86 (not too expensive, as push and pop are really register-register moves on modern chips), and then loading the arguments back. For memcpy(), the cost of a single L2 cache miss in the src will likely dwarf the cost of a slow implementation, so it's worth sticking prefetching in very early if you can. The memcpy() call in position-independent code will also be indirect, which means that it takes up an entry in the branch predictor's target table at each call site, and those entries are a relatively scarce resource - you have something like 512 of them on a modern processor.

The ifunc stuff is not necessarily a win, for the same reason. It turns every memcpy() call into an indirect call (or a double-indirect call in PIC), which means that it now needs to go through the branch predictor. This is the kind of thing that is a clear win in microbenchmarks but has the potential to make real code slower: each ifunc call costs either a branch miss (a pipeline stall and flush) or a branch target buffer entry (slowing down some other code). Or, in really bad cases, both, because it and another hot function end up trying to use the same BTB entry, so you always get a branch miss for both.
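A tiny illustration of the inline-vs-call split described above (exactly what a given compiler emits will of course vary by target and flags):

    #include <string.h>

    struct pair { long a, b; };

    /* Small, compile-time-constant size: compilers commonly lower this to
       a couple of loads and stores instead of an out-of-line memcpy() call. */
    void copy_small(struct pair *dst, const struct pair *src) {
        memcpy(dst, src, sizeof *dst);
    }

    /* Size unknown at compile time (and potentially large): this usually
       stays a real call into the libc implementation. */
    void copy_large(char *dst, const char *src, size_t n) {
        memcpy(dst, src, n);
    }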

Re:optimized better than the builtins? (0)

Anonymous Coward | about 2 years ago | (#42411917)

It was my understanding that ifuncs were resolved via the dynamic linker (i.e. there is an STT_GNU_IFUNC symbol type that marks ifunc entry points). So once the first fixup is done, there is no more overhead to an ifunc call than to any other DSO function call. Or am I totally wrong?

Ulrich Drepper (2)

destiney (149922) | about 2 years ago | (#42406709)

is not gonna be happy

Re:Ulrich Drepper (2, Funny)

Anonymous Coward | about 2 years ago | (#42406881)

I'm not sure the man has ever been happy. I'm pretty sure he'd grit his teeth through an orgasm, if he ever had one. I wager he would reproduce in a laboratory though.

Re:Ulrich Drepper (1)

KiloByte (825081) | about 2 years ago | (#42407031)

I wager he would reproduce in a laboratory though.

Fortunately the tech isn't ready yet. And we can hope, perhaps something bad could happen to Lennart Poettering as well?

eglibc (5, Interesting)

Microlith (54737) | about 2 years ago | (#42406777)

Looks like the eglibc fork was a good thing for the project. Rather than having one maintainer that resists and fights an architecture for personal reasons, the project is now being proactive in integrating a new ARM architecture.

Now if we could only get away from having so many Android-only bionic-targeting blobs.

FLOSS development as it should be (5, Insightful)

Alwin Henseler (640539) | about 2 years ago | (#42406841)

From the release announcement:

* Port to ARM AArch64 contributed by Linaro.

From that organization [linaro.org] 's website:

"it wants to provide the best software foundations to everyone, and to reduce non-differentiating and costly low level fragmentation."

"Linaro was established in June 2010 by founding members ARM, Freescale, IBM, Samsung, ST-Ericsson and Texas instruments (TI). Members provide engineering resources and funding. Linaro's goals are to deliver value to its members through enabling their engineering teams to focus on differentiation and product delivery, and to reduce time to market for OEM/ODMs delivering open source based products using ARM technology."

(the member list is quite a bit longer than the names above)

In other words: many commercial enterprises that are in it for the money and fight each other in the marketplace, working together to improve something that's out there in the open, free for all to use. So that what's common to all is the best it can be, and each vendor can focus its resources on what makes its product different from the rest of the pack.

Sigh - how much better life could be if that principle were applied more often...

Re:FLOSS development as it should be (2)

Kjella (173770) | about 2 years ago | (#42408611)

In other words: many commercial enterprises that are in it for the money and fight each other in the marketplace, working together to improve something that's out there in the open, free for all to use. So that what's common to all is the best it can be, and each vendor can focus its resources on what makes its product different from the rest of the pack. Sigh - how much better life could be if that principle were applied more often...

Before you go bubbling over with the nobility of it all, I'd say it's a pretty ruthless business decision based on "near" and "far" competition. All the ARM companies are competing between themselves, but they also know ARM as a whole is competing with Intel and x86, so they're allies in fighting the bigger enemy. The same way RHEL and SLES and Ubuntu LTS and whatnot are competing for server support, but they're also all fighting Microsoft in the grander OS market. It happens very often in business that your competitor is both your friend and your enemy depending on context, often shifting depending on what seems the most immediate and dangerous threat. When cooperation is necessary, open source has proven a useful means to that end, but I doubt they care much about the same things you care about. This is all still about trying to improve their bottom line.

lyrro (1)

Anonymous Coward | about 2 years ago | (#42406953)

I wish I had a 64-bit arm.

Re:lyrro (1)

jones_supa (887896) | about 2 years ago | (#42409249)

Well played!

Take THAT (0)

Anonymous Coward | about 2 years ago | (#42406965)

Intel compiler guys!

Re:Take THAT (1)

BitZtream (692029) | about 2 years ago | (#42407181)

... You do realize if they just used the intel compiler they wouldn't need their own customized assembly versions of standard (i.e. simple) library functions, right?

I fail to see the impressive part. Impressive would have been fixing GCC to optimize simple functions on its own.

memcmp (and the other functions like it) is something that gets repeated in slight variations in code A LOT, and is trivial to implement. This is almost a textbook optimizer target if I've ever seen one.
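For reference, the kind of trivial implementation being contrasted with the hand-tuned ones is roughly this (byte-at-a-time; the optimized glibc versions compare whole words or vectors instead):

    #include <stddef.h>

    /* Naive memcmp: correct, but compares a single byte per iteration. */
    int naive_memcmp(const void *a, const void *b, size_t n) {
        const unsigned char *pa = a, *pb = b;
        for (size_t i = 0; i < n; i++) {
            if (pa[i] != pb[i])
                return pa[i] < pb[i] ? -1 : 1;
        }
        return 0;
    }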

Re:Take THAT (1)

Microlith (54737) | about 2 years ago | (#42407333)

You do realize if they just used the intel compiler they wouldn't need their own customized assembly versions of standard (i.e. simple) library functions, right?

Well, that would impose a dependency on the Intel compiler to get the performance.

Impressive would have been fixing GCC to optimize simple functions on its own.

That would be nice, but these aren't the GCC developers.

Re:Take THAT (5, Interesting)

dalias (1978986) | about 2 years ago | (#42407681)

In fairness, this is complicated a lot by two issues:

1. Many of the optimizations that help things like memcpy, memcmp, etc. are utterly wrong and backwards in any loop that actually DOES SOMETHING in its body; they only end up being optimal in the degenerate case where everything but the load and store is loop overhead and the optimal result is achieved by eliminating overhead. And on some CPU models such as most modern 32-bit x86's and some 64-bit ones, the optimal result is actually attained with a special instruction that's not usable in general for more complex loops (i.e. "rep movsb"). Factors like these make optimizing these specific functions in the compiler a task that's largely separate from general-case optimization, and when the main target libc is already providing the asm anyway, there's little demand/motivation to get the compiler to do something that won't even be used.

2. Distros want a binary library that can run optimally on all variants of a particular instruction set architecture. Relying on the compiler to optimize functions for which the optimal variant is highly cpu model specific would only give a binary that runs optimally on one model, unless a lot of logic is added to the build system to rebuild the same source file with different optimizations. This is not prohibitively difficult, but it's also not easy, and it's not worthwhile when the compiler can't even deliver the desired optimization quality yet.

Overall I agree that machine-specific asm in glibc (and elsewhere) is a disease that results in machine-specific bugs and maintenance hell, but when there are people demanding the performance and pushing benchmark-centric agendas, it's hard to fight it...
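For the curious, the "rep movsb" degenerate case mentioned in point 1 looks something like this in GCC-style x86-64 inline asm (a sketch only; glibc's real selection between this and the vectorized paths is considerably more involved):

    #include <stddef.h>

    /* memcpy via the x86 string-move instruction: dst goes in RDI, src in
       RSI, the byte count in RCX, and "rep movsb" copies RCX bytes. */
    static void *memcpy_rep_movsb(void *dst, const void *src, size_t n) {
        void *ret = dst;
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
        return ret;
    }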

Re:Take THAT (1)

BitZtream (692029) | about 2 years ago | (#42409377)

1. There are plenty of people writing things VERY close to the exact same thing, which in many cases the compiler could reduce to the same code if it knew how to recognize it.

2. I fail to see why the compiler can pull in someone else's asm and work perfectly but can't be made to figure it out on its own. It already has to do exactly that.

I understand why they did it, but in regard to the post I was responding to, this isn't something to go around bragging about when you're basically saying "we finally hand-crafted something that works as well as what you do automatically".

Re:Take THAT (0)

Anonymous Coward | about 2 years ago | (#42409997)

Overall I agree that machine-specific asm in glibc (and elsewhere) is a disease that results in machine-specific bugs and maintenance hell, but when there are people demanding the performance and pushing benchmark-centric agendas, it's hard to fight it...

It's not so much the demanding but what it gives back.
An application like Firefox is used by 100 million people. If those user load 10 web pages each day then an optimization of 1 ms would save a total of 1000000 seconds every day.
I have no idea of how many times every glibc function is called every day but I imagine that if you can save one CPU cycle or two somewhere that the saved time everywhere will add up to something significant.

Re:Take THAT (1)

debiansid (881350) | about 2 years ago | (#42409937)

I fail to see the impressive part. Impressive would have been fixing GCC to optimize simple functions on its own.

memcmp (and the other functions like it) is something that gets repeated in slight variations in code A LOT, and is trivial to implement. This is almost a textbook optimizer target if I've ever seen one.

The optimizations talked about here are specific to processor models (e.g. AVX and SSE for Intel), not just to architectures. Compiling them into programs is a bad idea^^, so improving gcc is not an option. The glibc mem* functions have implementations for each of those processor features, and the right one gets selected via the STT_GNU_IFUNC [kernel.org] mechanism based on the features the current processor has.

^^ If you don't know why it's a bad idea: you don't want to compile your program separately for every machine you want to run it on. You want to compile for the general architecture so that the binary can be distributed widely.

aarch64 is a terrible name for arm64 (0)

Anonymous Coward | about 2 years ago | (#42407005)

Which idiot picked aarch64 as the architecture name for the 64-bit variant of ARM? arm64 might have made sense, although with all the chaos around what armN means for various N, who knows.

Re:aarch64 is a terrible name for arm64 (0)

Anonymous Coward | about 2 years ago | (#42407289)

Linus agrees with you: https://lkml.org/lkml/2012/7/15/133

    - aarch64 is just another druggie name that the ARM people came up
    with after drinking too much of the spiked water in cambridge.

    It is apparently ARM deciding to emulate every single bad idea that
    Intel ever cam up with, and is the ARM version of the "ia64" name
    (which is Intel's equally confusing name for "Intel architecture 64",
    which a few years ago meant Itanium, but now in more modern times when
    Intel tries to forget Itanium ever existed it means x86-64).

    "ia64" was a horrible name, aarch64 is the exact same mistake. It sucks.

Re:aarch64 is a terrible name for arm64 (2)

petermgreen (876956) | about 2 years ago | (#42407833)

although with all the chaos around what armN means for various N, who knows.

mmm, AIUI aarch64 has NO asm-level compatibility with 32-bit ARM architectures, so having build systems treat it as a completely unknown architecture (and hence use the generic C implementations) rather than as a variant of arm (and hence try to use ARM assembler routines and fail to build because of it) is probably a good thing.

Re:aarch64 is a terrible name for arm64 (1)

TheRaven64 (641858) | about 2 years ago | (#42409931)

AArch64 is the 64-bit instruction set, AArch32 is the 32-bit instruction set. The arm64 designator is usually reserved for CPUs that support AArch64, but currently all of those also support AArch32, so using arm64 for anything other than a kernel or hypervisor is confusing. The 'chaos' around armN only exists if you fail to understand that ARMN and ARMvN are different things: ARMv7 is a description of an architecture, while ARM11 is an ARM processor design. ARMvN is always a superset of ARMvN-1 in terms of the unprivileged instruction set.

Bleh (3, Interesting)

alistairk (2803493) | about 2 years ago | (#42407043)

Oh man, so I went multiarch on Debian for nothing.

Please ignore this post (4, Insightful)

Curupira (1899458) | about 2 years ago | (#42407369)

I'm trying to undo an unfair mod I applied to an insightful post. Slashdot should let us (at least for one minute) undo a mistaken mod :(

Re:Please ignore this post (2)

jones_supa (887896) | about 2 years ago | (#42409271)

Exactly. An "Undo" link could appear next to the moderation pull-down list after you moderate, valid for, say, 5 minutes.

strings eh (0)

Anonymous Coward | about 2 years ago | (#42410061)

You know, one day we're gonna be done optimizing those string/byte manipulation functions ... I swear I see them being optimized in every release of such libs :)

strlcpy? (1)

spitzak (4019) | about 2 years ago | (#42422369)

Did they add strlcpy and strlcat?
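For anyone who hasn't run into them, the BSD strlcpy() has roughly these semantics (a sketch of the classic OpenBSD behaviour, not glibc code):

    #include <stddef.h>
    #include <string.h>

    /* Copy at most size-1 bytes, always NUL-terminate when size > 0, and
       return strlen(src) so the caller can detect truncation (a return
       value >= size means the copy was truncated). */
    size_t my_strlcpy(char *dst, const char *src, size_t size) {
        size_t srclen = strlen(src);
        if (size > 0) {
            size_t n = srclen >= size ? size - 1 : srclen;
            memcpy(dst, src, n);
            dst[n] = '\0';
        }
        return srclen;
    }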
