
A Review of GCC 4.0

Hemos posted more than 9 years ago | from the only-time-will-tell dept.

Programming 429

ChaoticCoyote writes " I've just posted a short review of GCC 4.0, which compares it against GCC 3.4.3 on Opteron and Pentium 4 systems, using LAME, POV-Ray, the Linux kernel, and SciMark2 as benchmarks. My conclusion: Is GCC 4.0 better than its predecessors? In terms of raw numbers, the answer is a definite "no". I've tried GCC 4.0 on other programs, with similar results to the tests above, and I won't be recompiling my Gentoo systems with GCC 4.0 in the near future. The GCC 3.4 series still has life in it, and the GCC folk have committed to maintaining it. A 3.4.4 update is pending as I write this. That said, no one should expect a "point-oh-point-oh" release to deliver the full potential of a product, particularly when it comes to a software system with the complexity of GCC. Version 4.0.0 is laying a foundation for the future, and should be seen as a technological step forward with new internal architectures and the addition of Fortran 95. If you compile a great deal of C++, you'll want to investigate GCC 4.0. Keep an eye on 4.0. Like a baby, we won't really appreciate its value until it's matured a bit. "


I'll tell you what the problem is... (5, Funny)

Anonymous Coward | more than 9 years ago | (#12408863)

Well clearly the problem is that you compiled GCC 4.0.0 with GCC 3.4.3! What I did was go through the GCC 4.0 source code in two separate windows, fire up hexedit in another, and go through line by line "compiling" GCC 4.0 with the GCC 4.0 source, in my head. I wouldn't recommend doing this with -funroll-loops, my hands started cramping up.

Or you could wait to compile 4.0 until the 3.0 branch makes it to 3.9.9, then it will be close enough anyway. YMMV, people say I give out bad advice, go figure...

GCD~? (1)

essreenim (647659) | more than 9 years ago | (#12408920)

No, the problem is we need D jk. But really, is anybody taking D seriously? here [digitalmars.com]

Re:GCD~? (1)

pclminion (145572) | more than 9 years ago | (#12408951)

No, the problem is we need D jk.

Okay, I must be a total fucking geek, because at first I thought you had abbreviated "Dijkstra" as "Dijk" and left the 'i' out for some reason. Then I wondered why the hell Dijkstra would waste his time compiling code by hand...

Re:GCD~? (1)

essreenim (647659) | more than 9 years ago | (#12408985)

Sorry, D jk means D j.ust k.idding. Actually, I'm wrong anyway, as it is GCC (GNU Compiler Collection or something), so it would still be GCC. Anyway, boring story really...

Re:I'll tell you what the problem is... (3, Interesting)

Inkieminstrel (812132) | more than 9 years ago | (#12409113)

Gee, I would have just compiled 4.0.0 with 3.4.3, then compiled 4.0.0 again with 4.0.0.

Re:I'll tell you what the problem is... (4, Funny)

DJCacophony (832334) | more than 9 years ago | (#12409221)

but then the gcc 4 you compiled with gcc 3.4.3 would produce tainted compilations, and the second 4.0.0 compilation would lean towards 3.4.3 because it was compiled with a compiler that was compiled by 3.4.3. You would have to then take the second compilation of 4.0.0 and compile 4.0.0 with it, at which point the similarity to 3.4.3 would make it somewhere along the lines of 3.7.0. If you continue to compile it, while it will never reach 4.0.0, it will approach closely enough that for all intents and purposes, it will be 4.0.0. The formula is as follows:

V3 - V1 ~ 2(V3 - V2)

where V1 was used to compile V2, and V2 was used to compile V3.
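Taken at face value, the formula says each self-recompilation halves the remaining gap to a "true" 4.0.0. A tongue-in-cheek Python sketch of that fixed-point iteration (all names invented):

```python
def rebuild(version, target=4.0):
    """One self-recompilation: per the joke formula
    V3 - V1 ~ 2(V3 - V2), each pass halves the gap to the target."""
    return version + (target - version) / 2

v = 3.4  # "effective version" of the stage built by gcc 3.4.3
for _ in range(20):
    v = rebuild(v)

print(round(v, 6))  # just under 4.0: never reaches it, close enough
```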

Re:I'll tell you what the problem is... (2, Interesting)

confused one (671304) | more than 9 years ago | (#12409324)

I believe the Linux from Scratch (LFS) folks have found you have to repeat this three (3) times to have, what is effectively, a clean 4.0.0 compile.

Re:I'll tell you what the problem is... (5, Informative)

Anonymous Coward | more than 9 years ago | (#12409230)

This was meant as a joke, but for those who took this too seriously: if you have ever tried building GCC yourself, you should know that it always recompiles itself.

A gcc "stage 1" build is gcc compiled with your old compiler. The "stage 2" build is gcc compiled with the compiler created in the previous stage. This is the one that gets installed. The "stage 3" build is optional and verifies that the "stage 2" compiler creates the same output as the previous one.

Re:I'll tell you what the problem is... (1, Informative)

Anonymous Coward | more than 9 years ago | (#12409373)

Nice try. When you build GCC you do a "make bootstrap", which does things in multiple stages:
1- Compiles 4.0 with system compiler (3.4.3) and no optimizations
2- Compiles 4.0 with stage 1 compiler, full optimization
3- Compiles 4.0 with stage 2 compiler, full optimization.
4- Checks that stage 2 and 3 produced the same code. The result of stage 3 is the final compiler.

(I might be missing a stage there)

Modern GCCs even have a bootstrap target that adds an extra stage where GCC is profiled, to see which branches are taken more often, and the results are fed back into the next stage so the compiler is optimized for real world usage. Nice stuff really.
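The sequence described above can be sketched as a build recipe (directory names and install prefix are illustrative; `bootstrap` is the standard target, and `profiledbootstrap` is the extra-stage profiled variant the comment mentions):

```shell
# Sketch of a GCC bootstrap build (illustrative paths and version).
# Building outside the source tree is the recommended layout.
mkdir build && cd build
../gcc-4.0.0/configure --prefix=/opt/gcc-4.0

# Stage 1: built with the system compiler; stage 2: built with stage 1;
# stage 3: built with stage 2 and compared against stage 2's output.
make bootstrap

# Profiled variant: an instrumented compiler is built and run on real
# input, and the profile feeds back into the final stage.
# make profiledbootstrap

make install
```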

Re:I'll tell you what the problem is... (0)

Anonymous Coward | more than 9 years ago | (#12409421)

Nice try? Uh, I think the parent post was a JOKE... Here, this [wikipedia.org] might help.

Expected (4, Interesting)

Hatta (162192) | more than 9 years ago | (#12408887)

It was a long time before GCC 3 got better than 2.95. I expect the same thing will happen here.

Re:Expected (0)

guyfromindia (812078) | more than 9 years ago | (#12408945)

Very true!

when? why? (2, Interesting)

ChristTrekker (91442) | more than 9 years ago | (#12409200)

At what point (of 3's evolution) would you say it surpassed 2.95? Why?

Re:Expected (5, Insightful)

Rei (128717) | more than 9 years ago | (#12409208)

I think the problem is that, if I'm not mistaken, he's testing all C code except Povray. The biggest reported improvements in 4.0 were for g++, so using such a small C++ sample base (Povray - one purpose, one set of design principles, few authors) seems bound to produce inaccurate benchmarking.

Further, on his most reasonable C benchmark (the Linux kernel), he only records compile time and binary size, but no performance. I call it the most reasonable benchmark because it has thousands of contributors and covers a wide range of code purposes and individual coding habits - and yet, performance is omitted.

In short, I wouldn't trust this benchmark. Probably the best benchmark would be to build a whole Gentoo system with both, with identical configurations, and check build times and performances ;)

Re:Expected (4, Interesting)

ajs (35943) | more than 9 years ago | (#12409309)

I'm not convinced that this test shows that gcc4 is less effective than gcc3, though.

First off, all of the programs tested are programs that use hand-tooled assembly in the most performance-sensitive code. That has to mean that the compiler is moot in those sections.

A better test would be to compare three things: the hand-optimized assembly under gcc 3 vs the C code (usually there's a configure switch that tells the code to ignore the hand-tuned assembly, and use a C equivalent) under gcc4 vs that same C code under gcc4.

I think you'd see a surprising result, and if the vectorization code is good enough, you should even see a small boost over the hand-tuned assembly (since ALL of the code is being optimized this way, not just critical sections).

first post (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12408894)

first post

What about... (2, Interesting)

elid (672471) | more than 9 years ago | (#12408901)

...Tiger? Wasn't it compiled with GCC 4.0?

Re:What about... (0)

Anonymous Coward | more than 9 years ago | (#12408993)

Most likely the author is using x86...

The parent does bring up a good point: What about benchmarks on the G5 architecture? Apple was who made 4.0 happen...

Power is different. (0)

Anonymous Coward | more than 9 years ago | (#12408996)

You've got to benchmark each platform separately. Sure, they share some higher level optimizations, but the type of processor determines a lot of important stuff (like instruction scheduling) and is dominant.

Re:What about... (5, Informative)

scotlewis (45960) | more than 9 years ago | (#12409023)

Yes and no. The default compiler is GCC 4; however, the kernel and much of the OS (pretty much all of Darwin, in fact) are still compiled with GCC 3, because they haven't completely cleared the codebase of GCC3-isms.

That said, remember that the submitter is talking about GCC4 on x86 platforms, and remember that Apple is putting a lot of work into making sure the PowerPC optimizations are as good as possible. Not to mention things like GCC4's auto-vectorization of code to take advantage of the Altivec unit (which has a more noticeable effect than MMXing x86 code).

It would be nice to see some test results for Apple's GCC versions 3 and 4.

The performance of compiled code (5, Informative)

pclminion (145572) | more than 9 years ago | (#12408921)

This has always bugged me.

Some people spend 10 hours tweaking compiler settings and optimizations to get an extra 5% performance from their code.

Other people spend 2 hours selecting the proper algorithm in the first place and get an extra 500% performance from their code.

To semi-quote The Matrix: One of these endeavors... is intelligent. And one of them is not.
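A small sketch of the scale difference (invented workload): the same question answered with an O(n·m) scan and with an O(n+m) set lookup gives identical results, which is why the algorithm choice, not the compiler flag, is where the big factor lives:

```python
def common_count_naive(xs, ys):
    """O(len(xs) * len(ys)): scan all of ys for every element of xs."""
    count = 0
    for x in xs:
        if x in ys:          # linear scan of a list
            count += 1
    return count

def common_count_fast(xs, ys):
    """O(len(xs) + len(ys)): one pass to build a set, one to probe it."""
    yset = set(ys)
    return sum(1 for x in xs if x in yset)

xs = list(range(0, 2000, 2))      # even numbers below 2000
ys = list(range(0, 2000, 3))      # multiples of 3 below 2000
assert common_count_naive(xs, ys) == common_count_fast(xs, ys)
print(common_count_fast(xs, ys))  # multiples of 6 below 2000 -> 334
```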

Re:The performance of compiled code (1)

GoCoGi (716063) | more than 9 years ago | (#12408938)

Combine a) with b).

Re:The performance of compiled code (1, Interesting)

Chirs (87576) | more than 9 years ago | (#12408948)

At some point you've got the best algorithm, you've profiled, you've hand-optimised, you've got the fastest hardware you can afford....and you *still* need that last 5%.

That's when you spend 10 hours tweaking compiler settings...

Re:The performance of compiled code (3, Insightful)

pclminion (145572) | more than 9 years ago | (#12408982)

At some point you've got the best algorithm, you've profiled, you've hand-optimised, you've got the fastest hardware you can afford....and you *still* need that last 5%. That's when you spend 10 hours tweaking compiler settings...

If you really, positively need an extra 5% performance, you might as well just buy a computer that's 5% faster.

Re:The performance of compiled code (1)

CoderBob (858156) | more than 9 years ago | (#12409029)

Didn't he say "You've got the fastest hardware you can afford"?? There's no guarantee that the hardware will help, either. Some OSes **cough**Windows**cough** seem to like not using hardware effectively.

Re:The performance of compiled code (4, Funny)

Stiletto (12066) | more than 9 years ago | (#12409031)

If you really, positively need an extra 5% performance, you might as well just buy a computer that's 5% faster.

You work at Microsoft, right? No? Intel?

Re:The performance of compiled code (5, Insightful)

Minwee (522556) | more than 9 years ago | (#12409055)

Unfortunately, including a faster computer with every copy of the code you distribute may be prohibitively expensive.

Re:The performance of compiled code (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12409061)

If you really, positively need an extra 5% performance, you might as well just buy a computer that's 5% faster.

Why should they? What business is it of yours how someone spends their time? If someone wants to make their system as fast as they can, that's their business.

You're obviously a small box user. Have you ever worked in the real world, where huge batch runs can take weeks? You think companies should splash out another million or two on new hardware, just because you use a pissy little machine?

Re:The performance of compiled code (4, Interesting)

pclminion (145572) | more than 9 years ago | (#12409106)

You're obviously a small box user. Have you ever worked in the real world where huge batch runs can take weeks?

Yes.

You think companies should splash out another million or too on new hardware, just because you use a pissy little machine?

I think that companies should re-evaluate their "need" for an extra 5% performance. Here's an idea -- if you need something 10 minutes faster, why not start the process 10 minutes sooner?

5% just gets lost in the noise. You beef up your system, making it 5% faster... And then some retard in production makes a mistake and sets you back six weeks.

Re:The performance of compiled code (3, Interesting)

The Snowman (116231) | more than 9 years ago | (#12409249)

I think that companies should re-evaluate their "need" for an extra 5% performance. Here's an idea -- if you need something 10 minutes faster, why not start the process 10 minutes sooner?

In any large organization, the process gets in the way. Some suit decides the product needs a new feature, or needs to ship sooner, or whatever, and this slowly trickles down to the developers who suddenly are put in crunch time where every minute counts. Schedules and deadlines may change daily. People's jobs may be at risk. Shit happens.

Nobody really likes it, but that is sometimes how we arrive at the point where we "need" an extra 5% performance, where we "need" the program to finish ten minutes sooner. Starting earlier is not always an option, usually because you don't know you even have to start *at all* until the last minute.

Re:The performance of compiled code (1)

Avenger337 (840754) | more than 9 years ago | (#12409260)

I disagree -- 5% can be a LOT, particularly if you're running something that takes an incredibly long time.

Sure, maybe your 3.5 hour process only runs 10 minutes faster -- but on the other hand, your 10-week-long simulation of quantum physics finishes several days sooner. 5% is a lot.

Re:The performance of compiled code (1)

putko (753330) | more than 9 years ago | (#12409078)

When you are dealing with expensive hardware, or a cluster, buying more hardware is likely a lot more expensive than optimizing the software.

E.g. buy 500 faster CPUs? Buy some more CPUs? If the CPUs cost enough and take time to order, optimizing the software might be the easy way out.

Re:The performance of compiled code (1)

pclminion (145572) | more than 9 years ago | (#12409138)

When you are dealing with expensive hardware, or a cluster

If you're dealing with a cluster, chances are you can make improvements significantly larger than 5%.

Re:The performance of compiled code (0)

Anonymous Coward | more than 9 years ago | (#12409343)

If you really, positively need an extra 5% performance, you might as well just buy a computer that's 5% faster.

To see the problem with that reasoning, ask yourself this question: when should you first apply it? Are you sure that now is a good time?

Progress in compiler optimization is made in increments of 5% or less. If you dismiss insignificant-sounding optimizations in favor of waiting for the next big breakthrough, you will find that those breakthroughs are few and far between.

At this point, the low-hanging fruit has all been picked. It's incrementalism or nothing.

Re:The performance of compiled code (3, Insightful)

Jeremy.DeGroot (878927) | more than 9 years ago | (#12408999)

You should do both. Choosing the right algorithms is crucial, no doubt about it. But if you've got a massive database application, that 5% can represent a huge amount of work and be worth the trouble. A little bit of extra performance can, in many cases, go a long, long way towards adding to the value of the software. Both endeavors are intelligent in many (if not most) cases. Performance is important in software, and any little bit you can squeeze out will likely be a big deal.

Re:The performance of compiled code (5, Insightful)

kfg (145172) | more than 9 years ago | (#12409064)

And in both groups you will find people who believe that execution speed is the measurement of code quality.

KFG

Re:The performance of compiled code (1)

mattgreen (701203) | more than 9 years ago | (#12409283)

Probably because execution speed is objective and therefore suitable for chest-thumping. You do not see nearly as many people advertising the clarity of their code in the same way.

(Well, maybe the Java crowd at times, but they tend to confuse having only one way to do things with clarity.)

But, most people... (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#12409176)

Just throw some bloated slow-ass algorithm together in Java.

Ah Java, where time is glacial.

Re:The performance of compiled code (4, Insightful)

mattgreen (701203) | more than 9 years ago | (#12409212)

It is because it is easier to delve into needlessly technical aspects afforded by compiler settings and 'optimizations' than it is to admit that one's algorithm is not sound. Kids running Gentoo delude themselves into thinking that omitting the frame pointer on compiles is going to make a massive difference in terms of performance, and fail to remember it makes bug hunting far more difficult when applications crash. Additionally, the 5% gain mentioned can be a severe overstatement. I frequent a game programming board, and the widespread use of C++ has led to an abundance of nano-optimization threads, the most amusing of which was an attempt to optimize strlen().

Optimizing every single line of code is a complete waste of time, since the 80/20 rule generally applies. Use a profiler to determine where that 20% is.
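A minimal Python sketch of that advice, using the standard library's cProfile/pstats (the workload and function names are invented):

```python
import cProfile
import io
import pstats

def hotspot(n):
    # the inner loop where nearly all the time goes (the "20%")
    return sum(i * i for i in range(n))

def workload():
    total = 0
    for _ in range(50):
        total += hotspot(5000)
    return total

prof = cProfile.Profile()
result = prof.runcall(workload)

# Report the top entries by cumulative time -- this, not guesswork,
# tells you which small part of the code is worth optimizing.
out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
print(result)
```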

Re:The performance of compiled code (3, Insightful)

IHateSlashDot (823890) | more than 9 years ago | (#12409399)

The point of this article is compiler optimizations, not algorithm selection. At the point that I look at compiler performance, I've already done all of the algorithm tuning, so your point is moot. This is a very interesting benchmark for those of us who already write good code and want the compiler to make the best of it.

intel compiler (1, Interesting)

Anonymous Coward | more than 9 years ago | (#12408924)

wasn't one of the goals of 4.0 to become more like intel's prized x86 compiler?

Re:intel compiler (5, Interesting)

MORB (793798) | more than 9 years ago | (#12409153)

The reason the Intel compiler generates faster code is that it does auto-vectorisation (i.e., it automatically finds out how to transform some code patterns to take advantage of native vector operations, such as those provided by SSE). They started to implement this in GCC 4.0, but it's a very first iteration that, from what I know, is still kinda limited. I'm not even sure it's enabled by default, even at -O3. There are lots of improvements there targeted at GCC 4.1.

JOIN THE CAMPAIGN TO ANNIHILATE GARCIA (-1, Troll)

SneakyTroll (880717) | more than 9 years ago | (#12408941)

In my other highly moderated account I get mod points regularly. I have set about systematically modding down Garcia with them to strike a blow for good taste and good sense at Slashdot.

But the fight is long and arduous! I need other strong, willing souls to fight the good fight! Join me in modding Garcia to the level of a pathetic troll! Whenever you get mod points, go through each of Garcia's posts in his history and mod them 'Overrated'. That way they will not be meta-modderated and you can continue your work indefinitely. Try and mod the older, less active stories where he is less likely to be modded back up. Join with me in my glorious quest! Join me in the first mod lynching on Slashdot of this serial irritant.

Yours,

SneakyTroll.

I got your value (4, Funny)

Anonymous Coward | more than 9 years ago | (#12408943)

"Like a baby, we won't really appreciate its value until it's matured a bit."

Does this mean I have to wait until it's 18?

Re:I got your value (5, Funny)

Martin Blank (154261) | more than 9 years ago | (#12408995)

Generally yes, although you'll only have to wait until 16 in some states.

email (1, Informative)

Anonymous Coward | more than 9 years ago | (#12408954)

I tried to email you about your "thethere" mistake, but you don't want to talk to people apparently. Not the most important of corrections maybe, but anyway...

scott.ladd@coyotegulch.com
SMTP error from remote mailer after RCPT TO::
host smtp.secureserver.net [64.202.166.12]:
553 217.209.223.* mail rejected due to excessive spam

Fast KDE compile. (4, Informative)

Anonymous Coward | more than 9 years ago | (#12408965)

It's damn fast for compiling KDE, as someone tested [kdedevelopers.org].

Re:Fast KDE compile. (3, Informative)

badfish99 (826052) | more than 9 years ago | (#12409016)

Well, the article you link to starts with the words:
KDE sources now blacklist gcc 4.0.0 because it miscompiles KDE
It must be easy to compile fast if you don't mind getting the wrong answer.

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409125)

It miscompiles KDE for what architecture, with what CFLAGS, what optimization, what system?

It has been blacklisted because some core KHTML components failed.

But all in all the compiled KDE 4.0.0 that I am using works perfectly but otoh I compiled it with these flags:

-O0 -Wl,--as-needed -pipe -g

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409143)

KDE 4.0.0

I want a time machine too!!

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409165)

... erm, quite obviously a typing mistake :) KDE and GCC both 3 letters :)

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409187)

Just out of interest, when is KDE 4 scheduled for release? I've tried googling, but all I get is a bunch of hopelessly out-of-date links (e.g. suggesting December 2004, etc). What's the most recent estimate, straight from the horse's mouth, if possible?

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409223)

At least a year away, probably more. The porting of the base libraries to Qt4 will start after the long awaited switch from cvs to svn is done, then it will be a kde 3.5 "usability and application" release, then kde 4.

Re:Fast KDE compile. (1)

molnarcs (675885) | more than 9 years ago | (#12409257)

Can someone give us a link to an explanation of why it was blacklisted? What specific problems they had with it?

Re:Fast KDE compile. (0)

Anonymous Coward | more than 9 years ago | (#12409320)

it compiles, but sometimes the code it generates gives unexpected results (AFAIK only for khtml, the other stuff works well)

The Future? (2, Interesting)

liam193 (571414) | more than 9 years ago | (#12408968)

Version 4.0.0 is laying a foundation for the future, and should be seen as a technological step forward with new internal architectures and the addition of Fortran 95.

While I know the benefits of Fortran 95 are a big thing, saying it's a technological step forward to incorporate for the first time a 10 year old standard seems a bit ridiculous. When I first saw this article I had to check my calendar to make sure it was May 1st and not April 1st.

Re:The Future? (5, Funny)

discordja (612393) | more than 9 years ago | (#12409087)

I can see how that'd throw you off since it's May 2. :)

Re:The Future? (2, Insightful)

Anonymous Coward | more than 9 years ago | (#12409158)

Considering the glacial pace of change in the Fortran world, using a "standard" created as recently as 1995 is considered to be a radical and reckless act.

We have this thing on Earth, called tact. (3, Funny)

Tim Browse (9263) | more than 9 years ago | (#12408977)

Like a baby, we won't really appreciate its value until it's matured a bit.

Is that what you say to new parents? :-)

Re:We have this thing on Earth, called tact. (1)

t_allardyce (48447) | more than 9 years ago | (#12409070)

Yeah, just before they find out about shitting, puking and waking up at 3am.

Screenshots? (4, Funny)

jmcneill (256391) | more than 9 years ago | (#12408983)

Where are the screenshots?

Re:Screenshots? (1)

wawannem (591061) | more than 9 years ago | (#12409135)

Here you go: bash$ gcc -o test main.c bash$ As you can see, the new version of GCC clearly has some hurdles to jump before being production ready.

Re:Screenshots? (2, Informative)

wawannem (591061) | more than 9 years ago | (#12409182)

damn... I meant:

Here you go:

bash$ gcc -o test main.c
bash$

Compilation Speed Test by a KDE developer (5, Interesting)

Anonymous Coward | more than 9 years ago | (#12408992)

http://www.kdedevelopers.org/node/view/1004

Qt:
            -O0    -O2
gcc 3.3.5   23m40  31m38
gcc 3.4.3   22m47  28m45
gcc 4.0.0   13m16  19m23

KDElibs (with --enable-final):
            -O0    -O2
gcc 3.3.5   14m44  27m28
gcc 3.4.3   14m49  27m03
gcc 4.0.0   9m54   23m30

KDElibs (without --enable-final):
            -O0
gcc 3.3.5   32m56
gcc 3.4.3   32m49
gcc 4.0.0   15m15

I think KDE and Gentoo people will like GCC 4.0 ;)

Re:Compilation Speed Test by a KDE developer (1)

taniwha (70410) | more than 9 years ago | (#12409252)

except that some stuff doesn't quite compile correctly yet ... hold off on building KDE with it for a while

Question to the maker of Acovea (0)

Anonymous Coward | more than 9 years ago | (#12409010)

What happened to the -ftracer option? Is it still generally useful for optimization in GCC 3.4 (and later)? Or has it now become part of one of the -O options?

What about windows? (1)

Spy der Mann (805235) | more than 9 years ago | (#12409011)

One of the problems with MINGW32 is that the linker doesn't respect --gc-sections. When I read about the improved dead code elimination (in the "what's new" of GCC 4.0), I wondered whether we Windows users could *finally* deliver executables as small as those made with the VC++ compiler.

Any info on this? When's the MinGW port going to be done? Has anyone tested the unofficial MinGW build (forgot the URL, sorry)?

fpmath=sse (1)

Chemisor (97276) | more than 9 years ago | (#12409027)

Where would I find out on which architectures fpmath is set to sse by default? The info file doesn't seem to have that information. And is it possible to tell the compiler to always use SSE instead of the FP registers, so I wouldn't have to have so many emms's all over the code? (In generic MMX-using routines, which don't usually know what'll happen after them.)

Re:fpmath=sse (0)

Anonymous Coward | more than 9 years ago | (#12409366)

It's the default setting for the x86-64 architecture. However, I tried it on an Athlon XP and wasn't really impressed by the results; on my project it was significantly slower than 387. So it might be sensible to try both settings on x86-64 too.

kettle? black? (2, Insightful)

dem3tre (793471) | more than 9 years ago | (#12409044)

I love the open source movement but I wonder why the following comment is OK for open source projects and not close source?

quote "That said, no one should expect a "point-oh-point-oh" release to deliver the full potential of a product, particularly when it comes to a software system with the complexity of GCC."

I bet no one would dare say that about certain product from Redmond.

Re:kettle? black? (5, Informative)

Dan Berlin (682091) | more than 9 years ago | (#12409082)

Significant difference. If you ask gcc folk (like me), we'd happily tell you that 4.0 will probably be, performance-wise, a win in some cases and a loss in others. Anytime you add large numbers of optimizations, it takes a while to tune everything else so that we get good generated code. 4.0 is more a test of the new optimizers than something that is supposed to produce spectacular results in all cases.

Re:kettle? black? (1)

superpulpsicle (533373) | more than 9 years ago | (#12409397)

Boy that sounds like we should stay away from GCC 4.0 until at least 4.1 and beyond.

Re:kettle? black? (1)

Spy der Mann (805235) | more than 9 years ago | (#12409139)

He did NOT say "fix bugs". He said "deliver the full potential", i.e. "developing over the new features".

Re:kettle? black? (1)

cbrocious (764766) | more than 9 years ago | (#12409146)

MS doesn't use the "release early, release often" methodology.

Re:kettle? black? (1)

canadiangoose (606308) | more than 9 years ago | (#12409296)

If the point releases from Redmond were free upgrades, then it might not matter so much. Last I checked, upgrading from Windows NT 5.0 to 5.1 cost several hundred dollars.

Re:kettle? black? (1)

k98sven (324383) | more than 9 years ago | (#12409361)

quote "That said, no one should expect a "point-oh-point-oh" release to deliver the full potential of a product, particularly when it comes to a software system with the complexity of GCC."

I bet no one would dare say that about certain product from Redmond.


I don't know about that... But first off, GCC (and other free software projects) tend to be much more upfront about the fact that radical changes also mean new bugs. Slashdot types (read: engineers) tend to value that kind of technical honesty over marketspeak.

Secondly.. when did Microsoft last develop something radically new? Most of their products are over a decade old, and a lot of those which aren't are based on technology they bought up from competitors.

Thirdly, unlike Microsoft, the GCC guys aren't doing everything in their power to get me to upgrade.

Like a baby (1)

d_54321 (446966) | more than 9 years ago | (#12409096)

Like a baby, we won't really appreciate its value until it's matured a bit.

And if it dies, we'll be left with a lot of jokes [dead-baby-joke.com]

The value of a baby (5, Funny)

Laxitive (10360) | more than 9 years ago | (#12409099)

"Like a baby, we won't really appreciate its value until it's matured a bit."

Seriously, this is why I don't appreciate babies. At least after about 4 or 5 years, they're useful for mild manual labour. Sure they'll complain and cry, but all you gotta do is tie their dishwashing to the number of fish heads they're allotted that week. Works pretty well, I gotta say. Anyway, at least they're not a net productivity drain like babies are.

Anyway, what I mean to say is: from your description, it looks like I'll be staying away from GCC 4 for a while, too. Goddamn babies.

-Laxitive

Re:The value of a baby (1)

CoderBob (858156) | more than 9 years ago | (#12409150)

Don't forget that you can chain them to whatever equipment they will be operating by that age. You don't have to worry about them ripping the chain off yet!

Kind of a weird review (4, Interesting)

Just Some Guy (3352) | more than 9 years ago | (#12409109)

As far as I'm concerned, unless you're using "-Os" because you're deliberately building small binaries at the expense of all else - say, for embedded development - the resulting binary size is completely irrelevant as a compiler benchmark. What if the smaller result uses a slower, naive algorithm (which in this case would mean choosing an obviously-correct set of opcodes to implement a line of C instead of a less-obvious but faster set)?

Second, the runtime benchmarks were close enough to be statistically meaningless in most cases. The author concludes with:

Is GCC 4.0 better than its predecessors?

In terms of raw numbers, the answer is a definite "no".

My take would have been "in terms of raw numbers, it's not really any better yet." It's close enough to equal (and slower in few enough cases that I'd be willing to accept them), though, that I'd be willing to switch to it if I could do so without having to modify a lot of incompatible code. It's clearly the way of the future, and as long as it's not worse than the current gold standard, why not?

Re:Kind of a weird review (2, Informative)

larien (5608) | more than 9 years ago | (#12409245)

Smaller binaries = quicker load time (less disk I/O or memory being moved around) and smaller memory footprint. Yes, this is mostly in embedded apps where memory sizes might still be in KB rather than GB, but if you're analyzing performance, memory usage is relevant, even if it may not be your primary concern.

Re:Kind of a weird review (1)

MORB (793798) | more than 9 years ago | (#12409362)

It's a very weird review.

I find it strange that the guy didn't mention the new autovectorisation optimisations that have been added (granted, they are still young and far from complete in 4.0, from what one can read on the gcc wiki).
After all, the speed of the generated code seems to be his main concern.

However, not a single word about other useful and interesting improvements and new features.

- Faster C++ compilation, especially with -O0. This alone makes the release very worthwhile for C++ developers.

- The ability to define which symbols to export and not to export from shared objects (.so). It won't mean anything to the end-user right now, but down the line, things like KDE that are written in C++ and use a lot of shared objects will see their size and loading times decrease notably thanks to this.
Of course, they first need to make the necessary changes on their side to take advantage of it.

- Compile-time memory and pointer debugging and protection, using a new library called libmudflap.
This should prove easier to use, and much less of a performance hit, than tools like valgrind, as well as probably not too hard to port to Windows.

Also, he doesn't talk about how much GCJ got upgraded for 4.0. They completed the merge with Classpath, added the ability for gcj to compile a whole .jar as a native .so, and improved the way native and non-native Java code work together, to the point that it can run some big applications like Eclipse out of the box.

Basically, this is a review from a Gentoo end-user, who as such considers gcc only from the performance standpoint.
Don't forget that the main target audience of compilers is developers, and for developers, gcc 4.0 is a very attractive release.

Outstanding!!! (0)

Anonymous Coward | more than 9 years ago | (#12409127)

This is just what Linux needs: more bloat and less speed.

Oh well, release early, release often. Especially when you don't have improvements to release.

The ? operator (5, Funny)

shreevatsa (845645) | more than 9 years ago | (#12409155)

The worst part is that they now say that the <?, >?, <?= and >?= operators are deprecated, and will be removed. Damn, I liked them so much. Sure, they weren't part of the standard, and only a GCC extension, but it's just so much more fun to say a = b <? c than to say a = min(b,c), or even a = b < c ? b : c. The best use was saying a <?= b instead of the painful a = min(a,b).

Re:The ? operator (0)

Anonymous Coward | more than 9 years ago | (#12409301)

min(you should get, out more)

Re:The ? operator (3, Interesting)

keshto (553762) | more than 9 years ago | (#12409321)

Dude, you've never coded in a commercial environment, have you? Or are all your company's projects meant to be compiled by a specific version of gcc only, regardless of the OS and architecture? I use gcc exclusively these days, but it's for my research. Back when I was working, we had to code for both VC++ and g++. At least, those of us who worked on core-engine code did. Fixing some moron's VC++-specific idiocy sucked.

Re:The ? operator (0)

Anonymous Coward | more than 9 years ago | (#12409401)

Sure, they weren't part of the standard...

Doesn't that violate the purpose of having standards? Remember Microsoft's "Java"? Their CSS/PNG-rendering in IE? Their new "XML" proposal?

a = b <? c
a = min(b,c)

The latter is far more clear, IMO. Anyone can look at it and guess that it returns the minimum of b and c. The first? I'd have to look it up in the documentation.

It compiles KDE incorrectly (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#12409157)

Put it back in its box and return to sender.
Had MS's compiler done this, imagine the flames!

Do the new models replace or confuse old ones? (3, Interesting)

expro (597113) | more than 9 years ago | (#12409159)

I agree that this compiler is a cornerstone of free software.

But it was very frustrating to me to try to port the compiler to a new platform by modifying existing back ends for similar platforms.

After spending a few months on it (m68k in this case), I could not escape the layers of hack upon cruft upon hack upon cruft that made it extremely difficult to make even fairly superficial mods, because everyone seemed to be using the features differently, and all the power seemed lost in hacks that made it impossible to do simple things (for me, anyway). I am quite familiar with many assemblers and optimizing compilers.

I hope that the new work makes a somewhat-clean break with the old, otherwise, I would fear yet another layer to be hacked and interwoven, with the other ones that were so poorly fit to the back ends.

I suspect that not all backends are the same, and perhaps the same experience would not hold for a more popular target, but it seems to me it shouldn't be that hard to create a model that is more powerful yet simpler. That would seem to me to be a major step forward, and would enable much greater optimization, utilization, maintainability, etc.

-ftree-* (5, Interesting)

Anonymous Coward | more than 9 years ago | (#12409172)

The whole point of gcc 4.0.0 is the tree-ssa thing. The author of this test didn't seem to notice that this stuff doesn't get enabled by -O2 or -O3, but has to be enabled by hand. This includes autovectorization (-ftree-vectorize), among other things which may make a difference.

If I were him, I'd repeat the tests, enabling the -ftree-* stuff when building with gcc 4.0.0.

Re:-ftree-* (1)

Navreet (703315) | more than 9 years ago | (#12409335)

mod parent up!

This is true! I gained a 20%+ boost [with my personal project] when I compiled with the -ftree-vectorize option.

Anyone know why this isn't included in -O2?

More is Slower??? (1)

Nom du Keyboard (633989) | more than 9 years ago | (#12409180)

If you compile a great deal of C++, you'll want to investigate GCC 4.0.

So the more code you have, the more you want it to run more slowly. Perfect sense.

Nice analogy... (1, Insightful)

DarcSeed (636445) | more than 9 years ago | (#12409201)

Like a baby, we won't really appreciate its value until it's matured a bit

I'll just have to make sure you never babysit for me, if babies are that value-less to you.

What the hell? (1)

oGMo (379) | more than 9 years ago | (#12409218)

This is the stupidest review of GCC 4 I can imagine. I won't go so far as to call it the typical Gentoo stereotype of piling on flags to get "really fast" results, but you know some people are thinking it.

Where are the C++ benchmarks? That's what's supposed to be big in GCC4. Compiling C++, and doing it faster than GCC3 did (which isn't hard). How fast does this compile KDE? Qt? How big are the binaries? etc. I see all of ONE compile time statistic, and that's on the linux kernel.

WTF is this? Can someone with a clue please do a real review?

Re:What the hell? (2, Interesting)

Kupek (75469) | more than 9 years ago | (#12409285)

One of the major changes in the 4.0.0 release is the internal reorganization that allows for more aggressive optimizations. Hence, he tested the optimized performance of the latest 3.x against 4.0.0. How do you tell the compiler to optimize? Well, you have to pass it "lots of flags."

non-x86 arch? (3, Interesting)

ChristTrekker (91442) | more than 9 years ago | (#12409235)

What about the performance on MIPS? PPC? C'mon, people...enquiring minds want to know!

This is expected, I think (5, Informative)

diegocgteleline.es (653730) | more than 9 years ago | (#12409290)

I found this in the osnews announcement

"Before we get a bunch of complaints about the fact that most binaries generated by GCC 4.0 are only marginally faster (and some a bit slower) than those compiled with 3.4, let me point out a few things that I've gathered from casually browsing the GCC development lists. I'm neither a GCC contributor nor a compiler expert.

Prior to GCC 4.0, the implementation of optimizations was mostly language-specific; there was little or no integration of optimization techniques across all languages. The main goal of the 4.0 release is to roll out a new, unified optimization framework (Tree-SSA), and to begin converting the old, fragmented optimization strategies to the unified framework.

Major improvements to the quality of the generated code aren't expected to arrive until later versions, when GCC contributors will have had a chance to really begin to leverage the new optimization infrastructure instead of just migrating to it.

So, although GCC 4.0 brings fairly dramatic benefits to compilation speed, the speed of generated binaries isn't expected to be markedly better than 3.4; that latter speedup isn't expected until later installments in the 4.x series."

Like a baby (4, Funny)

Anonymous Coward | more than 9 years ago | (#12409395)

Like a baby, we won't really appreciate its value until it's matured a bit.

"Come here son. Did you know your mother and I almost decided to not keep you when you were born? You were just a baby at the time, you didn't seem to have any value. I mean, seriously, what use is there for a baby? I'm glad we didn't make that mistake.
Now go play outside and don't come back before dinner time, and pick up the trash when you leave."

There are a bunch of papers on Tree-SSA... (1)

tcopeland (32225) | more than 9 years ago | (#12409398)

...on Diego's web site here [redhat.com] .

It really does depend on the code (4, Informative)

DoofusOfDeath (636671) | more than 9 years ago | (#12409418)

There was one test case I did for my own use. I've got a small C++ program that's computationally heavy and has a small working set of memory.

On that program (on a P4) I got an 11% reduction in runtime using GCC 4 vs. GCC 3.3.5. This was actually a big deal for me at work.

The lesson here: your mileage with GCC 4.0's improvements may vary from the benchmarks, and you might want to try it on your own code.