
Speed Test: Comparing Intel C++, GNU C++, and LLVM Clang Compilers

samzenpus posted about a year ago | from the greased-lightning dept.


Nerval's Lobster writes "Benchmarking is a tricky business: a valid benchmark tries to remove all extraneous variables in order to get an accurate measurement, a process that's often problematic; sometimes it's nearly impossible to remove all outside influences, and often the process of taking the measurement can skew the results. In deciding to compare three compilers (the Intel C++ compiler, the GNU C++ compiler (g++), and the LLVM clang compiler), developer and editor Jeff Cogswell takes a number of 'real world' factors into account, such as how each compiler deals with templates, and comes to certain conclusions. 'It's interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time,' he writes. 'But I wasn't able to test much regarding parallel processing with clang, since its Cilk Plus extensions aren't quite ready, and the Threading Building Blocks team hasn't ported it yet.' Follow his work and see if you agree, and suggest where he can go from here."


first post (-1)

Anonymous Coward | about a year ago | (#45329469)

first post

Re:first post (5, Funny)

Anonymous Coward | about a year ago | (#45329523)

compiled with clang

Re:first post (2, Funny)

Anonymous Coward | about a year ago | (#45329719)

man, it took a long time to read it.

Re:first post (-1)

Anonymous Coward | about a year ago | (#45330259)

You're that retarded?

Re:first post (0)

Anonymous Coward | about a year ago | (#45329565)

Mod this up please.

Re:first post (-1)

Anonymous Coward | about a year ago | (#45329633)

fuck you, and fuck the GP

Re:first post (2)

Mitchell314 (1576581) | about a year ago | (#45330097)

Why, so we can have more first posters?

Re:first post (1)

Anonymous Coward | about a year ago | (#45331273)

Why, so we can have more first posters?

Based on my analysis of 1,000 random /. stories I have found that the average number of first posts per story is exactly 1. There's no reason to spread this FUD about more first posters.

Re:first post (1)

sce7mjm (558058) | about a year ago | (#45329603)

first post++

Re:first post (3, Funny)

neokushan (932374) | about a year ago | (#45330191)

first ++pre

Re:first post (4, Funny)

TheNastyInThePasty (2382648) | about a year ago | (#45330433)

#include <stdio.h>

void FirstPost(int a, int b)
{
   if(a < b)
       printf("I got first post!");
   else
      printf("No, I got first post!");
}

int main(int argc, const char** argv)
{
   int i = 0;

   // What prints out here?
   FirstPost(i++, i++);
   return 0;
}

Re:first post (2)

mark-t (151149) | about a year ago | (#45330593)

Assuming a typical C calling convention... "No, I got first post!" will be printed, with a being 1 and b being 0 in the call to FirstPost. This is because final arguments are generally evaluated and pushed onto the stack before earlier ones.

Although the standard says this behavior is undefined, in practice almost all modern C compilers will produce the output I've described here.

Re:first post (1)

Anonymous Coward | about a year ago | (#45330815)

Actually, it's not undefined behavior. It's unspecified behavior.

Re:first post (4, Informative)

EvanED (569694) | about a year ago | (#45330987)

If it were just up to the order of evaluation of the function arguments, then it would be unspecified. However, the program also modifies the same object twice without an intervening sequence point, and that puts it into undefined behavior territory (6.5/2, C99 draft standard [open-std.org] ).
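
To make that concrete, here is a minimal sketch (an illustrative rewrite, not anything from TFA) of how the call can be restructured so each modification of i is separated by a sequence point, giving a fully defined result:

#include <stdio.h>

void FirstPost(int a, int b)
{
    if (a < b)
        printf("I got first post!\n");
    else
        printf("No, I got first post!\n");
}

int main(void)
{
    int i = 0;

    /* Each full statement ends with a sequence point, so the order of
     * the two increments is now well defined: a == 0, b == 1. */
    int a = i++;
    int b = i++;

    FirstPost(a, b);  /* always prints "I got first post!" */
    return 0;
}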

Re:first post (2)

elbonia (2452474) | about a year ago | (#45331105)

Clang will just issue a warning that you are making multiple unsequenced modifications. This is undefined in the C spec, and the compiler just increments i sequentially, printing "I got first post!". Sequence points like this are hard to specify for all cases, which is why the C99 spec leaves it undefined. In C11 a detailed memory model has been created which should define most cases. http://en.wikipedia.org/wiki/C11_(C_standard_revision) [wikipedia.org]

Confirmed with:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix

Re:first post (1)

Anonymous Coward | about a year ago | (#45330773)

The result may be nasal demons.

Re:first post (0)

Anonymous Coward | about a year ago | (#45330933)

It will not print anything. I compile with -Wall -Werror and get:

x.C: In function ‘int main(int, const char**)’:
x.C:17:23: error: operation on ‘i’ may be undefined [-Werror=sequence-point]
        FirstPost(i++, i++);
                      ^
cc1plus: all warnings being treated as errors

Re:first post (1)

Anonymous Coward | about a year ago | (#45330993)

hello world

Funny benchmarks (2, Insightful)

Anonymous Coward | about a year ago | (#45329551)

The benchmarks in TFA are a little funny. Why is system time so large while user time so small? The only time I've seen this in real applications is when there is major core contention for resources.

Re:Funny benchmarks (3, Interesting)

Trepidity (597) | about a year ago | (#45329935)

This looks like it's testing the compile-time, in which case a large % of time being system time isn't that uncommon. Lots of opening and closing small files, generating temp files, general banging on the filesystem. Can heavily depend on your storage speed: compiling the Linux kernel on an SSD is much faster than on spinning rust.

Re: Funny benchmarks (1)

Anonymous Coward | about a year ago | (#45329969)

No compiler version numbers provided? G++ versions differ wildly even across small increments. One needs to know the version of the benchmarked software for a fair comparison.

Re: Funny benchmarks (1)

coats (1068) | about a year ago | (#45330905)

Nor what flags he means by "full optimization". Nor what the hardware really is - for some problems, with the right flags Haswell performance is very different from Core-2.

Re:Funny benchmarks (4, Interesting)

godrik (1287354) | about a year ago | (#45330317)

That's normal: there is hyperthreading on that machine, and it screws up that kind of measurement. You should always use wall-clock time when dealing with parallel codes. You should also repeat the test multiple times and discard the first few results, which the author did not do; that is standard practice in parallel programming benchmarks, and since he skipped it, I assume he does not know much about benchmarking. Lots of parallel middleware has high initialization overhead. This tends to be particularly true for Intel tools.
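
To illustrate the wall-clock vs. CPU-time distinction (a minimal sketch assuming a POSIX system with clock_gettime; the kernel being timed is a placeholder):

#include <stdio.h>
#include <time.h>

/* Wall-clock seconds from a monotonic clock: what a user actually waits. */
static double wall_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double wall0 = wall_seconds();
    clock_t cpu0 = clock();   /* CPU time, summed across all threads */

    /* ... run the parallel kernel being benchmarked here ... */

    double wall = wall_seconds() - wall0;
    double cpu  = (double)(clock() - cpu0) / CLOCKS_PER_SEC;

    /* On a parallel code, cpu can legitimately exceed wall by roughly
     * the number of busy cores; wall-clock is the number to report. */
    printf("wall: %.3f s, cpu: %.3f s\n", wall, cpu);
    return 0;
}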

Re:Funny benchmarks (5, Insightful)

robthebloke (1308483) | about a year ago | (#45330515)

I agree. He's not testing compiled-code performance; he's just created a set of tests which will all be memory-bandwidth limited. FTA:

I’m testing these with an array size of two billion.

That's all I needed to read to ignore him completely. Completely and utterly pointless. If g++ won, it is likely because it utilised stream intrinsics to avoid writing data back to the CPU cache, which would have freed up more cache and minimised the number of page faults. This will not in any way test the performance of the CPU code; it will just prove that your 1333MHz memory is slower than your 3GHz processor. This is why you don't profile code (wrapped up in a stupid for loop), but profile whole applications instead. From my own tests (measuring the performance of large-scale applications using real-world data sets), intel > clang > g++ (although the difference between them is shrinking). The author of the article hasn't got a clue what he's doing. FTA:

Notice the system time is higher than the elapsed time. That’s because we’re dealing with multiple cores.

No it isn't. It's because your CPU sits idle whilst it waits for something to do.
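
The distinction can be sketched in a few lines (array sizes and coefficients are arbitrary; this is an illustration, not TFA's code). The first loop moves far more bytes than it computes, so every compiler's output waits on memory; the second does enough arithmetic per element that code generation actually shows up in the timings:

#include <stdlib.h>

#define N (1 << 26)   /* ~64M doubles: far larger than any cache */

/* Bandwidth-bound: one multiply per 16 bytes moved. Any compiler's
 * output runs at the speed of the memory bus, not the CPU. */
void scale(double *restrict dst, const double *restrict src)
{
    for (size_t i = 0; i < N; i++)
        dst[i] = 2.0 * src[i];
}

/* Compute-bound: several dependent operations per element loaded, so
 * the quality of the generated code is what dominates. */
double polynomial(const double *src)
{
    double acc = 0.0;
    for (size_t i = 0; i < N; i++) {
        double x = src[i];
        acc += ((x * x + 3.0) * x + 5.0) * x + 7.0;  /* nested (Horner) form */
    }
    return acc;
}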

Re:Funny benchmarks (5, Insightful)

Anonymous Coward | about a year ago | (#45331043)

Here's another tipoff that the guy is clueless about benchmarking, talking about a test which does FP math:

I’m not initializing the arrays, and that’s okay

Actually, it's not. This is a bad mistake which totally invalidates the data. Many FPUs have variable execution time depending on input data. There is often a large penalty for computations involving denormalized numbers. If uninitialized data arrays happen to be different across different compilers (and they might well be), execution time can vary quite a lot for reasons completely unrelated to compiled code quality.

It's not limited to FP, either. I remember at least one PowerPC CPU which had variable execution time for integer multiplies -- the multiplier could detect "early out" conditions when one of the operands was a small number, allowing it to shave a cycle or two off the execution time.

The moral of the story: making sure that input data for benchmarks is always the same is very important, even when it's trivially obvious that the code will execute the exact same instruction count for any data set.
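
A minimal sketch of the denormal effect described above (the constant 1e-310 is subnormal for IEEE doubles; the loop count and timings are illustrative, and the penalty varies by FPU):

#include <stdio.h>
#include <time.h>

#define N 50000000

/* Multiplies a (volatile, so neither the load nor the multiply can be
 * hoisted out of the loop) value N times and reports the CPU time. */
static double time_mul(double val)
{
    volatile double a = val, sink = 0.0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)
        sink = a * 1.5;
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    /* 1.0 is a normal double; 1e-310 is subnormal. On many x86 FPUs the
     * second loop runs several times slower even though the instruction
     * sequence is identical. Flush-to-zero modes (e.g. via -ffast-math)
     * hide the effect. */
    printf("normal:   %.3f s\n", time_mul(1.0));
    printf("denormal: %.3f s\n", time_mul(1e-310));
    return 0;
}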

Compile time is irrelevant. (4, Insightful)

Anonymous Coward | about a year ago | (#45329569)

Which one produced the fastest code?

Re:Compile time is irrelevant. (2, Insightful)

0123456 (636235) | about a year ago | (#45329627)

Which one produced the fastest code?

My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.

Re:Compile time is irrelevant. (4, Insightful)

ledow (319597) | about a year ago | (#45329685)

But over the lifetime of any average program, runtime should outweigh compile time by orders of magnitude.

Otherwise, honestly, why bother to write the program efficiently at all?

And if you want to decrease compile times, it's easy - throw more machines and more power at the job. If you want to decrease runtime, then ALL of your users have to do that.

Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329815)

But over the lifetime of any average program, runtime should outweigh compile time by orders of magnitude.

Which is irrelevant for an application that is sitting idle 80% of the time.

Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

And if he does all that, the faster compiler still wins.

Re:Compile time is irrelevant. (5, Informative)

rfolkker (443051) | about a year ago | (#45330089)

I have worked on projects that have taken upwards of 8 hours for a full compile. There is a lot of validity to weighing the business impact of different compilers.

The current mentality of throwing more horsepower at a problem is not always the practical or logical answer. If you can improve your overall compile time, it can improve your productive time.

From a Build Engineering perspective, understanding why a project takes the time it does to compile is one of the most important metrics.

Not only do I monitor how long a project takes to compile, but I also keep a running average, and track highs and lows to identify compile-time spikes.

We monitor processor(s), disk access speeds, memory loads, build warnings, change size, concurrent builds, etc.

We look at all possible solutions. With the current build tools we have, we can either provision another build system for the queue or, if necessary, add memory, disk space, faster drives, more processors, or even upgraded software. We have gone as far as home-grown fixes to get around issues until better solutions become available.

All of this needs to be accounted for, so not only is compile time relevant, but what is CAUSING those compile times is relevant too.

Re:Compile time is irrelevant. (1)

non0score (890022) | about a year ago | (#45330265)

Or maybe people should just stop using so many templates, and use code generation instead. Oh, and supporting the Apple/LLVM initiative to modularize C++ would be doubleplusgood too (think #import in C++).

Re:Compile time is irrelevant. (1)

ebno-10db (1459097) | about a year ago | (#45330897)

Lipstick on a pig.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45330533)

Eight hours? Try a week.
Grab this book and fix your build: Large-Scale C++ Software Design by John Lakos

Re:Compile time is irrelevant. (2, Funny)

F.Ultra (1673484) | about a year ago | (#45330951)

A week? Try Gentoo.

Re:Compile time is irrelevant. (1)

Anonymous Coward | about a year ago | (#45330673)

Who is your customer?

If you're working on a worthwhile project you will have %BIGNUMs of customers (or at least %BIGNUMs of customer-hours per day). You have to compile each build once; your customers will see the benefit of faster code all day, every day. So unless you're working in a very low-competition niche, improving the efficiency of your customers should be a plus to your marketability. See how hard and fast the droid manufacturers play with benchmark rules just to snatch a percent or so. So runtime is what matters (and yes, the workarounds, architectural solutions etc. you mention are valid), but ultimately the speed of the compiled code should take priority over the speed of compiling it.

Four-hour compile times mean a 1-day turnaround (1)

Anonymous Coward | about a year ago | (#45331435)

Four-hour compile times mean a one-day turnaround for any bugfix to production.

A one-hour compile time means four to six bugfix/test cycles per day.

Re:Compile time is irrelevant. (1)

0123456 (636235) | about a year ago | (#45330179)

Honestly, if your compile times are that long, and that much of a burden, you need to upgrade, and you also need to modularise your code more. The fact is that most of that compile time isn't actually needed for 90% of compiles unless your code is very crap.

Hint: I said 'two hours to compile from scratch'. You can't avoid compiling all your source if you just did a clean checkout from SVN into an empty source tree; as you would, for example, before building a release or release candidate.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45330417)

I have plenty of code that takes GCC 4.7 over 8 hours for a single C file, where GCC 4.4 or clang can compile it in just over 2 minutes. And what's worse: GCC 4.4 produces faster code than GCC 4.7 in this case. The C++ rewrite of GCC is pure crap.

I also have plenty of generated code that is typically compiled and executed only once or twice. Here compile-time is king.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45330463)

Oh yeah, forgot to mention. That's gcc -O0 taking 8 hours for GCC 4.7. It is the front-end that got slower, not the optimizer... So the developers don't care.

Re:Compile time is irrelevant. (1)

flargleblarg (685368) | about a year ago | (#45331407)

A single C source file? 8 hours? How many lines? Even 8 minutes seems off by several orders of magnitude.

Re:Compile time is irrelevant. (1)

adisakp (705706) | about a year ago | (#45331169)

Faster compile times make for faster iteration, which lets you test global changes (for example, which optimizations actually work) more easily. Not to mention that better iteration on a program usually produces a superior product.

Also, as a developer, faster compile times make my life a little less frustrating so I'll be less likely to pull out all my hair while waiting on the computer.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45331343)

But over the lifetime of any average program, runtime should outweigh compile time by orders of magnitude.

This is not +5 insightful thinking. Counterintuitively, for large projects, a very fast compiler with middling optimization will often produce superior runtimes compared to a slower compiler with high quality optimization. A programmer able to do a lot more compile-test cycles in the same amount of time is one who effectively enjoys more time to prototype, test, and debug large-scale algorithmic optimizations. These tend to absolutely kill the kind of micro-optimizations compilers are good at.

Consider a reductio ad absurdum: would anybody use a compiler which compiles 1000x slower to deliver 10x better runtimes than the next best compiler? Such a compiler actually would find a niche, IMO, but it would be just a niche: large projects simply wouldn't be possible to develop if they relied exclusively on that one compiler. Provided that there is object file / linker compatibility, many projects would use it for a handful of performance-critical functions which seldom need recompilation, relying on a faster compiler to compile the rest of the project.

Real world projects have to deal with finite time to write and debug code. Ignoring this and insisting on a single figure of merit (optimization quality) may produce poor decisions.

Re:Compile time is irrelevant. (2)

wiredlogic (135348) | about a year ago | (#45331393)

An excessively long build time can inflate development costs if the delay in testing new code becomes prohibitively long. A large codebase that takes 4 hours to build on a slow compiler will force developers to frequently wait overnight for test results to come back. If a different compiler can build that code 4x faster, you have many more opportunities to observe test results during a work day. Upgrading the build system isn't always an option when you have to support legacy platforms with inherently slow hardware and cross-compiling isn't an option.

YOU Are The Problem! (-1, Troll)

Anonymous Coward | about a year ago | (#45329743)

Which one produced the fastest code?

My current project takes two hours to compile from scratch, and uses around 20% CPU when it runs. So yes, compile time can be more important than how fast the code runs.

You are the problem with most software today. Your utter selfishness or plain ignorance results in slow software that wastes countless hours and offloads your temporary pain onto possibly millions of users for years to come.

Great! You saved 20 to 40 hours on the project. But because you put your needs ahead of the end user's, your compiled code may be slow, or even extremely slow, because you chose the wrong compiler. Now your software will run for thousands or millions of man-hours, wasting FAR more than the 20 or 40 hours that you saved for yourself.

This is the same sort of asshattery that seems to go on everywhere. 'RAM/Disk/CPU is cheap so I'm justified in making it bloated.' 'Why would any other application be running at the same time as my own? I can consume all resources for my bubble sort.'

Re:YOU Are The Problem! (1)

Anonymous Coward | about a year ago | (#45330053)

Speedups from algorithmic changes will far outperform anything an optimizing compiler can produce. A compiler that compiles faster lets you focus on algorithmic optimizations more than a slow-compiling one does.

Besides, if it's spending 80% of the time idle, the program is waiting for the user, not the other way around.

Re:YOU Are The Problem! (1)

0123456 (636235) | about a year ago | (#45330137)

Besides, if it's spending 80% of the time idle, the program is waiting for the user, not the other way around.

Bingo. When the software is waiting for something to do 80% of the time, and nothing else of any importance is running on that machine, optimization is pretty much irrelevant; at best it would save a tiny amount of power by slightly reducing CPU usage.

Bingo Fail (0)

Anonymous Coward | about a year ago | (#45330617)

Besides, if it's spending 80% of the time idle, the program is waiting for the user, not the other way around.

Bingo. When the software is waiting for something to do 80% of the time, and nothing else of any importance is running on that machine, optimization is pretty much irrelevant; at best it would save a tiny amount of power by slightly reducing CPU usage.

Even if it sits idle for 80% of the time, that's still 4.8 hours per day suffering at the hands of a selfish programmer and his crap software.

Don't try to justify the issue. OWN IT!

Re:Compile time is irrelevant. (2)

marcello_dl (667940) | about a year ago | (#45329817)

The ratio of time spent running code to compiling it is, for me, like 10000:1, and that's being optimistic. Compilation time is pretty irrelevant for me and, I daresay, most users.

Re:Compile time is irrelevant. (1)

BasilBrush (643681) | about a year ago | (#45330249)

Compilation time is pretty irrelevant for me and I daresay most users.

Compilers are selected by developers, not users.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45330715)

Developers might scratch their own itch, but >=99% of them want others to use their software and thereby feel good about themselves. Faster execution will matter to potential users, so developers should pay more attention to that than to compile time. Few users (Gentoo exempt!) care about compile times.

Re:Compile time is irrelevant. (3, Interesting)

TheGavster (774657) | about a year ago | (#45330509)

While any user-facing application is going to spend most of its time waiting for the user to do something, the latency to finish that task is still something the user will want to see optimized. Further, if a long-running task tops out at 20% CPU, apparently optimization was weighted too much towards CPU and you need to look into optimizing your IO or memory usage.

Re:Compile time is irrelevant. (2)

Gothmolly (148874) | about a year ago | (#45330525)

You're IO bound. Get a real disk subsystem.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329637)

g++.
It's right in TFS.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329663)

g++.
It's right in TFS.

Without knowing the margin over the others, it's meaningless.

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329655)

'It's interesting that the code built with the g++ compiler performed the best in most cases,

are you too stupid to read?

Re:Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329779)

Not really. Once you use LLVM in a JIT environment, you care.

Re: Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329791)

Visual C++ 2013, but they were too scared to include it in their tests.

Re: Compile time is irrelevant. (0)

Anonymous Coward | about a year ago | (#45329919)

They only tested on a Linux box... they even say the reason they didn't test TBB with clang is that they couldn't find a Mac.

Re:Compile time is irrelevant. (4, Insightful)

ShanghaiBill (739463) | about a year ago | (#45329809)

Which one produced the fastest code?

It doesn't matter. It may matter which one compiles your code faster. Depending on your use of things like templates, classes, etc., that may be a different compiler than the one that does best on the benchmarks. But even that is unlikely to matter much. I doubt there is much more than a few percent difference. More important are issues like standards compliance, good warning messages, tool-chain/IDE integration, etc.

How about benchmarking the binary? (0)

Anonymous Coward | about a year ago | (#45329575)

How about benchmarking the binary? Isn't that what matters anyway? Let us see compiler benchmarks and benchmarks of the compiled binary itself.

Re:How about benchmarking the binary? (1)

adisakp (705706) | about a year ago | (#45329871)

You obviously don't work on large projects where build times can be 30 minutes and link times can be 5-10 minutes on top of that. In the past we have tried just about everything possible to make our compiles faster, because it allows more iteration and less time waiting on code building. This includes minimizing include dependencies and looking at dependency graphs, benchmarking distributed build systems (Incredibuild), working with pre-compiled headers, examining unity builds / unified builds (think one CPP that includes many other CPPs in the same system), etc. We also buy fast hardware: 8-core CPUs with 16 threads, 32 GB of memory, and fast SSDs. All because minimizing build time means more productive time for developers.

Re:How about benchmarking the binary? (1)

TapeCutter (624760) | about a year ago | (#45330955)

Compiling SOAP on Windows or Linux takes about 20 minutes on a well-managed VM with respectable grunt. The couple of dozen other binaries that go with our application take about half that time to build in total. SOAP is not even the largest of the component source trees we have, but from the compiler's POV it certainly takes the most effort.

Re:How about benchmarking the binary? (1)

flargleblarg (685368) | about a year ago | (#45331439)

What's the turnaround time if you change, say, one tiny part of a C function having no ramifications to other modules? Do you have a 1-second recompile time (just for that module) followed by 5-10 minutes of link time before you can re-test? Is there no incremental linking? No dynamic libraries? I'm curious what type of program you have. That seems excessively slow to me.

Re:How about benchmarking the binary? (0)

petteyg359 (1847514) | about a year ago | (#45329923)

Can you read?

It’s interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time.

Just in case you didn't get that: They did benchmark the resulting binaries, and g++ made the best ones.

Re:How about benchmarking the binary? (0)

Anonymous Coward | about a year ago | (#45330369)

They did benchmark the resulting binaries, and g++ made the best ones.

Not always. In some tests it made the slowest.

DIE Business Intelligence DIE (5, Interesting)

Bill, Shooter of Bul (629286) | about a year ago | (#45329597)

What on earth does compiler benchmarking have to do with the BI section of Slashdot?

Furthermore, why on earth are you idiots creating a blurb on the main page that just links to a different Slashdot article? It's such terrible self-promotion. Just freaking write the main article as the main article. No need to make it seem as if the Business Intelligence section is actually worth reading; it's not.

Re:DIE Business Intelligence DIE (0)

Anonymous Coward | about a year ago | (#45331165)

What's the Business Intelligence section of /.?

Measuring pebbles (5, Insightful)

T.E.D. (34228) | about a year ago | (#45329775)

Interesting info, but I have a couple of issues:

First off, why wasn't Microsoft's C++ compiler included in this? That's the one we use at work, so that's the one I'd really like compared to all those others. Are we the only ones still using it or something?

More importantly, why on earth was compilation speed the only thing compared? I mean, I suppose it's nice for g++ users to know that their 10-minute compiles would have been 2 minutes longer if they used the Intel compiler, but Intel users might not really care if they believe their resulting code is going to run faster. Speed of compilation of optimized code is a particularly useless metric, because different compilers have different definitions of "unoptimized", so it's guaranteed you aren't comparing apples to apples.

I suppose compilation speed is a nice metric to brag about between compiler writers. But for compiler users, the most important things are roughly these, in order: toolchain support, language feature support (e.g. C++11/14 features), clarity of error/warning messages, speed of generated code (optimization), and lastly speed of compilation. I'm not really sure why you took it upon yourself to measure the least important factor, and only that one.

Re:Measuring pebbles (2, Informative)

Anonymous Coward | about a year ago | (#45329981)

Your pre-elementary reading and comprehension skills leave much to be desired.

It’s interesting that the code built with the g++ compiler performed the best in most cases, although the clang compiler proved to be the fastest in terms of compilation time.

Just in case you didn't get that: They did benchmark the resulting binaries, and g++ made the best ones.

Re:Measuring pebbles (0)

Gothmolly (148874) | about a year ago | (#45330233)

Mod parent up. GP is an asshat.

Re:Measuring pebbles (4, Interesting)

T.E.D. (34228) | about a year ago | (#45330271)

OK. Much abashed, I went back through the article.

It turns out that there are numbers for an actual code benchmark. They're found about two-thirds of the way through the report, in the third (untitled) graph, after the balance of the text had already been devoted to compilation-speed comparisons. Also, it only listed 2 of the 3 compilers, half the data was for unoptimized code (and thus useless), and it was hidden behind a sign that said "Beware of the Leopard".

OK, perhaps I made that last part up.

For the curious, the difference in at least that one table was never more than 5%. In my mind, hardly a differentiator unless you are writing heavy number-crunching or stock-trading programs.

Perhaps the remaining 1/3rd is all about more important things? I've lost interest. You're right. I'm weak.

Re:Measuring pebbles (-1)

Anonymous Coward | about a year ago | (#45330107)

Because nobody cares about windoze losers. Jeez!

Re:Measuring pebbles (0)

Anonymous Coward | about a year ago | (#45330309)

While I agree with the priorities you enumerate, I believe those priorities change at different development stages.

>>> Speed of compilation of optimized code is a particularly useless metric, because different compilers have different definitions of "unoptimized", so it's guaranteed you aren't comparing apples to apples.

This could be the difference between an hour and 10 minutes for builds of some projects. Think of all the productivity lost waiting to debug a build.

Re:Measuring pebbles (3, Interesting)

T.E.D. (34228) | about a year ago | (#45330453)

This could be the difference between an hour and 10 minutes for builds of some projects.

If that's really the delta (one is six times slower), then something is likely seriously pathologically wrong with one of those two compilers. Submit a bug report (not that it helps you, but it will help someone else).

But yes, different users in different phases will have different priorities. I'm not laying down an immutable law here, just trying to restore the proper proportion to a situation that we both agree is way out of whack.

Re:Measuring pebbles (1)

CastrTroy (595695) | about a year ago | (#45330501)

Fast compilation has its advantages. It's one of the reasons some developers like working in scripting languages (PHP, Ruby, etc.). If simple mistakes in coding don't cost you 20 minutes of compile time, it can speed up development a lot. I use .NET, which I think has a nice balance between the two: reasonable compile times, while still having compile-time type checking and the other advantages of a compiled language.

Re:Measuring pebbles (0)

Anonymous Coward | about a year ago | (#45330765)

>If simple mistakes in coding don't cost you 20 minutes of compile time, it can speed up development a lot.
Are incremental compiles no longer used for development?

Re:Measuring pebbles (0)

Anonymous Coward | about a year ago | (#45331201)

No, because Makefiles are too ugly. And by "ugly" I mean people don't get declarative languages and fall back to brain-dead solutions.

Re:Measuring pebbles (-1)

Anonymous Coward | about a year ago | (#45331049)

First off, RTFA - the resulting binaries were also benchmarked.

Secondly, Microsoft's compiler doesn't run on any of the important development platforms, so no one really cares how it runs.

future compiler trends (1)

sander (7831) | about a year ago | (#45329783)

The main claim for g++ for a very long time was "while it does not optimize much or support all of the language, it is FREE". With clang on the scene offering a comparable feature set and speed of compiled code, it will be interesting to see how g++, and the GNU Compiler Collection in general, will fare over time. Especially as part of the canonical GNU core.

Re:future compiler trends (2)

ShanghaiBill (739463) | about a year ago | (#45330189)

The main claim for g++ for a very long time was "while it does not optimize much or support all of the language, it is FREE".

I have never heard that claim, maybe because it isn't true. g++ has always been one of the best at language support. It has not always been the best at low-level processor specific optimizations, but it has made up for that by being really good at higher level optimizations, like recognizing unused code, inlining, and code hoisting. I haven't seen a better compiler at any price.

Re:future compiler trends (4, Informative)

ebno-10db (1459097) | about a year ago | (#45330883)

it has made up for that by being really good at higher level optimizations

Heh, heh, heh, don't remember the great EGCS split of '97, do you sonny? Yep, us old timers knew that gcc was a dog of an optimizer, but them EGCS whippersnappers fixed it, and even got the fork accepted as the official gcc. Remember, you probably got to where you are today by running over the body of some crusty old-timer.

clang? (0)

Anonymous Coward | about a year ago | (#45329799)

I take it they never looked here [intel.com] ? TBB works with clang... sort of.

Re:clang? (0)

Anonymous Coward | about a year ago | (#45329887)

Oh, I see... "Also, Threading Building Blocks hasn’t yet been ported to clang on Linux, and as such, clang wasn’t included in that part of the test. It has on the Mac, but I didn’t have access to a Mac for these tests."

Yes, your benchmarking test is vast and impressive and newsworthy... you're such an amazing developer... that you cannot find anyone from whom you can borrow a Mac for an hour.

Crappy benchmark (5, Informative)

raxx7 (205260) | about a year ago | (#45329917)

The code in the benchmark runs a parallel for over a 10-billion-element array, but in steps of 100 elements.
It's going to be limited by the creation and destruction of threads.

Also, by not initializing the input array, the floating-point arithmetic is vulnerable to possible denormal values.
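
The general pitfall can be sketched with OpenMP (a stand-in for whatever threading construct TFA actually used; the chunk size and loop body are illustrative). Compile with -fopenmp or the equivalent:

#include <stddef.h>

/* Tiny 100-element chunks: threads spend their time asking the
 * scheduler for more work instead of computing. */
void scale_bad(float *a, size_t n)
{
    #pragma omp parallel for schedule(dynamic, 100)
    for (size_t i = 0; i < n; i++)
        a[i] *= 2.0f;
}

/* Static scheduling hands each thread one large contiguous range up
 * front, so scheduling overhead is paid once per thread. */
void scale_good(float *a, size_t n)
{
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] *= 2.0f;
}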

other compilers (0)

Anonymous Coward | about a year ago | (#45329979)

Don't forget the Microsoft Visual Express and Watcom C++ compilers. :D I must be the only person who uses the Watcom C++ compiler.

Link does not work in some browsers (0)

Anonymous Coward | about a year ago | (#45330075)

If you're using something like a privacy-enhanced Chromium and you have referrer headers turned off, the link to the BI site endlessly reloads and never displays the article.

Speed is irrelevant (-1, Troll)

Anonymous Coward | about a year ago | (#45330213)

My company (10,000+ developers, billions of lines of code across hundreds of products, some of which everyone uses every day) dumped gcc as soon as LLVM was able to compile our code. We do not care for organizations like the FSF that try to dictate what we can and cannot do with open source, so we choose to support projects that are MORE FREE than any bullshit GPL-covered crap ever can be.

Re:Speed is irrelevant (0)

Anonymous Coward | about a year ago | (#45330385)

Why is a company with 10,000+ developers, with billions of lines of code across hundreds of products, some of which everyone uses every day, using free compilers in the first place, particularly if they're so willing to dump gcc as soon as LLVM appeared? Surely such a behemoth could have afforded to buy a different compiler that didn't come with such a prohibitive license as the GPL? Surely you're not just another of the bullshitters riddling Slashdot today!

Re:Speed is irrelevant (0, Informative)

Anonymous Coward | about a year ago | (#45330695)

You are an idiot.

Re:Speed is irrelevant (0)

Anonymous Coward | about a year ago | (#45330879)

NO U

Forgot multiple platforms too. (0)

Anonymous Coward | about a year ago | (#45330239)

Yeah, besides the missing compiler flags, how does it perform on different Intel processors? How about different AMDs?

Plus, the huge system times seem to indicate this is more a kernel test than a compiler one.

I see why you care about compile speed (0)

Anonymous Coward | about a year ago | (#45330247)

Cogswell takes a number of 'real world' factors into account, such as how each compiler deals with templates, and comes to certain conclusions

Given that this was the only thing said to be explicitly tested, I'm guessing that is the focus. If your code is so overwhelmed with template code that compiling speed is an issue, perhaps that is a good indicator that you are abusing templates instead of using proper inheritance/overloading.

This benchmark is pointless (4, Informative)

godrik (1287354) | about a year ago | (#45330431)

I am a scholar and study parallel computing. These benchmarks are pretty much pointless; you cannot draw any conclusions from these results. Here the author takes the whole time of the execution, from the creation of the process to its destruction. That includes lots of overhead which would be startup time in a real application.

There is also apparently no thread pinning to computational cores. This is known to make a HUGE difference.

Then the author compared Cilk results. Cilk is known to be slow for simple codes that do not require work stealing or have complex dependencies. For the record, I know they are also comparing TBB, but TBB is implemented on top of the Cilk engine in the Intel compiler (I don't know about gcc).

In these results hyperthreading is enabled. The proper use of hyperthreading is complicated: there are some problems where it helps, others where it harms, and I would not be surprised if the behavior were compiler dependent.

Finally, it is almost impossible to compare compilers. On different platforms, with the same compilers, you will get different results. Some functions are better compiled by one compiler and some by another. This has been reported over and over again.

If you care about performance, you should not rely on what your compiler is doing behind your back; you need to know what it is doing. Depending on memory alignment (and what the compiler knows about it), on how vectorization happens, and on potential memory aliasing, you will get different results.

If you care about performance, you need to benchmark, you need to optimize, and you need to know what the compiler does.
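
On the thread-pinning point: a minimal sketch of what pinning means, using the Linux-specific pthread_setaffinity_np (error handling and the benchmark body omitted):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to one core so the scheduler cannot migrate
 * it mid-run; migrations trash caches and add noise to timings. */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

Each worker thread would call pin_to_core with a distinct core number before entering the timed region.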

Re:This benchmark is pointless (0)

Anonymous Coward | about a year ago | (#45331155)

I am a scholar...

But are you a gentleman as well?

Re:This benchmark is pointless (0)

Anonymous Coward | about a year ago | (#45331185)

I am a scholar and study parallel computing.

aka I'm a second year computer science student.

Intel C++ produced fastest code for us (4, Insightful)

pauljlucas (529435) | about a year ago | (#45330813)

This information is perhaps two years out of date, but back then, on one of my projects, when we switched from g++ to Intel C++ our software got about twice as fast with no other changes. It got even faster when we took advantage of SSE3 instructions.

Re:Intel C++ produced fastest code for us (-1)

Anonymous Coward | about a year ago | (#45331125)

lol, no.

Re:Intel C++ produced fastest code for us (0)

Anonymous Coward | about a year ago | (#45331323)

Intel's compiler suite has always been (and continues to be) faster than GCC. But that isn't super unexpected: they have a better working knowledge of their CPUs' quirks, 10x more comp-sci PhDs on staff than the gcc team can muster, and their product isn't portable to dozens of archs like gcc is.

for compilers (2)

mjwalshe (1680392) | about a year ago | (#45331197)

It is speed that is important, which is why a lot of HPC people still prefer the Intel compilers.

Compilation time (1)

amightywind (691887) | about a year ago | (#45331423)

Compilation time is irrelevant if you use ccache or distcc, nitwits. GCC is and has always been the logical choice.