
Optimizations - Programmer vs. Compiler?

Cliff posted more than 9 years ago | from the who-can-obfuscate-better dept.

Programming

Saravana Kannan asks: "I have been coding in C for a while (10 yrs or so) and tend to use short code snippets. As a simple example, take 'if (!ptr)' instead of 'if (ptr == NULL)'. The reason someone might use the former snippet is that they believe it will result in smaller machine code if the compiler does not do optimizations or is not smart enough to optimize that particular snippet. IMHO the latter snippet is clearer than the former, and I would use it in my code if I knew for sure that the compiler would optimize it and produce machine code equivalent to the former. That example was easy. What about code that is more complex? Now that compilers have matured over the years and gained many improvements, I ask the Slashdot crowd: what do you believe the compiler can be trusted to optimize, and what must be hand optimized? How would your answer differ (in terms of the level of trust in the compiler) between compilers for desktops and for embedded systems? Which do you think is more optimized at present: desktop compilers (because they are more commonly used) or embedded compilers (because of the need for maximum optimization)? It would be better if you could stick to free (as in beer) and open source compilers. Give examples of code optimizations that you think the compiler can or can't be trusted to do."
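For the simple example, there is no need to take anyone's word for it; you can ask the compiler directly. A minimal sketch, assuming gcc (any compiler with an assembly-output switch will do; the file and function names here are mine):

/* null_test.c -- do the two styles generate different code? */
#include <stddef.h>

int terse(const char *ptr) {
    if (!ptr)            /* style 1 */
        return 1;
    return 0;
}

int spelled_out(const char *ptr) {
    if (ptr == NULL)     /* style 2 */
        return 1;
    return 0;
}

Compile with "gcc -O2 -S null_test.c" and compare the two functions in null_test.s; any remotely modern compiler should emit identical bodies for both.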


fp (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#11781192)

i should get first post

Ask the compiler... (5, Funny)

inertia@yahoo.com (156602) | more than 9 years ago | (#11781199)

Programmer: Hey, compiler. How do you like optimizing?
Compiler: Optimizing? Optimizing? Don't talk to me about optimizing. Here I am, brain the size of a planet, and they've got me optimizing inane snippets of code. Just when you think code couldn't possibly get any worse, it suddenly does. Oh look, a null pointer. I suppose you'll want to see the assembly now. Do you want me to go into an infinite loop or throw an exception right where I'm standing?
Programmer: Yeah, just show me the stack trace, won't you compiler?

Security (0, Interesting)

Anonymous Coward | more than 9 years ago | (#11781203)

The future's pretty clear.

You MUST trust the compiler more and more to protect the code from buffer overflows and other trivial, but hard-for-humans-to-detect mistakes.

Clear Code (5, Insightful)

elysian1 (533581) | more than 9 years ago | (#11781211)

I think writing clear and easy to understand code is more important in the long run, especially if other people will have to look at it.

Re:Clear Code (5, Insightful)

normal_guy (676813) | more than 9 years ago | (#11781228)

That should be "especially _since_ other people will have to look at it."

Re:Clear Code (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11781254)

No one cares, fuckstick.

GET A LIFE.

Re:Clear Code (1, Insightful)

Anonymous Coward | more than 9 years ago | (#11781321)

Other people don't have to look at it if it works. If it doesn't work, you'd better write new code to replace it. Modifying legacy code is always a security risk.

Re:Clear Code (0)

Anonymous Coward | more than 9 years ago | (#11781244)

if other people will have to look at it

Why would you? There's always going to be a highly specialized, small team working on a particular piece of code. Unless, of course, you're talking about OSS.

Re:Clear Code (5, Insightful)

daveho (235543) | more than 9 years ago | (#11781262)

I agree 100%. Write code that is easy to understand and modify, then optimize it, but only after you have profiled it to find out where optimization will actually matter.

Re:Clear Code (1)

amembleton (411990) | more than 9 years ago | (#11781294)

Parent is correct, although on some embedded systems it may be more important to have compact and efficient code; the compiler should take care of that, though.

I probably spend at least twice as long trying to understand what old (mostly uncommented) code actually does as I spend fixing it.

You should always... (5, Funny)

Anonymous Coward | more than 9 years ago | (#11781218)

Optimize. Using cryptic, short variable names also shaves valuable microseconds off compile time and run time.

Re:You should always... (4, Funny)

FyRE666 (263011) | more than 9 years ago | (#11781366)

... and by god don't let me see anyone using comments - comments are the devil's alphabet soup! Every programmer worth his/her salt knows that source code is self-documenting...

Re:You should always... (1)

mjc_w (192427) | more than 9 years ago | (#11781402)

Yeah, right!

And if you don't use variable names at all, it's even better!

Time to post the famous Knuth quote... (4, Informative)

xlv (125699) | more than 9 years ago | (#11781223)

Donald Knuth wrote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Re:Time to post the famous Knuth quote... (0)

Anonymous Coward | more than 9 years ago | (#11781287)

Knuth must be blessed if he's working in an environment where premature optimization is the root of all of his evil.

Re:Time to post the famous Knuth quote... (2, Insightful)

Anonymous Coward | more than 9 years ago | (#11781316)

I second that.

Optimisations at such a low level (especially without profiler evidence to back them up) are often a complete waste of time when the rest of the code is slow due to crappy algorithm or structure choices.

...I remember a guy I worked with who wrote a "faster" atol-type function. His had less code and did much less. I suggested we profile it to demonstrate his coding prowess. Of course, his executed slower than the shipped CRT version... his suggestion of taking the CRT version and "hacking out the junk" amused the rest of us for a while hehe (Lee, you know who you are)

use a good profiler!! (2, Informative)

EccentricAnomaly (451326) | more than 9 years ago | (#11781413)

Trying to optimize in your head rarely does any good... Use a good profiling tool (like Apple's Shark [apple.com]) to find out what part of your code uses the most time, and then just concentrate on making that part faster.

Using a profiler and your own brain you can often significantly improve over what a compiler can do.
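For the free-beer toolchains the submitter asked about, gprof does the same job as Shark. A rough sketch of the workflow (the program name is made up):

gcc -O2 -pg myprog.c -o myprog   # build with profiling instrumentation
./myprog                         # run normally; this drops a gmon.out file
gprof myprog gmon.out            # flat profile plus call graph

The flat profile tells you which functions eat the time; only then is it worth arguing about how those functions are written.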

Re:Time to post the famous Knuth quote... (1)

Rahga (13479) | more than 9 years ago | (#11781346)

"It is practically impossible to teach good programming style to students that [sic] have had prior expose to BASIC; as potential programmers they are mentally mutilated beyond hope of regeneration.
--E. W. Dijkstra"

So... Can anybody prove that the root of BASIC is premature optimization?

Re:Time to post the famous Knuth quote... (0)

Anonymous Coward | more than 9 years ago | (#11781365)

Premature optimization is the root of all evil

And all this time my g/f was telling me it was premature ejaculation.

That's a Tony Hoare quote, not Donald Knuth (4, Informative)

Dan Ost (415913) | more than 9 years ago | (#11781371)

Donald Knuth was quoting Tony Hoare when he said that.

Algorithms, Not Stupid Processor Tricks (5, Insightful)

American AC in Paris (230456) | more than 9 years ago | (#11781229)

This is marginally away from the submitter's question, but it warrants attention:

The sad truth is that, as far as optimization goes, this isn't where attention is most needed.

Before we start worrying about things like saving two cycles here and there, we need to start teaching people how to select the proper algorithm for the task at hand.

There are too many programmers who spend hours turning their code into unreadable mush for the sake of squeezing a few milliseconds out of a loop that runs on the order of O(n!) or O(2^n).

For 99% of the coders out there, all that needs to be known about code optimization is: pick the right algorithms! Couple this with readable code, and you'll have a program that runs several thousand times faster than it'll ever need to and is easy to maintain--and that's probably all you'll ever need.
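To make the parent concrete, here is the difference in miniature (a sketch; the function names are mine, and the naive version is perfectly fine for small n):

#include <stdlib.h>
#include <string.h>

/* O(n^2): micro-tuning this loop is pointless past a few thousand elements */
int has_dup_naive(const int *a, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return 1;
    return 0;
}

static int cmp_int(const void *p, const void *q) {
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* O(n log n): sort a copy, then look for equal neighbours */
int has_dup_sorted(const int *a, size_t n) {
    int *c = malloc(n * sizeof *c);
    if (!c)
        return -1;                    /* allocation failure */
    memcpy(c, a, n * sizeof *c);
    qsort(c, n, sizeof *c, cmp_int);
    int dup = 0;
    for (size_t i = 1; i < n; i++)
        if (c[i] == c[i - 1]) {
            dup = 1;
            break;
        }
    free(c);
    return dup;
}

For large n the second version wins by orders of magnitude, no matter how artfully the first one's inner loop is hand-tuned.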

Re:Algorithms, Not Stupid Processor Tricks (0)

Anonymous Coward | more than 9 years ago | (#11781401)

Heh. It wasn't that long ago that every other guy's homegrown 3D engine (software rendering, mind you; this was the 100 MHz Pentium era) had an ultra-optimized version of bubble sort doing the depth sorting of polygons in a painter's-algorithm type affair.

I certainly hope those people are better educated these days.

How efficient is this? (-1, Troll)

johndeeregator (549310) | more than 9 years ago | (#11781232)

if (!question.isInteresting()) { frontPage.purge(question); }

More like this... (0)

Anonymous Coward | more than 9 years ago | (#11781367)

if (story->isOnFrontPage())
{
    postDupe(story);
}
else if (story->submitter == "Roland")
{
    post(story);
}

Let me guess (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#11781234)

A Gentoo user, right?

You also drive a Honda Civic with the coffee-can exhaust, I bet.

The .0001% improvement from these stupid little things isn't worth the time invested.

Re:Let me guess (0)

Anonymous Coward | more than 9 years ago | (#11781406)

A 1 MIPS microcontroller.
1000 samples per second required.
Need to gather 4 ADC input values and 4 binary port input values, perform some calculations, perform sanity checks, handle errors, and send the digest and status over RS-232, plus service interrupts in the meantime.
1000 cycles becomes a horribly tight limit. And the compiler doesn't optimize.

Neither are correct. (0)

Stavr0 (35032) | more than 9 years ago | (#11781237)

if (0 != ptr) is the best way.
  • Putting the value on the left side prevents ==-vs-= mixups (see the sketch below).
  • Relying on NULL's #define-ition to be compatible with all pointer types is risky. When the compiler sees 0 in a pointer comparison, it'll do the right thing.
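The mixup in question, sketched out (the assignment typo compiles; the Yoda version of the same typo does not):

#include <stddef.h>

void demo(int *ptr) {
    if (ptr = NULL) { }     /* typo for ==: legal C, silently clobbers ptr */
 /* if (NULL = ptr) { }        same typo, value on the left: compile error */
}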

Re:Neither are correct. (1)

nurd68 (235535) | more than 9 years ago | (#11781279)

However, this is predicated on the assumption that the system-defined NULL is 0 (because typically, if whatever you're trying to do fails, the pointer you supplied will be set to NULL, which does not have to be 0).

Re:Neither are correct. (1)

haagmm (859285) | more than 9 years ago | (#11781338)

#ifndef NULL
#define NULL 0
#endif

always works fine for me.

Re:Neither are correct. (1)

Acy James Stapp (1005) | more than 9 years ago | (#11781400)

Typically NULL is defined as ((void *)0). In most cases it doesn't make any difference, but when you have two overloaded functions differentiated by only an int/pointer parameter, it's easier to select the correct one using NULL when it is defined as a pointer type.

That being said, I believe the NULL macro is an abomination; a special token 'null', similar to 'true' and 'false', should be introduced into the language instead.

Re:Neither are correct. (1)

Grant_Watson (312705) | more than 9 years ago | (#11781404)

In ANSI C, the compiler is guaranteed to figure out what a zero means when used as a pointer, even if the system's value for a null pointer is different.

Re:Neither are correct. (1)

ikegami (793066) | more than 9 years ago | (#11781357)

Relying on NULL #define-ition to be compatible with all pointer types is risky.

No more risky than assuming that 0 is compatible with all pointer types.

Bad example (0, Redundant)

nurd68 (235535) | more than 9 years ago | (#11781240)

if(!ptr) is equivalent to if(ptr == 0)

The problem is that there is nothing that says that NULL must be 0. Potentially, one could define NULL to be something else - like -1. Therefore, one should always use if(ptr == NULL).

Re:Bad example (2, Insightful)

hpa (7948) | more than 9 years ago | (#11781274)

Bullshit.

Read the C standard about the definition of a null pointer constant.

Wrong, wrong, wrong (4, Informative)

JoeBuck (7947) | more than 9 years ago | (#11781299)

Don't give advice when you don't know C. C requires that when a 0 is converted to a pointer, the result is NULL, so it is absolutely false to claim that NULL could be defined as -1.

"ptr == 0" must give the same result as "ptr == NULL", always.

Re:Wrong, wrong, wrong (-1, Troll)

Anonymous Coward | more than 9 years ago | (#11781375)

Don't give advice not to give advice when you don't know C when you don't know C.

A particular C implementation is free to set a pointer's value to -234234234 when 0 is assigned to it, if that's what the underlying platform wants. In such a case it would be correct for NULL to be -234234234.

Re:Bad example (1, Redundant)

Webmonger (24302) | more than 9 years ago | (#11781301)

I can't say whether the NULL macro is formally defined, but the null pointer is always 0, even if the bitwise machine representation is different.

Re:Bad example (0)

Anonymous Coward | more than 9 years ago | (#11781381)

Well, this is plain wrong... A constant zero in pointer context always *is* a null pointer, regardless of its bit pattern. The only reason NULL exists is stylistic, and it is defined as a constant 0, or 0 cast to (void *). Questions 5.2 and 5.4 in the comp.lang.c FAQ are relevant: http://www.faqs.org/faqs/C-faq/faq/ [faqs.org]
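What those FAQ entries boil down to, as runnable code (a minimal sketch):

#include <assert.h>
#include <stddef.h>

int main(void) {
    int *p = 0;         /* a constant 0 in pointer context is a null pointer */
    assert(p == NULL);  /* holds regardless of the machine's bit pattern */
    assert(!p);         /* !p is defined as p == 0, so this holds too */
    return 0;
}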

write clearer code first (1, Redundant)

Deadbolt (102078) | more than 9 years ago | (#11781252)

Unless your benchmarks show that writing:
if (!ptr) { ...
saves you significant time/size from using:
if (ptr == NULL) { ...
write the clearer code, which is the second option.

Re:write clearer code first (1)

JustNiz (692889) | more than 9 years ago | (#11781319)

I disagree.
I think the first option is much clearer, as it is more meaningful.

Consider:
MyClass* instance = new MyClass();
if (!instance) ....

Testing for "not an instance" makes more readable sense to me than testing for the case that a pointer to my instance happens to equal a particular value (even NULL).

Re:write clearer code first (2, Informative)

hpa (7948) | more than 9 years ago | (#11781368)

The former is a lot more concise, and conciseness helps in getting the *big* concepts down.

Anyone with any familiarity with C will consider the latter form unnecessarily verbose, and therefore less clear.

There is one exception, and that is when "ptr" is in fact a complex expression that isn't obviously a pointer expression at a glance. In that case, == NULL or != NULL spells out to the reader of the code, "oh, and by the way, it's a pointer." That is the ONLY reason to write this for clarity.

There is a whole category of "bad commenting" in which comments that are only useful to someone who doesn't know the programming language actually make the code a lot harder to read. A comment like:

a += 2; /* Add two to a */

... is not helpful in any way, shape, or form, and just provides mental clutter.
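The exception hpa describes, sketched with hypothetical names:

struct peer { void (*handler)(void); };

void route(struct peer *p) {
    /* a nested member isn't obviously a pointer at a glance,
       so != NULL usefully documents the type here */
    if (p->handler != NULL)
        p->handler();
}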

You're asking the wrong crowd (1)

CarrionBird (589738) | more than 9 years ago | (#11781257)

Prepare for 93242.5 posts telling you to use java or perl or ada instead.

Re:You're asking the wrong crowd (1)

slavemowgli (585321) | more than 9 years ago | (#11781331)

Ada? o.o I can understand Java and Perl, yes, but who on earth still uses or recommends Ada, especially on Slashdot?

Re:You're asking the wrong crowd (1)

Zphbeeblbrox (816582) | more than 9 years ago | (#11781337)

you should use perl instead. :-) sorry I couldn't resist.

Re:You're asking the wrong crowd (1)

karniv0re (746499) | more than 9 years ago | (#11781397)

Get it right. This is Slashdot, where Python is god.

everything (1)

miu (626917) | more than 9 years ago | (#11781260)

I trust the compiler to optimize almost everything. Use a profiler; since hand-optimized code is more difficult to write correctly and more difficult to maintain, you should only hand optimize where it matters.

Clear & Concise Code (4, Interesting)

kwiqsilver (585008) | more than 9 years ago | (#11781263)

It's better to write clear, legible code that saves a human minutes of reading than complex code that might save a computer a few milliseconds of processing time per year, because human time costs more than machine time.
Also, clear code will result in fewer misinterpretations, which means fewer bugs (especially when the original author is not the one doing maintenance years later), further reducing costs in dollars, man-hours, and frustration.

Compile?? (1)

unixsavant (807669) | more than 9 years ago | (#11781264)

Who compiles code anymore? Between Perl, PHP, and shell, I get everything done that I need to!

NULL not always 0 (2, Insightful)

leomekenkamp (566309) | more than 9 years ago | (#11781266)

Aren't there machines out there where the C compiler specifically defines NULL as a value that is not equal to 0? I recall reading that somewhere, and that was my reason for using == NULL instead of !. My C days are long gone, though...

Re:NULL not always 0 (1)

camcorder (759720) | more than 9 years ago | (#11781324)

NULL is never equal to 0, if you talk about C.

Re:NULL not always 0 (0)

Anonymous Coward | more than 9 years ago | (#11781344)

Yes. But these are mostly ancient, obsolete platforms.

GIGO (1)

Skiron (735617) | more than 9 years ago | (#11781268)

The compiler will always produce better asm than a human, but you have to remember: GIGO (garbage in, garbage out).

Make the code EASY to read and logical... 5 lines are better than trying to cram it into 1 line - the compiler will produce the same end product anyway.

Code Twiddling (2, Informative)

bsd4me (759597) | more than 9 years ago | (#11781269)

There really is no one answer to this, as it depends on the compiler itself, and the target architecture. The only real way to be sure is to profile the code, and to study assembler output. Even then, modern CPUs are really complicated due to pipelining, multilevel cache, multiple execution units, etc. I try not to worry about micro-twiddling, and work on optimizations at a higher-level.

From the "Patenting Fire" department (4, Funny)

slipnslidemaster (516759) | more than 9 years ago | (#11781270)

I just checked the U.S. Patent office and sure enough, just minutes after your post, Microsoft patented "if (!ptr)" as a shorthand for "if (ptr==NULL)".

Prepare to be sued.

Tradeoffs (4, Insightful)

Black Parrot (19622) | more than 9 years ago | (#11781273)


Hard to measure, but what is the tradeoff between increased speed and increased readability (which is a prerequisite for correctness and maintainability)? And if you can estimate that tradeoff, which is more important to the goals of your application?

As a side note, it is far more important to make sure you are using efficient algorithms and data structures than to make minor local optimizations. I've seen programmers use bizarre local optimization tricks in a module that ran in exponential time rather than log time.

Optimize at the interpreter/compiler level... (2, Interesting)

sporty (27564) | more than 9 years ago | (#11781276)

Common idioms should be compiled away, like !x or x != 0. Uncommon idioms can't be, and probably shouldn't be attempted, e.g. if(!(x-x)) (which is always false). Ask your compiler maker and see if patches can be made for these types of things, 'cause if you think to do it one way, chances are many others will try it too. It would be to their benefit to make a better compiler.

Re:Optimize at the interpreter/compiler level... (1)

Felonious Monk (784998) | more than 9 years ago | (#11781361)

Actually, the assertion that !(x-x) is always false is in and of itself false in languages that allow operator overloading. Obviously, you assumed the intrinsic operation, and the compiler would certainly know the difference, but from a human standpoint, no such assumption could be made.

Re:Optimize at the interpreter/compiler level... (1)

hpa (7948) | more than 9 years ago | (#11781407)

!(x-x) can be true if x is a floating-point variable (in which case it's equivalent to isnan(x)).
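That claim is easy to put to the test (a sketch, assuming a C99 <math.h> for NAN and INFINITY); running it shows !(x - x) is false exactly for NaNs and infinities, i.e. it behaves as a finiteness test:

#include <math.h>
#include <stdio.h>

int main(void) {
    double vals[] = { 1.5, 0.0, INFINITY, NAN };
    for (int i = 0; i < 4; i++) {
        double x = vals[i];
        /* x - x is 0.0 for finite x, NaN otherwise; NaN is "true" in C */
        printf("x = %g  !(x - x) = %d  isnan(x) = %d\n",
               x, !(x - x), isnan(x) != 0);
    }
    return 0;
}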

Most people should not bother (5, Insightful)

El Cubano (631386) | more than 9 years ago | (#11781277)

What about code that is more complex? Now that compilers have matured over years and have had many improvements, I ask the Slashdot crowd, what they believe the compiler can be trusted to optimize and what must be hand optimized?

Programmers cost lots more per hour than computer time. Let the compiler optimize and let the programmers concentrate on developing solid, maintainable code.

If you make code too clever in an effort to pre-optimize, you end up with code that other people have difficulty understanding. This leads to lower-quality code as it evolves, if the people who follow you are not as savvy.

Not only that, but the vast majority of code written today is UI-centric or I/O bound. If you want real optimization, design a hard drive/controller combo that gets you 1 GB/s off the physical platter (and at a price that consumers can afford).

The most important optimization... (2, Insightful)

slavemowgli (585321) | more than 9 years ago | (#11781278)

The most important optimization is still the optimization of the algorithms you use. Except under the most extreme circumstances, it doesn't really matter anymore whether the compiler might generate code that takes two cycles more than the optimal solution on today's CPUs; instead of attempting to work around the compiler's perceived (or maybe real) weaknesses, it's probably much better to review your code on a semantic level and see if you can speed things up by doing things differently.

The only exception I can think of is when you're doing standard stuff where the best (general) solution is well-known, like sorting; however, in those cases, you shouldn't reinvent the wheel, anyway, but instead use a (presumably already highly-optimized) library.

Language Age (1)

clinko (232501) | more than 9 years ago | (#11781280)

The only solution to this is to TEST.

Otherwise, I go this way:

Is the compiler older, and not designed for current coding practices?

Think of it this way: in VB.NET you can say isNULL(VAR) or (VAR = VBNull.Value).

The VBNull value was added recently, while isNULL has been there forever.

I use VBNull.Value because the writer of the compiler was thinking about the new objects and wrote NEW code for it; isNULL is probably a copy of an old coding practice.

Long story short: use the newer constructs if you can.

Your compiler (1)

panth0r (722550) | more than 9 years ago | (#11781288)

Why not create your own compiler?

Beware of habits. (4, Interesting)

SharpFang (651121) | more than 9 years ago | (#11781289)

I got into the habit of writing "readable but inefficient" code, taking care that my constructs didn't get too sophisticated for the optimizer, and then depending on gcc -O3 thoroughly. Then it happened that I had to program an 8051 clone. I learned that there are no optimizing compilers for the '51, that I was really tight on CPU cycles, and that I simply didn't know HOW to write really efficient C code.
Ended up writing my programs in assembler...

Huh (4, Informative)

NullProg (70833) | more than 9 years ago | (#11781290)


As a simple example, take 'if (!ptr)' instead of 'if (ptr==NULL)'.

Both forms resolve to the same opcode, even under my 6502 compiler:

CMP register,val
JNE

Enjoy,

compilers are 'smart', but... (1)

LegendOfLink (574790) | more than 9 years ago | (#11781295)

It's always a good idea to practice good coding. That way you'll not only let the compiler do all the optimizing, but you'll also remain Sup3R l33t.

$.02 (4, Insightful)

MagicM (85041) | more than 9 years ago | (#11781296)

1) Code for maintainability
2) Profile your code
3) Optimize the bottlenecks

That said, (!ptr) should be just as maintainable as (ptr == NULL), simply because it is a frequently used 'dialect'. As long as these 'shortcuts' are used consistently throughout the codebase, they should be familiar enough that they don't get in the way of maintainability.

Re:$.02 (1)

arkanes (521690) | more than 9 years ago | (#11781376)

The !ptr form is actually more likely to be a performance problem in higher-level/interpreted languages, strangely enough, where it forces a type coercion to boolean rather than an integer compare.

Tight complex recursive loops (2, Interesting)

LiquidCoooled (634315) | more than 9 years ago | (#11781297)

An example would be searching and sorting algorithms. (I know there have been thousands of variations, and entire libraries now exist, but there's always some other need for a non-generic search function.)

I could write sloppy code which appears to be sufficient, but then realise that by holding this value here and keeping that register there, I can do the same thing just a fraction quicker than before.

It's not so much about going down to the assembler level anymore, but a tightly coded loop tuned by human intuition will almost always still be faster than anything an optimiser can give.

I recently had an issue with sorting collections containing thousands of non-trivial objects. Every time I adjusted the already fast quicksort, I gained a little more speed.

It's always been this way, and until genetic compilation and optimisation comes along, trying every combination, it will continue to be the case.

micro optimization (4, Insightful)

fred fleenblat (463628) | more than 9 years ago | (#11781298)

What you're talking about is micro-optimization.
Compilers are pretty good at that, and you should let them do their job.

Programmers should optimize at a higher level: through their choice of algorithms, by organizing the program so that memory access is cache-friendly, by making sure various objects don't get destroyed and re-created unnecessarily, that sort of thing.
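The cache-friendliness point, in miniature (a sketch; the array name and N are arbitrary). C stores arrays row-major, so the first loop streams through memory sequentially while the second strides N doubles at a time and spends its life waiting on cache misses, despite computing the same sum:

#define N 2048
static double grid[N][N];

double sum_rows(void) {             /* cache-friendly: sequential access */
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

double sum_cols(void) {             /* same result, cache-hostile access */
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];
    return s;
}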

Compiler Optimization (1)

sameerdesai (654894) | more than 9 years ago | (#11781300)

Isn't this a huge research topic in itself? I am confident that we will get better compiler optimizations as we go forward. That brings us to the next question, about code complexity. I have always believed that simpler code helps the compiler optimize better, apart from aiding readability. At the same time, when we talk about compiler optimizations, it is useful to understand the underlying hardware, which has the capacity to increase instruction-level parallelism (ILP) and introduce its own optimizations via pipelining.

.NET Has An *Easy* Way for Compiler Optimization (1)

PepeGSay (847429) | more than 9 years ago | (#11781302)

Take the two sample implementations and compile them. Then look at the MSIL that is created; if the two are the same, they were optimized to be identical. This works for most simple constructs, such as the one in the message body. If they are different, then you have to know the relative costs of what the MSIL is doing in order to figure it out.

Those who forget Tony Hoare... (5, Insightful)

smug_lisp_weenie (824771) | more than 9 years ago | (#11781303)

...are doomed to repeat the biggest trap in computer programming over and over again:

"Premature optimization is the root of all evil"

If there's only one rule in computer programming a person ever learns, "Hoare's dictum" is the one I would choose.

Almost all modern languages have extensive libraries available to handle common programming tasks and can handle the vast majority of the optimizations you speak of automatically. This means that 99.99% of the time you shouldn't be thinking about optimizations at all. Unless you're John Carmack, or you're writing a new compiler from scratch (and perhaps you are), or you're involved in a handful of other activities, you're making a big, big mistake if you're spending any time worrying about these things. There are far more important things to worry about, such as writing code that can be understood by others, can easily be unit tested, etc.

A few years ago I used to write C/C++/asm code extensively and used to be obsessed with performance and optimization. Then, one day, I had an epiphany and started writing code that is about 10 times slower than my old code (different in computer language and style) and infinitely easier to understand and expand. The only time I optimize now is at the very, very end of development, when I have solid profiler results from the final product that show noticeable delays for the end user, and this only happens rarely.

Of course, this is just my own personal experience and others may see things differently.

"time" is your friend (0)

Anonymous Coward | more than 9 years ago | (#11781304)

TIME(1)

NAME
       time - time a simple command or give resource usage

SYNOPSIS
       time [options] command [arguments...]

DESCRIPTION
       The time command runs the specified program command with the given
       arguments. When command finishes, time writes a message to standard
       output giving timing statistics about this program run. These
       statistics consist of (i) the elapsed real time between invocation
       and termination, (ii) the user CPU time (the sum of the tms_utime and
       tms_cutime values in a struct tms as returned by times(2)), and (iii)
       the system CPU time (the sum of the tms_stime and tms_cstime values
       in a struct tms as returned by times(2)).

Write C for C programmers (5, Insightful)

swillden (191260) | more than 9 years ago | (#11781306)

With regard to your example, I can't imagine any modern compiler wouldn't treat the two as equivalent.

However, in your example, I actually prefer "if (!ptr)" to "if (ptr == NULL)", for two reasons. First, the latter is more error-prone, because you can accidentally end up with "if (ptr = NULL)". One common solution to that problem is to write "if (NULL == ptr)", but that just doesn't read well to me. Another is to turn on warnings and let your compiler point out code like that -- but that assumes a decent compiler.

The second, and more important, reason is that to anyone who's been writing C for a while, the compact representation is actually clearer because it's an instantly-recognizable idiom. To me, parsing the "ptr == NULL" format requires a few microseconds of thought to figure out what you're doing. "!ptr" requires none. There are a number of common idioms in C that are strange-looking at first, but soon become just another part of your programming vocabulary. IMO, if you're writing code in a given language, you should write it in the style that is most comfortable to other programmers in that language. I think proper use of idiomatic expressions *enhances* maintainability. Don't try to write Pascal in C, or Java in C++, or COBOL in, well, anything, but that's a separate issue :-)

Oh, and my answer to your more general question about whether or not you should try to write code that is easy for the compiler... no. Don't do that. Write code that is clear and readable to programmers and let the compiler do what it does. If profiling shows that a particular piece of code is too slow, then figure out how to optimize it, whether by tailoring the code, dropping down to assembler, or whatever. But not before.
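The "turn on warnings" route from the second paragraph, sketched with gcc (the exact wording of the diagnostic varies by version):

#include <stddef.h>

int check(int *ptr) {
    if (ptr = NULL)   /* the classic typo for == */
        return -1;
    return 0;
}

$ gcc -Wall -c check.c
check.c: In function 'check':
check.c: warning: suggest parentheses around assignment used as truth value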

My .02 (1)

r_glen (679664) | more than 9 years ago | (#11781308)

In my experience (if you're using a half-decent compiler), ANY "hand-coded" optimization will save you AT MOST a few instructions or clock cycles, and only in rare cases. Anyone who really needs the extra .000000001% is probably coding in assembly already.
BTW, 'if (!ptr)' and 'if (ptr==NULL)' yield exactly the same code, but I prefer the former because I think it's clearer.

Computers are so fast nowadays (1)

CrazyJim1 (809850) | more than 9 years ago | (#11781310)

I don't worry about something unless it's NP-complete, O(n^2), or running over and over. It's not about individual statements, unless they're nested deep in a loop... in which case it's probably best to work in assembly. It's a high-level programming language for a reason: you're not supposed to care about the running time.

If it does matter, though, you want to analyze your code for where the slowdown occurs. Once you've isolated that part, you can try several techniques to speed up your code. Pick the one that runs the fastest.

Heck no! (0)

Anonymous Coward | more than 9 years ago | (#11781311)

That's why I code everything in pure assembly.

Look for my new OS coming to a computer near you in 2125!

This isn't as important as it used to be (0)

Anonymous Coward | more than 9 years ago | (#11781312)

When I started programming about, oh, 20 years ago, this kind of stuff was really important to me. But these days my number one concern is *maintainability*.

I write my code in the simplest, shortest, most readable way possible, and then benchmark and optimize after I'm done. 90% of the time I don't even have to optimize.

I think if you take this attitude, your projects will turn out a lot better. Machines are pretty fast, and CPUs do a lot of wacky stuff under the hood.

Now, if you actually do need the level of optimization you describe, you have only one choice: learn assembly, learn how your CPU works, and study the output of your compiler. For instance, if you want to know the difference between !ptr and ptr==NULL, compile it both ways and check the output. Repeat whenever you upgrade any part of your toolchain. Benchmark it too, because a bit of code that looks like it should be slower might actually run faster on today's CPUs!

Personally, I would use ptr == NULL because it's clearer what you mean. I only use "!" for boolean values. For example, when testing for the end of a string, I would use "*str == '\0'" instead of "!*str".

(Note, it's been a while since I've done hardcore C [Ruby is my language of choice these days], but I seem to recall that the C standard says ptr==NULL is the same as ptr==0 [even if the implementation doesn't use 0 as NULL internally, for some reason], so your two code snippets should be identical, I think.)
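For what it's worth, the two spellings of the string-end test compile identically too; the choice is pure style. A sketch:

#include <stddef.h>

size_t len_spelled_out(const char *s) {
    size_t n = 0;
    while (*s != '\0') { s++; n++; }
    return n;
}

size_t len_terse(const char *s) {
    size_t n = 0;
    while (*s) { s++; n++; }
    return n;
}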

Ask somewhere else... (-1)

Anonymous Coward | more than 9 years ago | (#11781314)

As a sign of how much the readership of this site has changed over the last 5 years, just watch the posts here. I'm guessing no more than 10 posts which give accurate information and insight into this good question. Most will be mundane, blase, droll, and modded highly for being "informative". To the OP, ask somewhere else. When you want to know if buying a PS3 or an XBOX2 is better, or why M$ $sux0rs, come back and ask here.

First, make the program WORK (0)

Anonymous Coward | more than 9 years ago | (#11781317)

Then, if the program seems slow where it shouldn't be, PROFILE. If profiling indicates that optimization should be applied, then and only then start rearranging things, first so that the algorithms and data structures provide better performance. If and only if that fails, or is for some reason inapplicable, should you start looking at things like low-level optimizations (pipelining, etc.).

Most of the time, crap like if(!ptr) is actually worse than if(ptr == NULL), since in the first case the pointer is first converted into a boolean value (or, in C89, an int) and then checked against false. If you are working with a compiler that was built after 1990, you'll do well to trust the compiler's own optimization routines over this sort of anal-retentive smartypants micro-optimization.

(Not to mention that if(!p) is downright evil if the pointer nature of "p" can't be determined from the immediate context. And anyhow, avoiding implicit conversions is usually the way to go in any language.)

#asm (1)

snookerdoodle (123851) | more than 9 years ago | (#11781320)

If you're really, truly, deeply concerned about how well a particular piece of code is optimized, the only way to be really, truly, deeply sure is to Do It Yourself for the particular processor you are using.

Unless it really needs to be optimized, and I speak as a person who reads others' code a LOT, PLEASE go for readability over obfuscility. ;-)

Mark

code should be written for people to read (5, Insightful)

SamSeaborn (724276) | more than 9 years ago | (#11781323)

"Programs should be written for people to read, and only incidentally for machines to execute."
- Structure and Interpretation of Computer Programs [tinyurl.com]

Just look at the assembly (0)

Anonymous Coward | more than 9 years ago | (#11781325)

You want to know? Easy.

Just write the bit of code you're curious about in both ways you'd like to compare, then look at the generated assembler. You don't even need to know much about assembler.

No need to Ask Slashdot about it.

Can be harder in C++ (1)

rrowv (582861) | more than 9 years ago | (#11781328)

I don't believe the compiler can be trusted with more complicated pieces of C++ code, as it can be too hard for a compiler to know for sure that an optimization won't be destructive. The biggest example is when using the STL (or just about any class with iterators like that). Take the following code:

for( iterator itr = items.begin(); itr != items.end(); itr++ )
{
//stuff
}

Will the compiler recognize that the return value of the postfix ++ operator isn't being used, and that it can therefore use the faster prefix operator? Probably not, because it won't know for sure that the two methods really do accomplish the same thing. While in general this is the case, it's possible that someone's code may not operate that way, causing weird bugs.

Stuff like this pops up more and more when you write code that makes heavy use of classes and templates. Because of that, I don't trust the compiler to figure out that it can use "++itr" instead, so I tell it to do that myself. That is just the first example I can think of, but there are many more.

With that said, you generally want to choose the form that is easiest to read, even if it is slower. Code maintainability is generally more important than speed, IMO.

Not a question that can be asked generally (4, Informative)

Sycraft-fu (314770) | more than 9 years ago | (#11781333)

Each compiler is different. Some will optimise things others won't.

In general, however, systems are now fast enough that, when in doubt, you should write the clearest code possible. For most apps speed is not critical, but for all apps stability and lack of bugs are important, and obscure code leads to problems.

Also, for things that are time critical, it's generally just one or two little parts that make all the difference. You only need to worry about optimizing those inner loops where all the time is spent. Use a profiler, since programmers generally suck at identifying what needs optimising.

Keep it easy to read and maintain, unless speed is critical in a certain part. Then you can go nuts on hand optimization, but document it well.

Don't forget about the... (-1, Redundant)

rdavidson3 (844790) | more than 9 years ago | (#11781336)

The IsNot patent. Microsoft is watching your code. ;)

optimize vs pessimize (1)

Dionysus (12737) | more than 9 years ago | (#11781342)

Sutter and Alexandrescu had some advice about this in C++ Coding Standards (I would think it applies to other languages too):
don't optimize prematurely, and don't pessimize prematurely.
Micro-optimization is useless unless you know for sure that a particular code segment is a bottleneck. OTOH, there is no reason you should pass something by value when you can get away with passing it by reference, or write i++ instead of ++i.
As someone pointed out earlier in the discussion, choosing the right algorithm is more important.

That desperate for a few cycles? Faster box! (0)

Anonymous Coward | more than 9 years ago | (#11781352)

Already have the fastest box? Wait six months.

When to hand optimize (1)

Ironsides (739422) | more than 9 years ago | (#11781353)

Hand optimize code in places that are called a lot and places that take a lot of processing power. Use the compiler for tasks that are not intensive. However, if you call one task a lot, either as a subroutine or in a loop, hand code it to be as optimized as you can. Likewise, routines that are heavily processor-intensive should be hand optimized to improve performance. Optimizing anything else won't provide as much of an overall boost in performance.

Check out the LLVM demo page (5, Interesting)

sabre (79070) | more than 9 years ago | (#11781354)

LLVM is an aggressive compiler that is able to do many cool things. Best yet, it has a demo page here: http://llvm.org/demo [llvm.org] , where you can try two different things and see how they compile.

One of the nice things about this is that the code is printed in a simple abstract assembly language that is easy to read and understand.

The compiler itself is very cool too btw, check it out. :)

-Chris

Hand optimization is dying (0)

Anonymous Coward | more than 9 years ago | (#11781355)

Processor tech and JIT compilers are killing hand-optimized code. Not only that, but hand-optimized code is often detrimental on AMD while beneficial on Intel, or vice versa.

IO Activity is performance killer... (0)

Anonymous Coward | more than 9 years ago | (#11781379)

Far too many programs read data from files periodically. Hard drive cycles are extremely expensive and, in my experience, one of the most common reasons for slow programs.

being careful. 2 week rule (1)

Triumph The Insult C (586706) | more than 9 years ago | (#11781382)

if you are going to maintain that code, just be sure you know what it does 2 weeks after you write it

<sarcasm>'cause you just now wrote it, and i don't know what it does</sarcasm>

What are you trying to do? (1)

Mannerism (188292) | more than 9 years ago | (#11781383)

Do you really need your code to run as quickly as possible? If so, by all means, use whatever coding tricks you want. Just be sure to add comments to deobfuscate.

OTOH, if fast enough is fast enough, then code clarity will pay off far more in the long run.

Not that I have massive amounts of experience... (1)

rivendahl (220389) | more than 9 years ago | (#11781385)

...but I'd say test your compilers. Find the right fit. If a compiler doesn't optimize, or doesn't optimize the way you'd want or expect, then change it. I've tried the GNU C++ compiler. I like it. But I don't care about optimization, so I didn't pick it based on that; I chose it based on free beer. A buddy of mine said it wouldn't work as well as Borland's. Depending on the task, it's better. So he owed me a case.

Regardless, since coders are a choosy lot, your best bet, in my mind, is to play around until you find the one you like.

I have to say I like the embedded-versus-desktop comparison. One would think embedded compilers use better optimization, but I would also think, based on some code I've seen, that embedded coders hand optimize as well. Is there such a thing as double optimization?

Rivendahl

Not Machine Performance but Programmer Performance (1)

RNG (35225) | more than 9 years ago | (#11781386)

These days, unless you're doing something really weird, it's not about machine performance but about programmer performance. For the average application, who cares if some calculation takes 0.005 or 0.006 seconds?

What does matter is that your code is clean, understandable, and easily maintained and extended by other programmers.

Quote (0)

Anonymous Coward | more than 9 years ago | (#11781394)

Someone else said this:

"Code is not about telling a computer what to do; it is about telling other people what you are telling the computer to do."

So, what is more intuitive?
if (!myPtr), or if (myPtr == NULL), or if myPtr is null?

C's terseness has only ever served to exclude other people and make them feel inferior.

Write for people first, machines are fast (0)

Anonymous Coward | more than 9 years ago | (#11781405)

Generally the compiler will do a good job optimizing code written for human understanding. Remember, computers are still getting faster, and that will more than compensate for most of the small tricks you are thinking of. As an earlier post said, a good choice of algorithm is more important than these little guys. So code it up and then see how it performs. If it gets the job done right (always the most important thing), and in a reasonable amount of time, you are finished.

If performance turns out to be a problem, the other trick is to find the place where your program spends most of its time (if there is one). Optimizing that, or part of it, will usually generate the needed improvement.

It doesn't matter (1)

marcus (1916) | more than 9 years ago | (#11781410)

If you write code and it needs optimizing, you will profile and hand optimize. If it does not need optimizing, you won't.

Simple.

You should already know the answer... (1)

David H (139673) | more than 9 years ago | (#11781411)

Maintainability is much more important than optimization. If you do find a slow section of code, it can almost always be optimized in a way that leaves it maintainable. Real hand-optimization involves using a better algorithm, reducing the number of function calls, reducing branches, and at worst, adding more variables to avoid performing the same calculations repeatedly within a loop (see the sketch below). None of these require you to compete in an obfuscated C contest!
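The "add a variable" trick from the parent, in miniature (hypothetical names; a decent compiler will often hoist this itself, which is rather the point of the thread, but the rewrite also makes the intent plainer):

#include <math.h>
#include <stddef.h>

void scale_slow(double *v, size_t n, double base) {
    for (size_t i = 0; i < n; i++)
        v[i] *= sqrt(base) / 2.0;            /* same value recomputed n times */
}

void scale_fast(double *v, size_t n, double base) {
    const double factor = sqrt(base) / 2.0;  /* computed once, outside the loop */
    for (size_t i = 0; i < n; i++)
        v[i] *= factor;
}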