High-level Languages and Speed

ScuttleMonkey posted more than 8 years ago | from the ever-changing-animal dept.


nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."


Slashdot (0)

Anonymous Coward | more than 8 years ago | (#15735439)

God, what has been happening? Can't OSTG afford a test server for Taco to mess around with?

Re:Slashdot (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#15735444)

is it working now?

In reality only two people actually use Linux (-1, Offtopic)

Flying pig (925874) | more than 8 years ago | (#15735452)

Torvalds doesn't read /. any more, and I've been too busy to do anything.

Re:Slashdot (2, Informative)

jamie (78724) | more than 8 years ago | (#15735467)

We had to make a change to our 'comments' table schema that would have locked up the site if we had allowed full access. At over 15M rows, this takes some time. Sorry about that.

Re:Slashdot (1)

wish bot (265150) | more than 8 years ago | (#15735491)

OK! Forgiven!

Re:Slashdot (1)

Yvanhoe (564877) | more than 8 years ago | (#15735521)

That, and the URL format changed, so old links don't work anymore.
The "Older Article" link returns a 503 error.
RSS links are no longer valid.

Should I file an entry in the bug tracker, or is it obvious enough?

Re:Slashdot (1)

jawtheshark (198669) | more than 8 years ago | (#15735526)

I understand these kinds of tasks are needed. No site goes without maintenance. Still, you could put a little notice at the top of the page: "Commenting disabled: site under maintenance". The ideal position would be where the text "Have you meta moderated today?" appears.

Old debate (5, Informative)

overshoot (39700) | more than 8 years ago | (#15735454)

Twenty years ago we were still in the midst of the "language wars" and this was a hot topic. The argument then, as now, was whether a high-level language could be compiled as efficiently as a low-level language like C [1].

Well, we ran our own tests. We took a sizable chunk of supposedly well-written time-critical code that the gang had produced in what was later to become Microsoft C [2] and rewrote the same modules in Logitech Modula-2. The upshot was that the M2 code was measurably faster, smaller, and on examination better optimized. Apparently the C compiler was handicapped by essentially having to figure out what the programmer meant with a long string of low-level expressions.

Extrapolations to today are left to the reader.

[1] I used to comment that C is not a high-level language, which would induce elevated blood pressure in C programmers. After working them up, I'd bet beer money on it -- and then trot out K&R, which contains the exact quote, "C is not a high-level language."
[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)

Re:Old debate (4, Insightful)

StarvingSE (875139) | more than 8 years ago | (#15735485)

C is not a low level language. If you're not directly manipulating the registers on the processor, you are not in a low level language (and forget about the "register" keyword, modern compilers just treat register variables in C/C++ as memory that needs to be optimized for speed).

If anything, C is a so-called mid level language. If it wasn't, you'd be using an assembler instead of a compiler.

Re:Old debate (1, Insightful)

dpilot (134227) | more than 8 years ago | (#15735487)

Ain't it great to know that Modula-2 - and essentially ALL of the strongly typed and structured languages - have pretty much died out. I did piles of stuff in M2, including reading and parsing legacy binary files, re-entrant interrupt handlers in DOS, etc.

Re:Old debate (2, Informative)

StrawberryFrog (67065) | more than 8 years ago | (#15735541)

essentially ALL of the strongly typed and structured languages - have pretty much died out.

Uh, Java and C# are strongly typed and structured languages.

Re:Old debate (1, Informative)

Anonymous Coward | more than 8 years ago | (#15735504)

"[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)"

Lattice

From a gray fox....

Re:Old debate (1)

3waygeek (58990) | more than 8 years ago | (#15735649)

You sure it was Lattice and not Wizard? Or am I thinking of Borland?

Re:Old debate (0)

Anonymous Coward | more than 8 years ago | (#15735677)

>Or am I thinking of Borland?

Nope, Borland is still alive (barely).

I wish they'd got Borland; VS might have been a better product.

Re:Old debate (3, Informative)

CapnOats.com (805246) | more than 8 years ago | (#15735599)

...trot out K&R, which contains the exact quote, "C is not a high-level language."

Actually, the quote from my copy of K&R, on my desk beside me, is:

C is not a "very high level" language...

emphasis is mine.

Re:Old debate (2, Informative)

cerberusss (660701) | more than 8 years ago | (#15735760)

It also says in the introduction (next page):
C is a relatively "low level" language.

Re:Old debate (0)

Anonymous Coward | more than 8 years ago | (#15735625)

[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)

I think this was Lattice C. Well, there are actually only a few products from Microsoft that have been developed by themselves from scratch...

Re:Old debate (5, Informative)

shreevatsa (845645) | more than 8 years ago | (#15735642)

For what it's worth, at The Computer Language Shootout, OCaml does pretty well. Of course, C is still faster for most things (but note that the really large factors, 29 and 281, are in OCaml's favour!), yet OCaml is fast compared to Java or Perl. Haskell does pretty well too. Functional programming, anyone?
Of course, these benchmarks measure only speed, are just for fun, and are "flawed", but they are still interesting to play with. If you haven't seen the site before, enjoy fiddling with things to try and get your favourite language on top :)

Re:Old debate (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15735738)

Yes, the guy's main point is valid - an optimizer needs lots of semantic information, and putting that information into a language tends to make that language higher-level. But C++ has lots of this metadata now, and it's still horribly low-level in use (lack of garbage collection makes sure of that). I spent a year prototyping a language myself which was just an extension to C with lots of metadata directed at the compiler. It was still a low-level language. So high vs. low doesn't determine speed; some other aspect of language design, as yet unnamed, is responsible for how fast the program ends up. And of course, the lowest-level language of all is your assembler, and with the complete chip specs in one hand and a keyboard in the other (and plenty of time!) a programmer can always beat any compiler.

Along those lines... (1)

tgd (2822) | more than 8 years ago | (#15735764)

Back in the '80s and early '90s I was doing a lot of programming in Modula-2 as well. (To be honest, I still miss the language to this day... I wish I knew why it never took off the way it seemed it should have.) One interesting feature of the compiler/IDE system I was using at the time (TopSpeed's) was that all their language compilers (M2, C, C++, etc.) compiled into an intermediate binary form, and their final compiler did very heavy optimizations on that "byte code".

It always tended to crank out really bizarre code, but usually at least as good as I could've hand-optimized directly in assembly.

I wonder what ever happened to them...

Bah (4, Insightful)

perrin (891) | more than 8 years ago | (#15735458)

So we "still can get good performance" from C? The implication is that C will somehow be overtaken by some unnamed high-level language soon. That is just wishful thinking. The article is not very substantial, and where it tries to substantiate, it misses the mark badly. The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1. The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it; IIRC the Intel compiler can. It is certainly not "impossible".

Still An Interesting Article (0)

Anonymous Coward | more than 8 years ago | (#15735486)

The article is not very substantial, and where it tries to substantiate, it misses the mark badly.
You're right, for the topic the article is not very substantial. Of course, he's addressing all high-level programming, so that's at least a book. And then he's addressing C, so that's at least another two books.

What I think the author may have been trying to get at with the SIMD part is that our machines are becoming multiprocessor-based. And not just multi-CPU: my machine right now has a CPU, a GPU, and probably some microprocessors for the chipset. I think that you missed his point that processors and machines have changed a lot. Thirty years ago things were different, and C was optimized to be the fastest. It probably still is the fastest, but that doesn't necessarily mean that new languages are slow - in fact, they may themselves be optimized to take advantage of multiprocessors better than classic C can. Again, that's not saying someone can't add that to gcc for a particular setup, but you can't ignore what the author is saying.

Re:Bah (5, Insightful)

TheRaven64 (641858) | more than 8 years ago | (#15735503)

The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1

You have two choices when using SIMD instructions in C:

  1. Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).
  2. Write non-vectorised code, and hope the compiler can figure out how to optimally decompose these into the intrinsics. Effectively, you think vectorised code, translate it into scalar code, and then expect the compiler to translate it back.
Compare the efficiency of GCC at auto-vectorising FORTRAN (which has a primitive vector type) and C (which doesn't), if you don't believe me.
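To illustrate option 1 above, here is a minimal sketch of explicit SIMD in C using x86 SSE2 intrinsics (an assumption: an SSE2-capable x86 target and a compiler providing `emmintrin.h`, as gcc, icc, and MSVC do). The same code would need a complete rewrite for, say, AltiVec - which is exactly the portability problem:

```c
#include <emmintrin.h>  /* SSE2 intrinsics -- x86-specific, non-portable */

/* Add two int arrays four lanes at a time.
   For brevity, n is assumed to be a multiple of 4. */
void add_vec(int *dst, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi32(va, vb));
    }
}
```

Option 2 amounts to writing the equivalent scalar loop and hoping the compiler reverse-engineers it back into the code above.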

The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it, and IIRC the intel compiler can. It is certainly not "impossible".

When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.

Re:Bah (1)

madcow_bg (969477) | more than 8 years ago | (#15735537)

> You have two choices when using SIMD instructions in C:
> 1. Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).
> 2. Write non-vectorised code, and hope the compiler can figure out how to optimally decompose these into the intrinsics. >Effectively, you think vectorised code, translate it into scalar code, and then expect the compiler to translate it back.

Why not create standard libraries with classes (well, only for C++ maybe, but worth a try) that use SIMD-optimized implementations? For example, if the library is compiled with gcc, it will be optimized; with other compilers it won't, but it will work either way. And someone could extend it...

> When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.

But if the function is in a pre-linked library there is no way to inline it, whatever language you use. Your example of libc is not a very good one, because you can use the source of glibc :).

Anyway, the article has a point, but the proposition that C can be slower with modern processors is not a sound one. But in the future, who knows?

Re:Bah (2, Informative)

gbjbaanb (229885) | more than 8 years ago | (#15735581)

It seemed to me the article was criticising C and trying to compare Java favourably - i.e., C is a low-level language that cannot be optimised, Java is a high-level language that can. Roughly.

It didn't say much at all otherwise, but it did have a nice collection of adverts.

Optimisation:
You don't have to hack around it; some compilers do it for you. The new MS compiler does a 'whole program optimisation' pass, where it links things together from separate object modules. It still cannot handle libraries, but then that's an issue that applies to all programs split into component parts. (Except, as the article implies, Java that uses the bytecode in class libraries... except when compiled to native code, as the first page of the article mentions as a way to boost speed. Can't have it both ways :-) )

Re:Bah (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15735564)

When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.

You say this as if C defines an object format and you can toss libraries around without assuming a particular compiler, linker, and loader facility, e.g. a specific C implementation such as GCC with the GNU toolchain!

C compilers can and do store intermediate forms in "object" files such that the linker can do final inter-procedural optimization at link time or even dynamic load time. The SGI Irix compiler did this, for example.

Re:Bah (2, Insightful)

rbarreira (836272) | more than 8 years ago | (#15735632)

Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).

Which usually isn't a big problem anyway, since the code sections where that's an advantage are usually quite small and infrequent. If you really need the performance, you can make the small sacrifice of inserting conditional-compilation directives with different code for each platform you are interested in.

It's certainly not an ideal solution but it's a very attractive one, and it has the advantage that you can have experts on each CPU optimizing the code of the platform they know best.

Re:Bah (0)

Anonymous Coward | more than 8 years ago | (#15735668)

experts on each CPU

We call that "compiler".

Re:Bah (5, Insightful)

Anonymous Coward | more than 8 years ago | (#15735518)

C is faster in the same sense that assembly is faster: You have more control over the resulting machine code, so the code can by definition always be faster. You can optimize by hand. But that comes at a price: You have to optimize by hand. That's why C isn't always faster, especially not when it's supposed to be portable. The question isn't whether there could be a faster program in a language of choice, it's whether a language is at the right level of abstraction for a programmer to describe what the program must do and not a bit more. Overspecification prevents optimization. If you write for (int i=0; i<100; i++) where you really meant for (i in [0..99]), how is the compiler going to know if order is important? The latter is much more easily parallelized, for example. C is full of explicitness where it is often not needed. Assembly even more so. That's the problem of low level languages.
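The parent's point about overspecification shows up concretely in C's pointer aliasing rules. A minimal sketch using C99's `restrict`, one of the few ways C lets you hand semantic information back to the optimizer:

```c
/* Without restrict, the compiler must assume dst may overlap src,
   forcing it to preserve the loop's element-by-element order --
   which often blocks vectorization. */
void scale_aliased(float *dst, const float *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * 2.0f;
}

/* The C99 restrict qualifier promises no overlap, so the compiler
   is free to reorder and vectorize the loop. */
void scale_restrict(float *restrict dst, const float *restrict src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * 2.0f;
}
```

Both functions compute the same thing; only the second tells the compiler that the iteration order was never part of the programmer's intent. (Some compilers can also emit runtime overlap checks for the first version, at the cost of extra code.)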

Re:Bah (1)

bhima (46039) | more than 8 years ago | (#15735582)

Yes, you are right, but I *really* pine for really good autovectorization.

GCC 4.1 does slightly worse than I can do myself, and Intel C does slightly better.

But I know some folks who work on compilers, and I know they can do better.

Inline functions (1)

amightywind (691887) | more than 8 years ago | (#15735663)

The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it, and IIRC the intel compiler can.

GCC supports inline functions and most other aspects of C99.
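For what it's worth, the classic C workaround for cross-file inlining is to define small functions as `static inline` in a shared header, so every translation unit that includes it gets its own inlinable copy without any link-time magic. A minimal sketch (the header name and function are hypothetical):

```c
/* util.h (hypothetical) -- defining the function in the header,
   rather than merely declaring it, lets each including .c file
   inline the call even though the compiler never sees other
   translation units. */
static inline int clamp(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}
```

The cost is a possible duplicate copy of the function in each object file, which the trick accepts in exchange for inlining.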

Re:Bah (0)

Anonymous Coward | more than 8 years ago | (#15735759)

A further note: most high-level language compilers, parsers, and libraries are written in C, essentially making the high-level language a frontend for the underlying C code. Though the lack of efficiency might not be noticeable on today's processors, there will always be some latency for any of the high-level languages, since they have to pass through their abstraction layer - a hindrance their low-level counterparts don't share.

High Level (4, Insightful)

HugePedlar (900427) | more than 8 years ago | (#15735459)

I remember back in the days of the Atari ST and Amiga, C was considered to be a high-level language. People would complain about the poor performance of games written in C (to ease the porting from Amiga to ST and vice versa) over 'proper' Assembly coded games.

Now I hear most people referring to C and C++ as "low level" languages, compared to Java and PHP and visual basic and so on. Funny how that works out.

I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.

Re:High Level (1)

rbarreira (836272) | more than 8 years ago | (#15735477)

Yes, I also think it's quite fun to program in assembly (if only debugging it were fun and easy too...). And as Randall Hyde says, the impact on project time of choosing assembly as a language isn't as big as people think; most of the time is usually spent on design anyway. As you say, the biggest problem today is portability... That's probably the biggest reason why programs written completely in assembly are rarer and rarer nowadays.

Re:High Level (0)

Anonymous Coward | more than 8 years ago | (#15735501)

Assembly still gets used a lot for microcontrollers, even micros that come with, or are even designed around, higher-level languages (the Parallax Propeller, for example). In general, you first write your code in whatever is available (C, Java, Spin, etc.) to make sure the algorithm works, then convert it to asm - sometimes by hand, and sometimes starting from the assembled high-level code and tweaking it.

Re:High Level (0)

hummassa (157160) | more than 8 years ago | (#15735565)

Assembly still gets used a lot for microcontrollers, even micros that come with, or are even designed around, higher-level languages (the Parallax Propeller, for example). In general, you first write your code in whatever is available (C, Java, Spin, etc.) to make sure the algorithm works, then convert it to asm - sometimes by hand, and sometimes starting from the assembled high-level code and tweaking it.
And then, you'll still have code slower than a good global-optimizing compiler would produce.

Re:High Level (1)

FinchWorld (845331) | more than 8 years ago | (#15735603)

Is that true?

I've just completed my first year at university doing electronic engineering; a good portion of it was microcontrollers, and although we could have used ANSI C for them (which we also learnt), we were recommended to use assembly on the basis that most in industry do, as portability seems a small issue when coding for microcontrollers.

Re:High Level (4, Insightful)

radarsat1 (786772) | more than 8 years ago | (#15735674)

No. Well, generally you'll have faster code if you write it in assembly. But things change when you enter the world of embedded programming... you're right, portability isn't AS important as speed. Sometimes. In certain parts of your program. But I recommend you DON'T disregard portability, even when it comes to microprocessors. In a real-world engineering project you never know when parts will change or become obsolete, and you don't want to be left having to translate thousands of lines of assembly code.

Rather, what's usually done is that most of the code is written in C, and only those parts that REALLY REALLY have to be optimized - interrupt handlers, for example - are done in assembly. People use assembly for routines that, for example, have to take exactly a certain number of instruction cycles to complete.

But it should be avoided as much as possible. It's just not worth losing the portability.

More and more these days, microprocessors are embedding higher level concepts, and even entire operating systems, just to make software development easier.

Re:High Level (1)

rbarreira (836272) | more than 8 years ago | (#15735608)

And then, you'll still have code slower than a good global-optimizing compiler would produce.

Do you have any credible proof of that extraordinary claim?

Assembler (4, Insightful)

backwardMechanic (959818) | more than 8 years ago | (#15735722)

Every serious hacker should have a play with assembler, or even machine code. There is real magic in starting up a uP or uC on a board you built yourself and making it flash a few LEDs under the control of your hand-assembled program. I found a whole new depth of understanding when I built a 68HC11-based board (not to mention memorizing a whole bunch of op-codes). Of course, I'd never want to write a 'serious' piece of code in assembly, and it still amazes me that anyone ever did!

Re:High Level (1)

Bazer (760541) | more than 8 years ago | (#15735748)

I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.

That would be better than the diversity we have today, wouldn't it?
No need to worry about portability of any kind.
Your hand-tuned assembler code would run on any machine with no effort at all, and with blazing speed.

Yeah, a shame.

Article is theory not practice - no measurements (2, Interesting)

ChrisRijk (1818) | more than 8 years ago | (#15735461)

Not really much "meat" here. The proof is in the pudding, as they say - but there are no benchmarks here, just some minor talk about how things should compare.

I don't agree with the basic premise of the article at all - but I've also written equivalent programs in C and more modern languages and compared the performance.

Re:Article is theory not practice - no measurement (3, Informative)

mrchaotica (681592) | more than 8 years ago | (#15735723)

The proof is in the pudding as they say

No, what they say is "the proof of the pudding is in the eating." (Just pointing it out because most people get it wrong.)

Inaccurate summary (4, Insightful)

rbarreira (836272) | more than 8 years ago | (#15735465)

The task of mapping C code to a modern microprocessor has gradually become increasingly difficult.

This is not true. What they mean, I think, is "the task of mapping C code to efficient machine code has gradually become increasingly difficult".

Re:Inaccurate summary (1)

10101001 10101001 (732688) | more than 8 years ago | (#15735689)

It's worse than that, really. It all comes down to CISC vs. RISC design. CISC is great for its ability to do all sorts of nifty functions directly in hardware. At the same time, CISC having all these nifty functions directly in hardware makes it rather hard to have portable code with consistent speed, as well as to write a compiler, for all levels of languages, that will have reasonable results. Put simply, the die space dedicated to the extra functions in CISC processors that could have gone towards more general improvements (e.g. more registers) instead goes towards a specific function (say, hardware sqrt), and it is then left to each compiler to figure out how to exploit these special functions.

In the end, the very highest-level languages have it best, as they remove the program from a lot of the low-level details, leaving the compiler free to choose instructions more abstractly. And the lowest-level languages (i.e., assembly) have it next best, as a person can program exactly what they mean without having to pray that the compiler does the "right" thing - but they're also required to do all the hand-optimizing themselves, instead of having a compiler crunch through and make overall more optimal choices than they're likely to. (And yes, a person skilled enough will almost certainly beat the compiler, but those people are by far the exception.) It's the middle languages, like C, with their overly explicit constructs, that are left in a situation where, as others have stated, the compiler has to guess what is meant and hope that doing not exactly what was written, but instead producing some probably-expected result, is the best maneuver.

And why do I spit all this out? Mostly because RISC processors in general avoid this. They're designed to be simple enough that there's little innate advantage at any language level. But differentiating each RISC processor drives feature creep towards a CISC design, which undermines the chief design decision from the start. And it's always that trade-off of just how often programs really need the feature you're adding, and whether the complexity, in the end, hurts a large group of developers. An ironic example of this, I would say, is the ARM line of processors, as conditional execution in ARM mode comes at the cost of all instructions being twice the size of the non-conditional instructions in THUMB mode. Having a compiler try to figure out which is "optimal" is, except in situations where bus bandwidth gives you a clear winner, an optimization problem without a simple answer.

The truth is, there's nothing modern about the difficulty of mapping C code efficiently to machine code. And for the most part, it's only marginally more difficult if you look at all extant processor lines.

It's very simple (4, Interesting)

dkleinsc (563838) | more than 8 years ago | (#15735484)

The speed of code written in a computer language is determined by the number of CPU cycles required to execute it. That means that the speed of any higher-level language depends on the efficiency of the code executed by the interpreter or produced by the compiler. Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.

If you don't believe me, I suggest you look at some of the assembly code output of gcc. I'm no assembly guru, but I don't think I would have done as well writing assembly by hand.
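This is easy to try yourself: put a small function in a file, compile it with `gcc -O2 -S`, and read the generated `.s` listing. A sketch (the function and loop bound are arbitrary):

```c
/* sum.c -- compile with:  gcc -O2 -S sum.c  and inspect sum.s.
   At -O2, gcc typically unrolls and/or vectorizes this reduction,
   producing assembly most people would not write by hand. */
int sum100(const int *a)
{
    int total = 0;
    for (int i = 0; i < 100; i++)
        total += a[i];
    return total;
}
```

Comparing the output at `-O0` and `-O2` makes the optimizer's contribution concrete.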

Re:It's very simple (4, Informative)

rbarreira (836272) | more than 8 years ago | (#15735514)

I'm no assembly guru, but I don't think I would have done as well writing assembly by hand

I don't believe this as much as the people who I see repeating that sentence all the time...

Not many years ago (with gcc), I got an 80% speed improvement just by rewriting a medium-sized function in assembly. Granted, it was a function which was itself half C code, half inline assembly, which might hinder gcc a bit. But it's also important to note that if the function had been written in pure C, the compiler wouldn't have generated better code anyway, since it wouldn't use MMX opcodes... Last I checked, modern compilers only generate MMX code from pure C when it's quite obvious that it can be used, such as in short loops doing simple arithmetic operations.

An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.

Re:It's very simple (2, Insightful)

hummassa (157160) | more than 8 years ago | (#15735574)

An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.
FOR ONE FUNCTION. If you programmed the whole system in asm, you'd see that the assembler+you combo would lose many optimization opportunities that a good compiler gets. And that's the whole point of the article.

Re:It's very simple (1)

rbarreira (836272) | more than 8 years ago | (#15735595)

Not if you're willing to throw time away on optimizing it. I'm not saying that it's necessarily good to have an expert assembly programmer doing an entire program (because of time, money constraints and because sometimes performance is already good enough). I'm just saying that the compiler is often WORSE than him at optimizing, nothing more than that.

Re:It's very simple (2, Interesting)

spinkham (56603) | more than 8 years ago | (#15735588)

True, since they can always start with the compiler output, and thus will at least do no worse.
The more interesting question is whether a person with only a passing familiarity with assembly can do better than the compiler, and the answer to that is usually no these days.

Re:It's very simple (1)

marcovje (205102) | more than 8 years ago | (#15735754)


True. However, the use of such optimizations is shifting. I usually gain more by optimizing malloc for the specific application than by mucking with assembler.

However, there is some image-recognition stuff where I still use (SSE) asm.

Re:It's very simple (3, Interesting)

jtshaw (398319) | more than 8 years ago | (#15735617)

Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.


In the past, most compilers were dreadful at optimizations. Now, they are just horrible. I guess that is an improvement, but I still believe there is a lot of good research to come here.

I do agree that the playing field has become pretty even. For example, with the right VM and the right code you can get pretty good performance out of Java. The problem is that "the right VM" depends greatly on the task the program is doing... certainly not a one-VM-fits-all, out-of-the-box solution (OK, perhaps you could always use the same VM, but app-specific tuning is often necessary for really high performance).

At any rate, people just need to learn to use the best tool for the job. Most apps don't actually need to be blindingly fast, so developing them in something that makes development go faster is probably more important than developing them in something to eke out a tiny performance gain nobody will notice anyway.

C is the 3vil (4, Funny)

Anonymous Coward | more than 8 years ago | (#15735488)

Isn't the JIT for Java written in C, though?

Ahah, now we know why my Java program is so slow. Damn C slowing it down.

Great article! (4, Funny)

TeknoHog (164938) | more than 8 years ago | (#15735500)

This is exactly what I've been saying over and over, why I think that e.g. Fortran is better than C in many respects. The main point is neatly summarized at the end:
the more information you can give to your optimizer, the better the job it can do. When you program in a low-level language, you throw away a lot of the semantics before you get to the compilation stage, making it much harder for the compiler to do its job.

Re:Great article! (1)

menkhaura (103150) | more than 8 years ago | (#15735563)

"Do what I mean"? Hehehe

What has changed? What should we change? (1)

rufty_tufty (888596) | more than 8 years ago | (#15735530)

Is this a surprise?
C is designed for classic Von-Neumann architecture machines using a stack based methodology and an attempt to give programmers all the rope they could want.
This is a good thing for solving one set of problems.
It also showed it can be adapted and built on to solve further more advanced problems.
This is also a good thing.

I would question, however, whether in these days of ever-increasing use of multiprocessors, MMUs, register-heavy CPUs, massive on-chip caches, huge latency to access memory, etc., the concepts embodied in C are still the best methodology.

I guess what I'm saying is I wonder if there is an equivalent medium-level language* around that better suits today's reality.
Let me take the Cell processor as a (possibly poor) example here:
The kind of tools I'd like to see for this (were a cell style processor to be used in my next desktop) would be:
1) Access to the low level assembler.
2) An extension to the C language (possibly a library) which would mean I could run all my existing code as-is on this multiprocessor machine, but I could then profile the code to make better use of new processor architectures in the interim until I migrate my code/programmers over to the new methodology.
3) High-level languages like Python, Perl, etc. running in some mode; for tools like this I normally don't care about performance, and with any luck, clever compilers/interpreters will do some of the multi-core management for me.
Although a new high-level language which allows instruction/task-level parallelism would be cool!
4) A new medium-level language (along the lines of occam) which would allow me a comparable level of control and automation over this multiprocessor, register- and cache-heavy system.

To me, 1 is implicitly required for any design, 3 comes for free once you have 2, and 4 is a way to progress forwards and gain productivity.
That leaves 2 as a stop-gap measure.
That is, all programs compiled using #2 would make very inefficient use of such a processor, but would maintain compatibility for the time being until we can move on to more appropriate things.
Will we ever leave C behind? Well, we never really left Fortran behind; it's still there. I don't see any more reason to keep C for modern desktop processors than there was to keep training most programmers in Fortran.

But will we ever move GNU off of C?
I won't be throwing out my C LRM yet ;-) Unfortunately...

* By my definition:
High-level language - hide all (or as many as possible) of the details of the machine from the programmer
Low-level language - expose the programmer to as many of the machine details as possible
Medium-level - make some parts of the machine automated (e.g. for loops), and leave some exposed to the programmer (e.g. memory management)
Yes, this is a sliding scale, so arguments as to where a language lies are always open to debate

Re:What has changed? What should we change? (0)

Anonymous Coward | more than 8 years ago | (#15735551)

Rubbish. C is as at home on a Harvard architecture as it is on a von Neumann one: the standard does not allow conversions between pointers to code and pointers to data.

It goes both ways (4, Interesting)

JanneM (7445) | more than 8 years ago | (#15735533)

Sure, CPUs look quite a bit different now than they did 20+ years ago. On the other hand, CPU designs heavily take into account what features are used by the application code expected to run on them, and one constant you can still depend on is that most of that application code is going to be machine-generated by a C compiler.

For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation at the time was still that a lot of code would be written directly by humans, so instructions and instruction-set designs catering to that use case were developed. But by around then, most code was machine-generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused; this was one impetus for the development of RISC machines, by the way.

So, as long as a lot of coding is done in C and C++ (and especially in the embedded space, where you have most rapid CPU development, almost all coding is), designs will never stray far away from the requirements of that language. Better compilers have allowed designers to stray further, but stray too far and you get penalized in the market.

Re:It goes both ways (4, Informative)

pesc (147035) | more than 8 years ago | (#15735650)

20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it).

While the VAX had some complex instructions (such as double-linked queue handling), it did not have a quicksort instruction.

Here [hp.com] is the instruction set manual.

Re:It goes both ways (1)

JanneM (7445) | more than 8 years ago | (#15735725)

Hm, I'm misremembering it then. Was it perhaps one of the late-model PDPs? I know one machine of that era had it, and of course now I can't remember correctly which one :(

High-level languages have an advantage (5, Insightful)

Bogtha (906264) | more than 8 years ago | (#15735534)

The more abstract a language is, the better a compiler can understand what you are doing. If you write out twenty instructions to do something in a low-level language, it's a lot of work to figure out that what matters isn't that the instructions get executed, but the end result. If you write out one instruction in a high-level language that does the same thing, the compiler can decide how best to get that result without trying to figure out if it's okay to throw away the code you've written. Optimisation is easier and safer.

Furthermore, the bottleneck is often in the programmer's brain rather than the code. If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations. High-level languages help with programmer productivity. I know that it's considered a mark of programmer ability to write the most efficient code possible, but it's a mark of software engineer ability to get the programming done faster while still meeting performance constraints.

Re:High-level languages have an advantage (5, Insightful)

Eivind (15695) | more than 8 years ago | (#15735577)

If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations.

Especially since you can combine the two. Even in high-performance applications, there's typically only a tiny fraction of the code that actually needs to be efficient; it's perfectly common to have 99% of the time spent in 5% of the code.

Which means that in basically all cases you're going to be better off writing everything in a high-level language and then optimizing only those routines that need it later.

That way you make fewer mistakes and get higher-quality code quicker for the 95% of the code where efficiency is unimportant, and you can spend even more time on optimizing those few spots where it matters.

Re:High-level languages have an advantage (2, Informative)

StormReaver (59959) | more than 8 years ago | (#15735739)

"If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations."

This sounds perfectly reasonable in theory. In practice, however, it's not. Users want speedy development AND speedy execution. I developed a Java image management program for crime scene photos, and the Sheriff Patrol's commander told me flat out: we'll never use this. It's too slow.

I rewrote the program using C++ and Qt, and gained a massive speed improvement. The Sheriff Patrol and detective units have been using it ever since, and they love it. I had been a Java booster for upwards of eight years until then. That was (roughly) three years ago, and I haven't written a line of Java since. I have, however, run my historic Java programs in SUN's most recent JVM. The newer hardware runs it faster, but Qt/C++ still smokes Java. Qt gives me speedy development, and C++ gives me fast execution. It's the best of both worlds.

Typical Java Handwaving (5, Insightful)

mlwmohawk (801821) | more than 8 years ago | (#15735540)

The first mistake: confusing "compile" performance with execution performance. The job of mapping C/C++ code to machine code is trivial.

I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages.

What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment.

The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant.

If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers or how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"

Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.

Re:Typical Java Handwaving (1)

lukas84 (912874) | more than 8 years ago | (#15735590)

I don't have as much experience as you do, and I am not a software developer either.

But i would like to offer an alternative point of view to the one you have.

I certainly don't understand everything my computer does. The main reason is that computers today are more complex than ever.

Several years ago, a system administrator might have known how a file is written to a hard disk, and how the hard disk calculates the appropriate checksum for the data it writes.

This is no longer the case, because these problems have been completely abstracted from us. The hard disk gets an amber light, and if that amber light is lit, the disk is broken.

This is so, because we can afford to have redundancy in our hard disks, and we no longer need to understand how they work exactly.

Why would this be any different for software developers? Many things can be abstracted, if you can afford the abstraction (which always comes at a cost). I don't think that this is wrong, because it helps us to create even better systems.

Take a look at today's cars. A few decades ago, you had a "trained" driver, who usually could dismantle the entire car and then build it up again. This is no longer the case. Many things in cars have been abstracted: you just turn a key to start your engine, or just press the gas and the gears get shifted automatically, and when braking, the ABS handles brake-force distribution automatically.

This is the same with newer environments like Java and .NET. Of course, an idiot might write nonsense code in .NET, but that doesn't mean .NET is a bad thing. An idiot can also crash his ESP-controlled car into the next tree if he goes into a curve at 150 km/h.

Re:Typical Java Handwaving (2, Insightful)

rbarreira (836272) | more than 8 years ago | (#15735662)

Of course, an Idiot might write nonsense code in .NET, but that doesn't mean .NET is a bad thing.

I think his point was not that abstractions are bad, but that not knowing what's happening behind the scenes isn't good.
Even to optimize .NET code, sometimes it's good to inspect the generated CIL (or even asm!) code in order to know why something isn't going fast.

Typical "/." Handwaving (5, Insightful)

Anonymous Coward | more than 8 years ago | (#15735600)

"I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages."

The "appeal to an expert" fallacy?

"What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment."

It also means that portability becomes ever harder, as well as adaptability to new hardware.

"If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers or how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?""

It's about algorithms. Computers just happen to be the most convenient means of trying them out.

"The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant."

With the trend towards VMs and virtualization, that "hypothetical" computer comes ever closer.

"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

Now who's handwaving?

Re:Typical Java Handwaving (2, Insightful)

iotaborg (167569) | more than 8 years ago | (#15735620)

If computer science isn't about computers, what is it about?

I was rather under the impression that computer science was the theory of computation, where the computer is simply a tool; just as much as a soldering iron is a tool in electrical engineering.

Re:Typical Java Handwaving (1)

rbarreira (836272) | more than 8 years ago | (#15735710)

The computer isn't simply a tool for helping computer scientists; it's much more than that. Nowadays, I'd say it's much more important to have computer science helping computer-related endeavors than the opposite.

Re:Typical Java Handwaving (5, Insightful)

cain (14472) | more than 8 years ago | (#15735626)

If computer science isn't about computers, what is it about?

"Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra (Dutch computer scientist; Turing Award, 1972; 1930-2002)

Sorry, you're arguing against Dijkstra: you lose. :)

Re:Typical Java Handwaving (1)

rbarreira (836272) | more than 8 years ago | (#15735678)

Sorry, but in the context of a rational argument, appeal to authority [wikipedia.org] is a logical fallacy. I respect Dijkstra a lot but that quote doesn't seem particularly accurate. Telescopes appeared to help astronomers. The case with computers/programming and computer science is the opposite.

Re:Typical Java Handwaving (1)

cain (14472) | more than 8 years ago | (#15735737)

Hmmm. Heard of emoticons? They look like this :) . That particular one means "smile," as in: this is a little joke.

Appeal to authority is not always a logical fallacy. It can also be a valid tool for supporting an argument; just see this [csun.edu] page for proof. :) If the authority is an actual authority on the topic at hand, then it increases the power of the argument. Are you suggesting that Dijkstra isn't an authority in CS? Good luck arguing that one. :)

(Note the smiley). :) Heh.

Re:Typical Java Handwaving (0)

Anonymous Coward | more than 8 years ago | (#15735741)

>Sorry, but in the context of a rational argument, appeal to authority is a logical fallacy.

As opposed to appeal to random /. posters? Considering the cluelessness displayed around here, I wouldn't be so quick to appeal to that if I were you.

Unless you have an actual response to Dijkstra's objections, I'll agree with the OP: you lose, hard.

Re:Typical Java Handwaving (1)

mrchaotica (681592) | more than 8 years ago | (#15735765)

Telescopes appeared to help astronomers. The case with computers/programming and computer science is the opposite.

No, it's exactly the same: computers arrived to help mathematicians (and scientists, and engineers, and writers, and everyone else).

Re:Typical Java Handwaving (0)

Anonymous Coward | more than 8 years ago | (#15735701)

You wrote:
"Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra quotes (Dutch computer Scientist. Turing Award in 1972. 1930-2002)

There is a LOT of debate about computer science, and while Dijkstra is well respected, he is certainly not the only source of wisdom on the subject. Computer science MUST include the real-world functioning of computers, otherwise it is merely calculus.

Re:Typical Java Handwaving (1)

cain (14472) | more than 8 years ago | (#15735746)

I <3 calculus! Yay calculus!

Re:Typical Java Handwaving (1)

rayzat (733303) | more than 8 years ago | (#15735669)

I see what you're saying, and some people do need to learn how to do the add-shift multiplier. I believe those are your computer engineers today, not your software engineers; they tend to be the ones involved in your low-level voodoo. But think of it like this: 20 years ago when you were in school, you spent 4 years learning how to program in C and assembly in order to implement add-shift multipliers. Today students still go to school for 4 years but have so much more to learn about. Where is the time for object-oriented programming techniques, advanced data structures, multi-threaded software design, comm. theory, vector programming, and everything else, if people are spending their time learning about add-shift multipliers or whatever archaic multi-step register operation you can think of?

Re:Typical Java Handwaving (1)

Jeff DeMaagd (2015) | more than 8 years ago | (#15735675)

I think the point was that computing performance is scaling up so much that the time spent laboring over human optimization isn't well spent if one can upgrade to a faster server/desktop/workstation for less money than the developer's time is worth. This is particularly true for small projects, where maintainability is important. Next year we will see two four-core CPUs on a workstation; the following year or two, it may be a single eight-core chip. How do you take good advantage of that in C? I don't know, and I've done a fair share of C programming. I know it can be done, but I think there are better ways to spend a developer's time.

Re:Typical Java Handwaving (4, Insightful)

arevos (659374) | more than 8 years ago | (#15735696)

The first mistake: Confusing "compile" performance with execution performance. The job of mapping C/C++ code to machine code is trivial.

I've designed compilers before, and I wouldn't class constructing a C/C++ compiler as "trivial" :)

If computer science isn't about computers, what is it about? I haate that students coming out of universities, when asked about registers and how would they write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"

One could also make the opposite argument. Many computer courses teach languages such as C++, C# and Java, which all have connections to low-level code. C# has its pointers and gotos, Java has its primitives, C++ has all of the above. There aren't many courses that focus more heavily on highly abstracted languages, such as Lisp.

And I think this is more important, really. Sure, there are many benefits to knowing the low-level details of the system you're programming on, but they're not essential to know, whilst it is essential to understand how to approach a programming problem. I'm not saying that an understanding of low-level computational operations isn't important, merely that it is more important to know the abstract generalities.

Or, to put it another way, knowing how a computer works is not the same as knowing how to program effectively. At best, it's a subset of a wider field. At worst, it's something that is largely irrelevant to a growing number of programmers. I went to a University that dealt quite extensively with low level hardware and networking, and a significant proportion of the marks of my first year came from coding assembly and C for 680008 processors. Despite this, I can't think of many benefits such knowledge has when, say, designing a web application on Ruby on Rails. Perhaps you can suggest some?

Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.

I disagree. I think software sucks because software engineers don't understand programming.

Re:Typical Java Handwaving (1)

rbarreira (836272) | more than 8 years ago | (#15735730)

Ultimately, I'd say not knowing what happens behind the abstractions is bad. I know (or can at least easily research and think about) everything that happens behind every line of .NET or Java code that you show me, and that is crucial for understanding debugging and optimization techniques. Can you say the same about most programmers out there?

Re:Typical Java Handwaving (4, Insightful)

Oligonicella (659917) | more than 8 years ago | (#15735726)

"The job of mapping C/C++ code to machine code is trivial."

Which machine, chum?

"I've been programming professionally for over 20 years..."

OK, bump chests. I've been at it for 35+. And? Experience doth not beget competence. There are uses for low-level languages, and those that require them will use them. Try writing a 300+ module banking application in assembler. By the time you do, it will be outdated; not because the language will change, but because the banking requirements will. Using assembler to write an application of that magnitude is like trying to write an encyclopedia article with paper and pencil. Possible, but 'tarded.

"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

More like, 'software sucks today for the same reason it always has: fossilized thinkers can't change to make things easier for those who necessarily follow them.' Ego, no more.

Yes, but is it worth it? (2, Informative)

Toreo asesino (951231) | more than 8 years ago | (#15735557)

Of course, lower-level languages can be faster, but I'd suggest that writing code at a very low-level is rarely worth the extra effort.

Take Quake II [slashdot.org] for instance; as quoted from the article 'the managed version initially ran faster than the native version' - which would suggest higher-level languages are certainly capable of comparing to that of their lower-level siblings.

Also, take into account the added developer time gained from factors like memory-management being, well, managed, and ever-falling processor & memory prices, and the logical conclusion is usually "write at a higher-level".

There are of course more considerations than these when deciding on a development platform, but essentially, I think there'd have to be very good reasons for writing green-field projects too close to the machine.

Speed of programming (0)

Anonymous Coward | more than 8 years ago | (#15735559)

People who are good at C will insist that they can code something as quickly as anyone else can code the same thing in a higher level language. Well, maybe. I got my own awakening when I watched two students struggling to write something in C that they could have written in ten lines of Basic (it was a while ago).

In the embedded world, programming DSPs is a wonderful example. We used to write assembly code for the parts that had to be fast because we could get tighter code than a compiler could produce. Now the tools are so good that even good programmers are better off letting the tools do the work.

So, is there any point writing in a low-level language, even where speed matters? I don't think so. In any event, it will take longer to write the code. It just doesn't seem like a winning proposition.

Single Page Version of the Article (2, Informative)

jaaron (551839) | more than 8 years ago | (#15735566)

Here's a print view [informit.com] of the article so that you don't have to keep moving through the pages. Despite that annoyance, it was a good article. I wish there had been more concrete examples though.

high level vs. low level 101 (1, Informative)

192939495969798999 (58312) | more than 8 years ago | (#15735568)

The criteria for a high-level language are: 1) you aren't allowed to do direct memory manipulation (i.e. you can't run off the end of an array into other areas), and 2) you are interpreted. Either of these can qualify a language as high-level. C has direct memory manipulation and is not interpreted, therefore it cannot be a high-level language.

Re:high level vs. low level 101 (0)

Anonymous Coward | more than 8 years ago | (#15735619)

(1) No it doesn't. The fact that C has no input, output, assembly or register access whatsoever is one of its distinguishing features. The closest thing to direct memory manipulation is typecasting an int to a pointer - a kludge well outside of normal language use.
(2) That's not even a real criterion. Besides, that's a property of implementations, outside the scope of the language itself.

Crap (1)

MarkSyms (167054) | more than 8 years ago | (#15735628)

I might give you 1, but 2 is complete crap. There are plenty of high-level languages that are not interpreted. I don't think anyone will call Ada anything but high-level.

Re:high level vs. low level 101 (1)

s31523 (926314) | more than 8 years ago | (#15735639)

Direct memory access has almost nothing to do with the language being used. This is a function of the underlying machine and OS. A protected-mode OS running with an MMU won't let you directly manipulate anything without going into "privileged mode". Sure, some languages might facilitate this better than others, but the ability to do it is provided by the OS and machine.

Re:high level vs. low level 101 (2, Insightful)

backwardMechanic (959818) | more than 8 years ago | (#15735751)

I love these hard definitions of soft concepts. Just because you write down some rules, it doesn't mean we follow them. Any programmer understands roughly what 'high level' and 'low level' mean, but I'm sure we'll all argue over where the boundaries are - they're not well defined. I guess you stopped at 101?

Some comments on the article (4, Insightful)

rbarreira (836272) | more than 8 years ago | (#15735586)

OK, the article isn't bad but contains a few misleading parts... Some quotes:

one assembly language statement translates directly to one machine instruction

OK, this is nitpicking, but there are some exceptions - I remember that TASM would automatically convert long conditional jumps into the opposite conditional jump plus an unconditional long jump, since there was no long conditional jump instruction.

Other data structures work significantly better in high-level languages. A dictionary or associative array, for example, can be implemented transparently by a tree or a hash table (or some combination of the two) in a high-level language; the runtime can even decide which, based on the amount and type of data fed to it. This kind of dynamic optimization is simply impossible in a low-level language without building higher-level semantics on top and meta-programming--at which point, you would be better off simply selecting a high-level language and letting someone else do the optimization.

This paragraph is complete crap. If you're using a dictionary API in a so-called "low-level language", it's just as possible for the API to do the same optimization as it is for the runtime he talks about; and you're still letting "someone else do the optimization".

When you program in a low-level language, you throw away a lot of the semantics before you get to the compilation stage, making it much harder for the compiler to do its job.

That's surely true. But the opposite is also true: when you use an immense amount of overly complex semantics, they can be translated into a pile of inefficient code. Sure, this may improve in the future, but right now it's a problem with very high-level constructs.

Due to the way C works, it's impossible for the compiler to inline a function defined in another source file. Both source files are compiled to binary object files independently, and these are linked.

Not exactly true, I think [greenend.org.uk]. Yes, the approach on that page is not standard C, but in section 4 he also talks about some high-level performance improvements which are still being experimented on, so...

Optimised compilers (1)

axlash (960838) | more than 8 years ago | (#15735589)

Interesting article... the main point seems to be that compilers have grown better at producing the most efficient machine code for a particular processor. Perhaps there's a market out there for processors that are optimised for specific languages (like C, given that there's still a lot of C code out there)?

What I didn't see in TFA... (4, Insightful)

s_p_oneil (795792) | more than 8 years ago | (#15735615)

I didn't see anything mentioning that many high-level languages are written in C. And I don't consider languages like FORTRAN to be high-level. FORTRAN is a language that was designed specifically for numeric computation and scientific computing. For that purpose, it is easy for the compiler to optimize the machine code better than a C compiler could ever manage. The FORTRAN compiler was probably written in C, but FORTRAN has language constructs that are better suited to numeric computation.

Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C. It is always possible for a C programmer to write inefficient code, but that same programmer is likely to write inefficient code in a high-level language as well.

I'm not saying high-level languages aren't great. They are great for many things, but the argument that C is harder to optimize because the processors have gotten more complex is ludicrous. It's the machine code that's harder to optimize (if you've tried to write assembly code since MMX came out, you know what I mean), and that affects ALL languages.

Re:What I didn't see in TFA... (1)

sayn_''Hello'' (249680) | more than 8 years ago | (#15735712)

> Most truly high-level languages, like LISP (which was mentioned directly in TFA),
> are interpreted, and the interpreters are almost always written in C.

While Lisp is frequently interpreted, nearly all major implementations of Common Lisp provide a compiler as well. Scheme, another Lisp dialect, also has implementations that compile to native code and/or C.

fundamental proof (0)

Anonymous Coward | more than 8 years ago | (#15735618)

The author comes up with a bunch of half-assed arguments for why higher-level languages aren't necessarily slower, and can be faster given certain conditions. Before we abandon all our C programs, where is the proof that this is true? I mean, there are things that higher-level languages fundamentally can't do that you can do in C, e.g. write a garbage collector.

They put the D in DUH (2, Informative)

billcopc (196330) | more than 8 years ago | (#15735634)

The main reason C is "faster" than high-level languages is that C doesn't cover bad programmers' butts with elaborate type checking, ref counting and garbage collection. Take a properly designed C app with graceful error handling and secure inputs, and you will take the same performance hit. Let's face it, most of the code we write in C involves error handling and idiot-proofing; high-level languages simply have built-in functionality for these boring, repetitive slabs of code we all hate writing.

I see no reason why a high-level application couldn't be compiled as skillfully as a feature-equivalent low-level application. It's just a matter of breaking down the code into manageable building blocks.
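The "idiot-proofing" cost is easy to see side by side. In this sketch (both function names are invented for illustration), the first loop is idiomatic C that simply trusts the caller, while the second adds the per-access bounds check a safe runtime would insert automatically:

```c
#include <assert.h>
#include <stddef.h>

/* Idiomatic C: the caller is trusted, no per-access checks. */
int sum_unchecked(const int *a, size_t n)
{
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* What a safe language effectively compiles: validate every access. */
int sum_checked(const int *a, size_t n, size_t capacity)
{
    int s = 0;
    for (size_t i = 0; i < n; i++) {
        assert(i < capacity);   /* the bounds check the runtime inserts */
        s += a[i];
    }
    return s;
}
```

A sufficiently smart compiler can often hoist or eliminate such checks, which is the article's point about high-level languages closing the gap.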

Re:They put the D in DUH (1)

bytesex (112972) | more than 8 years ago | (#15735676)

Yeah, because being able to push bits around directly in memory, anywhere you like (that is yours to touch), couldn't possibly have anything to do with it. Or having an API that is the same one your OS uses (which is mostly the case), which lets you make all sorts of system calls without checking or conversion; that can't have anything to do with it either.

Professionals use C for everything (0)

Anonymous Coward | more than 8 years ago | (#15735684)

C is portable, fast, very complex, and has been the leading standard for professional OS and app development for 35+ years.

C is so successful that C++ had to be invented to get more people into OO-style C programming. C++ was designed as a syntax aid for people who lacked the discipline to write OO in C using structs and function pointers.

C is obviously too complex for the average CS student, who lurches from one alternative to the next.

Java? .NET??? ...amusing.

Doesn't help me (1)

denjin (115496) | more than 8 years ago | (#15735715)

I found low-level languages work better with speed. I get too confused with a higher-level one when I use it. :(

See the language shootout (0)

Anonymous Coward | more than 8 years ago | (#15735762)

http://shootout.alioth.debian.org/ [debian.org]

You'll notice that C does pretty well. Note also, however, that several other high-level languages come in pretty close in terms of performance (and they usually win on program length). In particular, three modern functional languages (Clean, OCaml and Haskell) come in just behind gcc and well ahead of the JITed and bytecode languages.

So yes, high-level languages that compile to native machine code are fully competitive with C, and that's not even considering important things like programmer time, safety, maintainability, etc.

I'm currently working on a new String and IO library for Haskell, and we're getting idiomatic one-line programs that are within a few percent of equivalent C programs. This is nice because typically you have to uglify your high-level code to compete with C on performance.

OCaml (1)

Richard W.M. Jones (591125) | more than 8 years ago | (#15735774)

I was playing around with a neural network simulator in C and OCaml the other day, and was pleasantly surprised to find that the optimised OCaml version was just 3% slower than the optimised C version (using gcc, so perhaps the Intel compiler would have done better).

Thread on OCaml-beginners newsgroup here [yahoo.com] .

Rich.
