Overeager Compilers Can Open Security Holes In Your Code

Soulskill posted about 2 months ago | from the i-blame-the-schools dept.

Programming

jfruh writes: "Creators of compilers are in an arms race to improve performance. But according to a presentation at this week's annual USENIX conference, those performance boosts can undermine your code's security. For instance, a compiler might find a subroutine that checks a huge bound of memory beyond what's allocated to the program, decide it's an error, and eliminate it from the compiled machine code — even though it's a necessary defense against buffer overflow attacks."


old news from decades ago (4, Insightful)

iggymanz (596061) | about 2 months ago | (#47284407)

well known for decades that optimizing compilers can produce bugs, security holes, code that doesn't work at all, etc.

Re:old news from decades ago (5, Insightful)

NoNonAlphaCharsHere (2201864) | about 2 months ago | (#47284475)

That's why I always use a pessimizing compiler.

Re:old news from decades ago (1)

iggymanz (596061) | about 2 months ago | (#47284501)

ah, disgruntled and curmudgeonly old coot hand-writing assembly from a pseudo-code outline. Yeah, done that.

Re:old news from decades ago (1)

sinij (911942) | about 2 months ago | (#47284525)

Make sure to run it on DVL distro [securitydistro.com]

Re:old news from decades ago (1)

Kaenneth (82978) | about 2 months ago | (#47284651)

*cough* Microsoft .net

Re:old news from decades ago (5, Insightful)

KiloByte (825081) | about 2 months ago | (#47284647)

Or rather, that optimizing compilers can expose bugs in buggy code that weren't revealed by naive translation.

Re:old news from decades ago (2)

blackwizard (62282) | about 2 months ago | (#47284929)

I'm going with old news from decades ago [gnu.org] .

Re:old news from decades ago (3, Interesting)

itzly (3699663) | about 2 months ago | (#47285065)

That's an example of a programmer not understanding the rules of a conforming C/C++ compiler. It should be fixed in the source, not in the compiler.

Re:old news from decades ago (2)

tepples (727027) | about 2 months ago | (#47285201)

Perhaps the problem is that standard C isn't expressive enough to express some operations that some programs require, especially with respect to detection of integer arithmetic overflows.

Re:old news from decades ago (1)

dreamchaser (49529) | about 2 months ago | (#47285465)

I've always preferred inline assembly or linked asm routines for tricky bits, but the problem then is it's not portable.

Re:old news from decades ago (2)

russotto (537200) | about 2 months ago | (#47285901)

Perhaps the problem is that standard C isn't expressive enough to express some operations that some programs require, especially with respect to detection of integer arithmetic overflows.

Indeed; the compiler's even allowed to assume signed integer overflow doesn't happen, which is where you get into trouble. Yet we have this perfectly good mechanism for detecting integer overflow (condition codes) and no way to reach them from high-level languages (C isn't unique in this respect).

Re:old news from decades ago (4, Insightful)

Marillion (33728) | about 2 months ago | (#47284715)

Right. The other part of the issue is why didn't anyone write a test to verify that the buffer overflow detection code actually detects when you overflow buffers?

Re:old news from decades ago (3, Insightful)

AuMatar (183847) | about 2 months ago | (#47284749)

Because it worked in debug mode (which generally has optimizations off)?
Because it was tested on a compiler without this bug? The people writing the memory library are usually not the people writing the app that uses it.
Similarly, it was tested on the same compiler, but with different compiler flags?
Because that optimization didn't exist in the version of the compiler it was tested on?
Because the test app had some code that made the compiler decide not to apply the optimization?
Life is messy. Testing doesn't catch everything.

Re:old news from decades ago (0)

K. S. Kyosuke (729550) | about 2 months ago | (#47284953)

The other other part of the issue is why the hell aren't we programming en masse in safe languages in which the compiler adds its own checks (rather than removing yours)?

Re:old news from decades ago (1)

UnknownSoldier (67820) | about 2 months ago | (#47285129)

Because of TINSTAAFL.

Just because you want extra safety checks around _your_ code does not imply that I can afford the performance _penalty_.

C is a mid-level language because it tries to have zero overhead. If you want extra checking done, YOU tell the compiler so explicitly, so that it doesn't affect everyone's run-time performance implicitly.

While I agree it would be extremely useful to be able to have "safe arrays", and even to tag data with "volatile" to tell the compiler "Hey, this is important for security, don't optimize it away", we don't have that.

Re:old news from decades ago (3, Interesting)

K. S. Kyosuke (729550) | about 2 months ago | (#47285255)

I'd personally rather work in languages that are safe by default with optional (but available) extra performance/lower safety where explicitly instructed (the way Common Lisp does it, for example), rather than the other way around. I've come to the impression that most codebases would have fewer overrides in the former case rather than the latter. If you think the latter is preferable, what about all those bugs and security vulnerabilities we got "thanks" to that approach? Was it actually worth it?

Re:old news from decades ago (0)

Anonymous Coward | about 2 months ago | (#47285477)

You can build a safe library on top of a fast zero overhead one. You can't build a fast zero overhead library on top of a safe one.

Re:old news from decades ago (0)

Anonymous Coward | about 2 months ago | (#47285759)

The penalty is very much overblown and can be limited to less than about 10% of runtime. Government does not like secure code - this is the key reason for C being popular. They made sure.

Algol, Ada, Pascal - all way too secure.

Re:old news from decades ago (0)

Anonymous Coward | about 2 months ago | (#47285753)

The Algol machines of Burroughs were apparently much better than the C crapola we have now.

So The Powers made C-based crap dominate the world of computing. Now have a cheap joke on military intelligence while they laugh all the way to Ft Meade.

Unsable Code, again (5, Informative)

Anonymous Coward | about 2 months ago | (#47284413)

This is just as poorly written up as last time [slashdot.org] . These are truly bugs in the programs using undefined parts of the language. It's silly to blame the compiler.

Re:Unstable Code, again (0)

Anonymous Coward | about 2 months ago | (#47284551)

Apparently, no one told USENIX about this 22 year old revelation. [catb.org]

(also, I was compelled to fix the typo in my Re: of your title, unless you really meant "Code unrelated to sables [wikipedia.org] ", which I assume would cover most code)

Re:Unsable Code, again (1)

Anonymous Coward | about 2 months ago | (#47285009)

At least then the summary didn't contain this turd:

For instance, a compiler might find a subroutine that checks a huge bound of memory beyond what's allocated to the program, decide it's an error, and eliminate it from the compiled machine code

Which is wrong and not even in the article. Jfruh probably heard something about overflow semantics, didn't understand a word of it, and then made something up.

Re:Unsable Code, again (2)

david_thornley (598059) | about 2 months ago | (#47285169)

Actually, that makes sense.

Suppose we want to check for buffer overflows in C++, so we allocate a stretch of memory beyond the buffer, zero it out, and check it to see if it's zeros later. Depending on how the buffer is used, it may be that a buffer overflow would require undefined behavior, such as accessing memory beyond the limits of a data structure. An optimizing compiler might figure that out. In event of undefined behavior, anything the compiler does is conforming, and in event of no undefined behavior the stretch of memory will remain zeros, so in either case the check for zeros is unnecessary for a program, and the compiler can remove it by the "as-if" rule.
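
A rough sketch of that reasoning in code (the names, sizes, and the guard-region trick here are illustrative only, not taken from the article):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void process(const char *input)
    {
        char buf[64];
        char guard[16];                  /* hand-rolled guard region */
        memset(guard, 0, sizeof guard);

        strcpy(buf, input);              /* may overflow buf: undefined behavior */

        /* In any execution without undefined behavior, guard is still all
           zeros here, so a compiler may fold this whole loop away under
           the as-if rule -- which is exactly the check we wanted to keep. */
        for (size_t i = 0; i < sizeof guard; i++) {
            if (guard[i] != 0) {
                fprintf(stderr, "buffer overflow detected\n");
                abort();
            }
        }
    }

(Whether guard even sits next to buf in memory is itself unspecified, which is another reason this kind of hand-rolled check is fragile.)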

Re:Unsable Code, again (1)

Nutria (679911) | about 2 months ago | (#47285291)

stretch of memory beyond the buffer, zero it out, and check it to see if it's zeros later.

I don't remember the correct term for such regions, but nulls are a very bad sequence to put there, since... they can't be distinguished from actual, purposeful zeros.

Better to initialize the area with something like 0xDEADBEEF.

Re:Unsable Code, again (1)

Anonymous Coward | about 2 months ago | (#47285667)

Canaries.

Re:Unsable Code, again (1)

Anonymous Coward | about 2 months ago | (#47285225)

i see this as very simplistic thinking. if the c standard were still 20 pages or so, i would agree with you. but the amount of undefined behavior is just nuts. and the compilers just laugh. they don't always issue diagnostics when optimization removes code. no! that would make life easy, and clearly programming is a man's sport.

tl;dr: checking for overflow is valid, and the standard's writers were not considering all the possibilities when they left such behavior undefined.

UNISEX conference (5, Funny)

Mdk754 (3014249) | about 2 months ago | (#47284421)

Wow, you know you're ready to go home when it's Friday afternoon and you read:

But according to a presentation at this week's annual UNISEX conference

Re:UNISEX conference (1)

smitty_one_each (243267) | about 2 months ago | (#47284621)

Where "Overage" compilers go to be seen.

Re:UNISEX conference (1)

PPH (736903) | about 2 months ago | (#47284689)

I was wondering why my compiler was generating warnings about traps. I thought it meant something about catching buffer overflow conditions.

Re:UNISEX conference (1)

OakDragon (885217) | about 2 months ago | (#47284915)

So do overeager compilers suffer from premature optimization?

Re:UNISEX conference (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47284963)

Did you just type that on your Unicomp keyboard attached to your Univac?

Complete nonsense.... (3, Insightful)

Anonymous Coward | about 2 months ago | (#47284427)

Any code removal by the compiler can be prevented by correctly marking the code with volatile (in C) or its equivalent.

Re:Complete nonsense.... (2, Informative)

Anonymous Coward | about 2 months ago | (#47284619)

Except not, so now we have explicit_bzero()

Re:Complete nonsense.... (1, Insightful)

vux984 (928602) | about 2 months ago | (#47284707)

Any code removal by the compiler can be prevented by correctly coding the code with volatile (in C) or its equivalent.

Knowing that the code will be removed by the compiler is a prerequisite for using the volatile keyword.

That requires knowing a lot more about what the compiler is going to do than one should presume. The developer should not have to have foreknowledge of what compiler optimizations someone might enable in the future, especially as those optimizations might not even be known in the present.

The normal use case for 'volatile' is to tell the compiler that you know and expect the memory will be modified externally; and beyond that you don't really need to know exactly what the compiler does. As the developer reasonably knows that the memory is supposed to be modified externally, it is not unreasonable that he mark it as such.

Most security related vulnerabilities arising from compiler optimization tend to revolve around the idea that you are defending against memory being modified externally that should not normally be modified or read from externally. But that is the DEFAULT assumption for all memory.

If you do not expect the memory to be written to or read from, you don't mark it volatile.

So now you are saying, well, if you REALLY don't expect the memory to be written to or read from externally, then you should mark it volatile.

So... that has us marking everything volatile that we know will be modified or read from externally, and also everything that we know should NOT be modified or read from externally... so clearly we should mark everything volatile all the time... because pretty much all memory is either "supposed to be read and written to externally" or "not supposed to be read and written to externally". The only situation where it's safe not to use volatile, then, is when you don't care whether it's read or written externally, because you KNOW that it can't cause a bug or expose a vulnerability.

I can't offhand think of many situations where I could say with any degree of certainty that if something read or wrote to memory externally it wouldn't matter, and it would rarely be the best use of my time to try and establish it... so really... mark everything volatile all the time.

Clearly THAT isn't right.

Re:Complete nonsense.... (2)

Threni (635302) | about 2 months ago | (#47284981)

---
I can't offhand think of many situations where I could say with any degree of certainty that if something read or wrote to memory externally it wouldn't matter, and it would rarely be the best use of my time to try and establish it... so really... mark everything volatile all the time.

Clearly THAT isn't right.
---

Yeah, that would be a poor design. You use volatile when you need to. That's the rule. You just need to work out when you need to.

> If you do not expect the memory to be written to or read from, you don't mark it
> volatile.

> So now you are saying, well, if you REALLY don't expect the memory to written
> to or read from externally, then you should mark it volatile.

I'm not sure you typed that in right, or maybe you don't understand what volatile means or when to use it.

Re:Complete nonsense.... (1)

vux984 (928602) | about 2 months ago | (#47285643)

I'm not sure you typed that in right, or maybe you don't understand what volatile means or when to use it.

No I typed it in right. That's the point. To guard against security flaws, you need to specify volatile in situations where you explicitly expect it not to be volatile; so that any sanity checks you put in place don't get optimized out as redundant/dead code.

For security you effectively have to assume all memory is volatile, and that it might be changed when it shouldn't be.

For a contrived example,

    void somestuff(char *buf);          /* defined elsewhere */
    void error(const char *msg);        /* defined elsewhere */

    static char buffer[256];
    static int x;

    /* ... */

    void somefunction(void)
    {
        x = 7;
        for (;;) {
            int y = x;
            somestuff(buffer);          /* this doesn't adjust y or x */
            if (y != x)                 /* sanity check */
                error("WTF, something's horribly wrong! abort!");
        }
        /* ... */
    }

Neither y nor x is volatile; they aren't supposed to be modified except as assigned above. So why should I mark anything volatile?

Because during somestuff() the 'buffer' got overrun and it wasn't detected, so x got overwritten during the loop. I had a sanity check, but the compiler optimized it out because it thought nothing updated x or y.

So now I have to mark x volatile to restore the security check? To tell the compiler, that even though nothing should have altered x or y, I need it to run the sanity check to make sure of that.

(Yes, yes, the real problem is in somestuff() where the buffer got overrun in the first place, but that's someone else's code or whatever.)

I have to mark x volatile when it's NOT supposed to change externally. It's only 'volatile' because of a 'bug'. It's not volatile by design. If I don't mark it volatile then the sanity check gets optimized away, and the software is vulnerable.

The trouble is the circumstances of x are the same as ANY other memory in my system, so now it all has to be marked volatile, to ensure my sanity checks evaluate. That's not good.

Re:Complete nonsense.... (1)

david_thornley (598059) | about 2 months ago | (#47285267)

Except that "volatile" means that the memory might be accessed through methods other than the program, which is the exact sort of thing we want to test for. In C++, "volatile" means that all memory accesses must be performed in the given order, and in proper order with other volatile memory accesses and calls to I/O routines. This removes a lot of optimization possibilities, which is why we'd generally rather not call variables "volatile". For a buffer overflow test, we know that the buffer can't be changed by the program if it's working as intended, so the effect is that we zero the memory, process, and after all I/O is done we check the buffer. That's precisely what we want here.

If we make the actual buffer "volatile", we really don't gain anything. We know it's going to change, and we won't learn much from examining it later.

Re:Complete nonsense.... (1)

vux984 (928602) | about 2 months ago | (#47285661)

Except that "volatile" means that the memory might be accessed through methods other than the program,

Exactly right.

The trouble is that in the case of a bug or vulnerability, even non-volatile memory might be accessed or updated when it's not supposed to be. And any sanity checks or range checks or bounds checks you might write to try and close the potential holes require the non-volatile memory to be marked volatile, to prevent the compiler optimizing out the checks.

See my other reply for a contrived example of what I'm trying to explain.

Re:Complete nonsense.... (0)

Anonymous Coward | about 3 months ago | (#47285981)

In reply to #47284427:

The way I learned about volatile is that it is to be used for memory that is modified out of the scope in which it is used.

If you have that understanding of volatile, then it fits perfectly with its use to prevent a memset() from being removed. So I would, and do, use volatile to tell the compiler that I know the "true" scope of the memory it references, and that it should perform each access on that memory in the order and with the frequency I wrote. So, this sequence is important for a theoretical hardware register --

    int const volatile *a_register = (int const volatile *) 0xffff0002;

    int main(int argc, char *argv[])
    {
        int dummy_read = *a_register;
        dummy_read = *a_register;   /* Must read twice to reset the state */
        (void)dummy_read;           /* the values themselves are unused */

        return 0;
    }

Ask any hardware guy - this is how you program hardware.

Note that volatile has the same syntactical construction as the const keyword, i.e. right-to-left association.

Re:Complete nonsense.... (1)

NoNonAlphaCharsHere (2201864) | about 2 months ago | (#47284713)

volatile is a type qualifier, meaning that something else (e.g. another process, or the hardware) might modify the memory location, so the compiler shouldn't remove reads even though it knows that you haven't modified the location since it was last read and there might still be a copy left lying about in a register. Even if you apply it to the buffer, it doesn't mean that the compiler can't decide that memory you didn't allocate doesn't belong to you anyway and remove the check. Additionally, volatile typically applies to single locations and not to whole buffers.

Re:Complete nonsense.... (1)

Threni (635302) | about 2 months ago | (#47284999)

Yes. The example I remember - because it was when I learnt about it - was on the Amiga, where you had a memory location into which the hardware mouse buttons were mapped. Some compilers would see you reading it, then reading it again later, and would go "ah! that location hasn't changed in between, so I'll store the state of the left mouse button somewhere (register, some other memory location - doesn't matter, it's up to the compiler how it does its stuff) and present it to the user a little bit later on". Obviously the mouse button may have changed state between the first and the second read.

Re:Complete nonsense.... (0)

Anonymous Coward | about 2 months ago | (#47285783)

Any code removal by the compiler can be avoided by avoiding the C insanity. The insanity brought to you by a business partner of NSA and a branch of U.S.G.

Now, let the propaganda against the Swiss language PASCAL roll in. Please also "explain" why Algol was too perfect to be used. Except in those Unisys mainframes, of course. Explain that too.

Complete nonsense.... (1)

khb (266593) | about 2 months ago | (#47285801)

"...decide it's an error.."

No, it is an "optimizing" compiler not a "correcting" compiler. The optimizer can detect that no language defined semantic will be changed by removing the code, so it does. As others have noted, "volatile" is the fix for this particular coding / compiler blunder. However ill-defined, it is *not an error*.

As for the folks commenting that only C can run in small embedded processors, that's hogwash. Huge mainframes of the early ages had smaller memory sizes and ran FORTRAN (now Fortran, but then it was all caps), COBOL, PL/I (and PL.8 for IBM internals), Algol and other languages. Most made entire classes of C blunders impossible, and there is no fundamental reason why we couldn't go back to safer languages for embedded programming (and good reasons why we ought to; not that I expect we shall).

Bad summary is bad (4, Informative)

werepants (1912634) | about 2 months ago | (#47284439)

This is not really about the existence of bad compiler optimization - it is about a tool called Stack that can be used to detect this kind of problem, known as "unstable" code, and which has already been used to find lots of vulnerabilities.

Re:Bad summary is bad (1)

14erCleaner (745600) | about 2 months ago | (#47284939)

Actually it's about non-standard-conforming "security" hacks causing unexpected results. If the result of an operation is undefined, the compiler can insert code to summon Cthulhu if it wants to.

Old news (4, Informative)

Anonymous Coward | about 2 months ago | (#47284463)

I know that at least GCC will get rid of overflow checks if they rely on checking the value after overflow (without any warning), because C defines that overflow on signed integers is undefined. This is even documented. If anything is declared by the language specification as being undefined, expect trouble.
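
For example (a sketch added for illustration, with an invented function name, not text from the GCC documentation), a check written in this after-the-fact style can be silently deleted:

    /* Intended meaning: "fail if x + y wrapped around". Because signed
       overflow is undefined behavior, the compiler may assume it never
       happens, conclude the condition is always false, and remove the
       branch -- typically without any warning. */
    int add_or_die(int x, int y)
    {
        int sum = x + y;
        if (y > 0 && sum < x)        /* post-hoc wraparound test */
            return -1;               /* this "overflow" path may vanish */
        return sum;
    }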

Re:Old news (0)

Anonymous Coward | about 2 months ago | (#47285761)

The proper way to check is to calculate if an overflow will happen instead of checking to see if an overflow did happen. Stupid lazy programmers, shortcuts, and deadlines.
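
A sketch of that pre-condition style, using only defined behavior (the helper name is invented):

    #include <limits.h>

    /* Decide whether x + y would overflow *before* performing the addition. */
    int add_checked(int x, int y, int *out)
    {
        if ((y > 0 && x > INT_MAX - y) ||
            (y < 0 && x < INT_MIN - y))
            return -1;               /* would overflow */
        *out = x + y;
        return 0;
    }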

Misleading summary, slashdot (2, Insightful)

Anonymous Coward | about 2 months ago | (#47284505)

The kinds of checks that compilers eliminate are ones which are incorrectly implemented (depend on undefined behavior) or happen too late (after the undefined behavior already was triggered). The actual article is reasonable— it's about a tool to help detect errors in programs that suffer here. The compilers are not problematic.
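
The "happens too late" case looks roughly like this (a contrived sketch, not an example from the article): the check only runs after the undefined behavior it was meant to guard against.

    struct dev { int flags; };

    int read_flags(const struct dev *d)
    {
        int flags = d->flags;   /* undefined behavior if d is NULL */
        if (d == NULL)          /* too late: the compiler may now assume
                                   d != NULL and delete this branch */
            return -1;
        return flags;
    }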

Re:Misleading summary, slashdot (1)

bAdministrator (815570) | about 2 months ago | (#47284665)

That's true, but also missing the point.
The main issue is that these situations are not always that obvious in real world code.

What's the would-overflow operator? (1)

tepples (727027) | about 2 months ago | (#47285243)

So what's the standard-conforming way to determine whether a particular integer operation will not overflow? And are compilers smart enough to optimize the standard-conforming way into something that uses the hardware's built-in overflow detection, such as carry flags?

Re:What's the would-overflow operator? (0)

Anonymous Coward | about 2 months ago | (#47285815)

You have to use the next-larger data type to make the test. So to securely add to int32 you need to use an int64.

WHICH IS OF COURSE INSANE SHIT. How do you test 64 bit additions for overflow ?

Here's the protip: Avoid the Bell Labs Cancer and use Ada. They have all of this nicely defined in the language specification.
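
For what it's worth, there are defined-behavior ways to do this in C. A sketch of both approaches (assuming GCC or Clang for the builtin; C23 standardizes a similar ckd_add in <stdckdint.h>):

    #include <stdbool.h>
    #include <stdint.h>

    /* GCC/Clang builtin: returns true on overflow; on most targets this
       compiles down to an add followed by a test of the overflow flag. */
    bool add64_checked(int64_t a, int64_t b, int64_t *out)
    {
        return __builtin_add_overflow(a, b, out);
    }

    /* The "next larger type" trick from the parent post, for 32-bit operands. */
    bool add32_checked(int32_t a, int32_t b, int32_t *out)
    {
        int64_t wide = (int64_t)a + (int64_t)b;
        if (wide < INT32_MIN || wide > INT32_MAX)
            return true;             /* would overflow int32_t */
        *out = (int32_t)wide;
        return false;
    }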

Simple. (0)

MartinG (52587) | about 2 months ago | (#47284529)

Use a language that does bounds checking automatically. It's not the 1970s any more.

Simple. (0)

Anonymous Coward | about 2 months ago | (#47284593)

#fail

It's not just bounds checking -- if you want to securely compare 100 bytes, you need to compare all 100 bytes. An optimized algorithm (intentionally or via compiler) that exits after the first mismatch leaks information. This isn't some pie in the sky hypothetical attack, it's been used to break passwords since the 70s (if not earlier).
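
The usual countermeasure is a comparison that touches every byte regardless of where the first mismatch is. A minimal sketch (the function name is invented; note that standard C gives no hard guarantee the optimizer won't reintroduce an early exit, which is why real implementations lean on volatile or assembly):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns 1 if the buffers are equal, 0 otherwise, in time that does
       not depend on the position of the first differing byte. */
    int ct_equal(const void *a, const void *b, size_t n)
    {
        const volatile uint8_t *pa = (const volatile uint8_t *)a;
        const volatile uint8_t *pb = (const volatile uint8_t *)b;
        uint8_t diff = 0;

        for (size_t i = 0; i < n; i++)
            diff |= pa[i] ^ pb[i];   /* accumulate differences, no branch */

        return diff == 0;
    }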

Re: Simple. (1)

MartinG (52587) | about 2 months ago | (#47284885)

Indeed. But that's a different class of problem. Or are compilers optimising constant time comparison routines to not run in constant time these days?

Re: Simple. (0)

Anonymous Coward | about 2 months ago | (#47285839)

I think he is one of the Nihilist Shills who essentially sides with the government in their Cyber War Efforts: "you need a way to control the plebs and their computers".

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47284597)

Which one those will run on the billions of embedded devices with limited memory and processing power? Oh right, none of them.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47284677)

Shhh, you may scare him off and he wouldn't realize what JIT means. (Meaning just-in-time compilation....) So those 'safe' languages can have the exact same problem with the code they compile. Even worse, you wouldn't know, because the emitted code is even more ephemeral...

It's not the 1970s any more.
Apparently they don't teach how to think anymore and just regurgitate some meme that sounded sort of good...

Re:Simple. (2)

Desler (1608317) | about 2 months ago | (#47284775)

The problem is that most programmers have never had to get their hands dirty doing embedded work. They live in a bubble that ignores all the memory/storage/processing-power constrained devices all around them. OpenSSL, for example, as used in something like DD-WRT would be unusable if it was written in anything but C or possibly C++.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285867)

So we use a shitpile of C that gives an illusion of security? Plus, your claim is probably FALSE.

I write in C++ for a device with 4096 bytes of RAM and about 32kByte of ROM. Works like a charm.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47284919)

JIT means "I don't give a shit what the code really does, just that it compiles"

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47284625)

Unfortunately for those languages, the entire world does not run x86 or other workstation-class or better CPU. Which one of those will run on, for example, the hundreds of millions of 16-bit microcontrollers in wide use? Or MIPS chips in memory-constrained devices like consumer routers? For those requirements, the only usable portable language is C.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47284637)

C and C++ became popular because of UNIX and Linux, not necessarily because they were technically good solutions. At the time (1970s), C was recognized as a step backward by those in the know about modelling computing systems.

Re:Simple. (3, Insightful)

Desler (1608317) | about 2 months ago | (#47284739)

C became popular because it was vastly more portable and performant than its predecessors. It still is today. None of those "better" languages that came before it or after it can beat that. And yes, extreme portability does matter when you have 100s of millions if not billions of devices that can't run anything but assembly or C. It's why the people saying that OpenSSL should be written in Java or C# are morons. Care to tell me how that's going to run on, for example, a Linksys WRT54G with only 8 or 16 MB of RAM, 2 to 4 MB of Flash storage and a 125 to 240 MHz MIPS CPU? Yeah, it's not.

Re:Simple. (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47284993)

C became popular because it was vastly more portable and performant than its predecessors.

With the exception of Forth. :-)

And yes, extreme portability does matter when you have 100s of millions if not billions of devices that can't run anything but assembly or C.

Or Oberon. Oops, there. I said it.

Care to tell me how that's going to run on a, for example, Linksys WRT54G with only 8 or 16 MB of RAM, 2 to 4 MB of Flash storage and a 125 to 240 mhz MIPS CPU? Yeah, it's not.

Correct me if I'm wrong, but aren't Tektronix oscilloscopes still running embedded Smalltalk these days?

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285269)

With the exception of Forth. :-)

Not a systems language so far less usable than C.

Or Oberon. Oops, there. I said it.

Same problem as above.

Correct me if I'm wrong, but aren't Tektronix oscilloscopes still running embedded Smalltalk these days?

They might, but it still has the same limitations as the above and is even more niche.

I should amend my previous statement to say that they do not have the same portability and capabilities as C. I would dispute that they're as portable as C (I would love to be proven wrong on this), but even if I did accept that, they are far less capable than C.

Re:Simple. (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47285303)

Even if you don't like Forth (which is arguably vastly superior in the tiniest applications), why should Oberon be "far less useable" than C? A technical argument, please.

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285403)

Even if you don't like Forth (which is arguably vastly superior in the tiniest applications)

I don't dislike it. It's still less portable and powerful.

why should Oberon be "far less useable" than C? A technical argument, please.

That was my bad. I confused the language. Its usability would be limited by its platform support, which is smaller than C's.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285895)

Folks have a dark agenda. No use to argue with them.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285309)

There are many fallacies your post builds on, all stemming from the original premise that UNIX was built using C, which already laid down the groundwork for its popularization, leading to C++.

Java and C# are in the same venue as C and C++.

Once you have a system that can more abstractly model the hardware, you can focus on the algorithms, and have the optimizer given the freedom it needs to re-arrange the code in non-obvious ways, tailored to the execution pipeline of the target hardware environment, interleaving instructions in more optimal ways, inlining where it makes sense, unrolling where it makes sense for caching. See, if we had explored those venues instead of coping with quick fixes like C and C++, optimizations and parallel execution would be taken for granted these days, and yield far better results than what you can possibly hope to get with C and C++. As your post proves, most are stuck in the 1:1 relation with how their source code is typed in and translated to machine code. That's simply not how a more abstract model of the hardware would do it.

C and C++ are still very close to how assembly language is translated to machine code. It's 99% a 1:1 relationship between how the code is organized in source and how it is organized in the machine code.

Obviously you are not going to invest time in researching better ways when you have a hammer and some nails to do it right away. Humans do with what gets the job done there and then, and the more who use the same tools, the more you can copy and learn from others, even if it's not the optimal way.

C could have been far better at what it does if it had acknowledged it was just another form of assembly language. As for C++, you have to become a compiler to fully understand the language, or risk writing code you can't predict the behavior of.

The high societal costs of using C and C++ are staggering, but it will only be known in retrospect.

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285471)

There are many fallacies your post builds on, all stemming from the original premise that UNIX was built using C, which already laid down the groundwork for its popularization, leading to C++.

You ascribe something to me that I never stated. Of course UNIX was not built using C. C was created in order to make Unix portable. The only thing fallacious is your strawman.

Java and C# are in the same venue as C and C++.

+5 funny. If they are in the same venue, please show me you running Java or C# on an Atmel ATtiny. I won't hold my breath.

Obviously you are not going to invest time in researching better ways when you have a hammer and some nails to do it right away. Humans do with what gets the job done there and then, and the more who use the same tools, the more you can copy and learn from others, even if it's not the optimal way.

+5 funny. Half my job is programming in C# so you would be wrong again.

C and C++ are still very close to how assembly language is translated to machine code. It's 99% a 1:1 relationship in how the code is organized in source to how it is organized in code.

LOL. That hasn't been true for decades. C and C++ translate horribly to modern vector assembly language instructions. Even the best of vectorizing compilers are laughably bad. If what you said was true Intel and others wouldn't be constantly reinventing extensions to C to allow better vectorizing of the code.

C could have been far better at what it does, if it had acknowledge it was just another form of of assembly language. As for C++, you have to become a compiler to fully understand the language, or risk writing code you can't predict the behavior of.

C would be far better if lots of things were changed about it. C is a very flawed language, but it's still the best portable language around.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285721)

C and C++ translate horribly to modern vector assembly language instructions.

Most code, in fact the vast majority of code, is not suitable for and does not benefit from vector instructions. Where it is, use a language which is suitable.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285411)

languages other than C show other vulnerabilities.

Look at this example: http://armoredbarista.blogspot.de/2014/04/easter-hack-even-more-critical-bugs-in.html
They *try* to do crypto in Java, but OO in combination with exception handling gives a timing side channel which LEAKS THE KEY.

Neither OO nor exceptions are in plain C, so using a "better" language gives *additional* vulnerabilities (while it perhaps removes others, *if done right*)

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285525)

Hubris is always funny. These are the same people who will write Javascript code that has XSS flaws or will write database interfacing code that is subject to SQL injection attacks while at the same time talking about how secure, memory-safe, etc. the language they use is.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47284779)

C and C++ became popular because of UNIX and Linux, not necessarily because they were technically good solutions. At the time (1970s), C was recognized as a step backward by those in the know about modelling computing systems.

Pity they were too busy modeling computer systems and real computer science instead of actually implementing those models.

Re:Simple. (3, Insightful)

Desler (1608317) | about 2 months ago | (#47284819)

Well I'd be pretty pissed as well if my pet language was relegated to the graveyard of obscurity by a language that was usable for real work. Dennis Ritchie was a pragmatist who got shit done not some guy wanking over the greatness and purity of the language he created. People to this day are still jealous of that.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285947)

I am certainly not jealous of the Creator Of The Cyber War Domain.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285379)

Who knows what happened to those investing in the research.

Now that parallel computing has started to take central stage, you're forced to deal with the abstract modeling problem.

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285585)

Now that parallel computing has started to take central stage, you're forced to deal with the abstract modeling problem.

Nah, already been solved by things like OpenMP. It's cross-platform, cross-vendor, etc.

Simple. (0)

Anonymous Coward | about 2 months ago | (#47284661)

lol, why not just use haskell or lisp or some other weenie language where you can mathematically prove it correct? Oh, because sitting around pulling your pud doesn't get the job done.

Re:Simple. (1)

Anonymous Coward | about 2 months ago | (#47284667)

We're so impressed with your insights that we want to hire you to get a managed language like Java or .net to run on our CPU (4KB flash and 2KB ram). It will be so nice to not worry about 1970s problems any more.

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47284745)

But the entire world runs x86 with gigs of RAM and terabytes of storage!! How dare you bring reality into this!

The more things change.... (1)

sconeu (64226) | about 2 months ago | (#47284807)

vaxocentrism [catb.org] .

Re:Simple. (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47285029)

It ran on the Alto. But you really should be using Forth on that CPU. It was born for that. (Unless it's one of the dreaded 8051s, of course.)

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285149)

The Alto had at minimum 128 KB so it's not even remotely analogous. Even the most constrained Java ME profile requires 8KB just for itself.

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285155)

I was speaking of RAM of course.

Re:Simple. (1)

K. S. Kyosuke (729550) | about 2 months ago | (#47285173)

And regarding your CPU, I was speaking of Forth, of course.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285431)

It's interesting to note that people fear Java taking over their jobs. ;) Neither Java nor C# was mentioned in the post, but independent posts mention them often, so it must be a sore spot for some. They have the same flavor as C and C++ and won't qualify as a solution in any case.

Re:Simple. (1)

gnupun (752725) | about 2 months ago | (#47285081)

Use a language that does bounds checking automatically. It's not the 1970s any more.

Suppose your program accesses millions of array elements in performance sensitive areas. Bounds checking would slow down your code by a factor of 2 or more and therefore should be optional (but not non-existent, like in C).
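
One way to make the check optional in plain C (a sketch; the helper name is invented) is to hide it behind assert(), so debug builds get bounds checking and release builds compiled with -DNDEBUG pay nothing:

    #include <assert.h>
    #include <stddef.h>

    /* Bounds-checked element access; the check vanishes when NDEBUG is defined. */
    static inline double elem(const double *a, size_t len, size_t i)
    {
        assert(i < len);
        return a[i];
    }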

Re:Simple. (1)

david_thornley (598059) | about 2 months ago | (#47285297)

Sure. Use C++, use std::string and std::vector instead of C-type strings or arrays, and wherever you have brackets you substitute ".at()". Bounds checking guaranteed, and you can globally search for "[" to see if anybody's violating the rules.

Re:Simple. (0)

Anonymous Coward | about 2 months ago | (#47285439)

I want to shove my fetid pregnancy rod into your rancid .at() hole. What say you?

Re:Simple. (1)

Desler (1608317) | about 2 months ago | (#47285547)

Yes, but knowing about that would require the GP and his ilk to get better talking points. Most of them have never used C or C++ and are merely parroting random crap they hear from other people who have also likely never used them either. C and C++ are anything but perfect, but for a number of domains/platforms they are basically the best you're going to get unless you want to dive into a usually shitty, proprietary vendor language or assembly.

Bad advice (1)

HalfFlat (121672) | about 2 months ago | (#47284595)

Short of bugs in the compiler's optimizer — and we all know there have been many — the idea that "if the entire code absolutely must stay fully intact, it shouldn't be optimized" is already dangerous.

A compiler conforming to its documentation or standard isn't going to change semantics that have been guaranteed by that document. Those guarantees though are all you have: even without explicit optimization options, a compiler has a lot of freedom in how it implements those semantics. Relying on a naïve translation from a line of code to a particular, non-guaranteed assembly representation is a very brittle practice.

Functionally correct, but insecure (5, Insightful)

Smerta (1855348) | about 2 months ago | (#47284617)

The classic example of a compiler interfering with intention, opening security holes, is failure to wipe memory.

On a typical embedded system - if there is such a thing (no virtual memory, no paging, no L3 cache, no "secure memory" or vault or whatnot) - you might declare some local (stack-based) storage for plaintext, keys, etc. Then you do your business in the routine, and you return.

The problem is that even though the stack frame has been "destroyed" upon return, the contents of the stack frame are still in memory, they're just not easily accessible. But any college freshman studying computer architecture knows how to get to this memory.

So the routine is modified to wipe the local variables (e.g. array of uint8_t holding a key or whatever...) The problem is that the compiler is smart, and sees that no one reads back from the array after the wiping, so it decides that the observable behavior won't be affected if the wiping operation is elided.

By making these local variables volatile, you prevent the compiler from optimizing away the wiping operations.

The point is simply that there are plenty of ways code can be completely "correct" from a functional perspective, but nonetheless terribly insecure. And often the same source code, compiled with different optimization options, has different vulnerabilities.
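
A sketch of that wipe pattern (the names are invented; where available, explicit_bzero() or C11 Annex K's memset_s() serve the same purpose):

    #include <stddef.h>
    #include <stdint.h>

    /* Writing through a volatile-qualified pointer keeps the stores from
       being discarded as dead, even though the buffer is never read again. */
    static void secure_wipe(void *p, size_t n)
    {
        volatile uint8_t *v = (volatile uint8_t *)p;
        while (n--)
            *v++ = 0;
    }

    void do_crypto(const uint8_t *msg, size_t len)
    {
        uint8_t key[32];
        /* ... derive key, encrypt/decrypt msg ... */
        (void)msg; (void)len;

        /* A plain memset(key, 0, sizeof key) here could legally be elided;
           the volatile stores above cannot. */
        secure_wipe(key, sizeof key);
    }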

Functionally correct, but insecure (1)

Anonymous Coward | about 2 months ago | (#47284743)

Yes! Failure to wipe memory is like failure to wipe your asshole. You save some time by not doing it, but you smell like shit and people know where you have been.

Re:Functionally correct, but insecure (0)

Anonymous Coward | about 2 months ago | (#47285837)

Which is why good IDEs and error messages are so important. When integrated with the compiler, the IDE can highlight the code, log a warning, and say "hey, this doesn't seem to affect anything and will be removed", and the programmer will know there's a potential bug.

Many IDEs have their own checks to do a little of this, but there is lots of room for improvement.

Yeah, Ok, I'll say it if no one else will ... (0)

Anonymous Coward | about 2 months ago | (#47284727)

Gentoo funroll-loops dweebs.

In other news... (0)

Anonymous Coward | about 2 months ago | (#47284947)

Undefined behaviors don't behave in a defined manner.

Bad workman blames his tools etc... (0)

Anonymous Coward | about 2 months ago | (#47285145)

These moronic programs are scanning off the end of allocated memory to do what? Execute that memory if it contains random data, but not if it is detected as malicious code? Please. If the compiler is compliant to the language standard and it opens up a security hole in your crappy code when optimizing then that is your bug.
