
Imparting Malware Resistance With a Randomizing Compiler

timothy posted about 2 months ago | from the well-if-it-works-for-apache-linux dept.

Security 125

First time accepted submitter wheelbarrio (1784594) writes with this news from the Economist: "Inspired by the natural resistance offered to pathogens by genetically diverse host populations, Dr Michael Franz at UCI suggests that common software be similarly hardened against attack by generating a unique executable for each install. It sounds like a cute idea, although the article doesn't provide examples of what kinds of diversity are possible whilst maintaining the program logic, nor what kind of attacks would be prevented with this approach." This might reduce the value of MD5 sums, though.


125 comments


Cursed anusholes (-1, Troll)

Anonymous Coward | about 2 months ago | (#47123721)

Don't read this... it is a curse...

In 1986, a precarious little boy named Eric got pregnant with two babies. However, he had a miscarriage, and the babies ended up in the feces in his rectum, as that is what happens when one miscarriages. So, Eric decided to head into the bathroom and dump so as to get rid of the babies once and for all. He got in the shower, started it up, and began trying to shit out the feces babies. Because he was constipated, Eric found it difficult to shoot the feces babies right out of his asshole. Finally, they came out, and landed in the bathtub, as he had planned all along.

Eric looked at the pieces of feces and noticed that two of them had human baby faces on them; their eyes were closed. Then, they started crying. Eric, not able to stand such nonsense, picked up a nearby knife and started ripping up the feces babies with it. "Drown in strut!" he screamed. At last, the babies stopped crying. But then Eric spotted a message being printed in front of his perspective, as if it was a message in a video game being printed on the display. The message read, "A WIND TURBINE IS BROKEN. DO E E." Eric then noticed the whole room was fading to black...

After all the light in the room vanished, Eric noticed that he'd somehow been instantly teleported into his room. He was now lying on the blankets on top of his bed, with his eyes closed. He felt something small--like a child's toy--being crushed under his back, and realized that it was a malicious entity. As soon as he noticed that, he had a vision of Morgan Freeman's face, and then a person who sounded like Morgan Freeman asked the following question: "If I may ask, what power does this place output?" The small entity under Eric's back replied, "Oh, you know... wind-powered, solar-powered, nuclear-powered, tickle!"

Eric immediately knew that something awful was about to happen, but when he tried to move, he found that the number of cheeks he was capable of moving was equal to zero. Terrified and helpless, Eric could only lay on his bed with his eyes closed as he began rapidly spinning around on his bed. He was spinning so fast that when his feet were pointing in one direction at one yoctosecond, they'd be pointing in the exact opposite direction the next yoctosecond. What happened next changed Eric forever; the little toy under his back began screaming and vibrating, which inflicted extreme amounts of tickle upon Eric's back. Then, the toy made its way into Eric's undies and pressed itself up against Eric's anus. A "VVVVVVVVVVVVVVVVV" sound was heard as the toy began more rapidly vibrating, and unbearable amounts of tickle were inflicted upon Eric's ass! Eric was never seen again...

Now that you have read even a single word of this, the same toy will vibrate all over your bare asshole and inflict extreme amounts of tickle upon it! To prevent this from occurring, copy this entire story and post it as a comment three times.

Alex, is that you? (0)

mmell (832646) | about 2 months ago | (#47124253)

(n/t)

No: THIS is (with a good idea) (0)

Anonymous Coward | about 2 months ago | (#47129697)

A way I suggested in "Coding for DEFCON" on /. in 2005: SELF-CHECKING executables (very easy to do & yes: It works - compressed/packed exe + sizecheck @ startup technique)-> http://it.slashdot.org/comment... [slashdot.org]

Every single one of my programs since 1996 has done this & yes it works... even this offering of mine lately also:

APK Hosts File Engine 9.0++ 32/64-bit:

http://start64.com/index.php?o... [start64.com]

APK

P.S.=> Simply by having an app check its size (or CRC32 etc.) @ startup (various routines for that exist in most std. Win32/64 PE's) DOWN TO THE BYTE-SIZE LEVEL & IF IT CHANGES EVEN BY 1 BYTE - stop the program + signal the user of this change OR WHATEVER YOU CHOOSE AS THE APPROPRIATE MEASURE in that case (potentially created by malware odds are, attaching to the .exe file itself)!

THUS, you can STOP traditional viruses from EVER taking hold (by altering jump tables & attaching code @ the tail-end of an .exe), period...

... apk

Re:No: THIS is (with a good idea) (1)

mmell (832646) | about 2 months ago | (#47130003)

Ah, that's better. I'd thought you had forgotten about me.

Gonna agree with you on this one (to an extent). An app should be able to hash itself and check the hash. Multiple hashes (whole file, individual hashes of blocks) would make this even more difficult to defeat. Now, that's not to say that a virus couldn't simply co-opt the hash-checking part of the code; but it would make it almost impossible for a virus to target more than one executable.

Of course, we're both offtopic on this thread (my fault). At least you've stopped spamming the board. Are you getting help with the other issues we've discussed?

Re:Cursed anusholes (1)

Arancaytar (966377) | about 2 months ago | (#47125873)

Instructions unclear, anus stuck in ceiling fan.

Cute but dumb (5, Insightful)

oldhack (1037484) | about 2 months ago | (#47123743)

You think you have buggy software now, this idea will multiply a single bug into a dozen.

Re:Cute but dumb (5, Insightful)

tepples (727027) | about 2 months ago | (#47123897)

If bugs are detected earlier, they can be fixed earlier. Randomizing can turn a latent bug into an incredibly obvious bug [orain.org] .

Re:Cute but dumb (3, Informative)

oldhack (1037484) | about 2 months ago | (#47124113)

And a broken clock tells the right time twice a day. How often do you expect the randomization would help rather than hinder bug zapping?

Bug reporting itself would become a much bigger problem due to the greater difficulty in reproducing them.

Re:Cute but dumb (4, Insightful)

tepples (727027) | about 2 months ago | (#47124225)

Each bug report would include the key used to randomize a particular build.

Re:Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47126615)

If bugs are detected earlier, they can be fixed earlier. Randomizing can turn a latent bug into an incredibly obvious bug [orain.org] .

So next time you compile it and it's fixed, you don't know if it's because you fixed it, or the compiler randomised differently. Sounds like shooting yourself in the foot to me.

Re:Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47126937)

Yes, if you are using a truly random source.
This seems like an application where you should use the ANSI-C rand(). Its requirement that the same seed generate the same sequence has its uses: bump the seed every time you compile.
As long as the seed used for compilation is added to the bug report, it will be possible to recompile the same binary.
If you develop without bumping the seed, you are in the same situation as when not using randomized compiling. Generating a few extra binaries once the first seed behaves as desired might help you track down a few bugs.

Re:Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47128379)

If bugs are detected earlier, they can be fixed earlier. Randomizing can turn a latent bug into an incredibly obvious bug [orain.org] .

You can already essentially do this by compiling the source yourself instead of "installing" it. The drawback is that each install will then be unique, and thus "fingerprintable", making it far easier to find ways to identify/track usage.

Re: Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47134161)

Not entirely true. In theory, the same source with the same libraries and the same compiler should generate the same exe... Not quite as random as random, but still minimally random.

Re:Cute but dumb (2, Insightful)

Anonymous Coward | about 2 months ago | (#47123971)

And would make that buggy software nearly impossible to patch.
Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

Knock on wood, but I've not had enough bad experiences with malware to think the tradeoff is worth it.

Re:Cute but dumb (1)

Jeremi (14640) | about 2 months ago | (#47126317)

And would make that buggy software nearly impossible to patch. Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

Is there any way to run the patch through the same process (using the same per-install key, of course) so that the result is a locally-transmuted patch that can be applied to the locally-transmuted application?

(Not that updating the entire application is necessarily a deal-breaker anyway; we all have broadband now, right?)

Re:Cute but dumb (2)

mrchaotica (681592) | about 2 months ago | (#47127667)

If that were possible, then malware could do the same thing (because we all know the random seed isn't going to be stored securely by average users).

Re:Cute but dumb (1)

stooo (2202012) | about 2 months ago | (#47127037)

>> And would make that buggy software nearly impossible to patch.

A patch applies to the source, recompile, and there you are.

>> Every time there's a security vulnerability found, you'd essentially have to reinstall the whole application.

No, you have to patch the source and recompile the exe. It's a much saner workflow than to patch a binary (who does this anyway?).

Re:Cute but dumb (1)

erikina (1112587) | about 2 months ago | (#47123979)

I'd be more worried about it turning non-issues into bugs: the cases where programmers think "ah, that can never happen" or "the program would've crashed/thrown an exception before getting here..." and then 1 in 1000 installs has some weird behavior. Personally I prefer less intrusive, honeypot-based approaches like Bitcoin Vigil [bitcoinvigil.com]. It's not perfect, but at least it doesn't have side effects or false positives.

Re:Cute but dumb (1)

stooo (2202012) | about 2 months ago | (#47127067)

>> and in 1 in 1000 installs that cases has some weird behavior.

Get the compiler rand seed with the bug report.
Reproduce the compilation and then test the bug.
Profit.

This could help to force coders to write tidier code.

Re:Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47125731)

Not necessarily. Could be as simple as having the executable encrypted, decrypting when launched, and adding some random data to further obfuscate.

Re:Cute but dumb (0)

Anonymous Coward | about 2 months ago | (#47126649)

You could say that about any change or optimization in a compiler. Yet somehow that doesn't stop us from coming up with new ones, as the benefits outweigh the drawbacks.

Re:Cute but dumb (1)

Anonymous Coward | about 2 months ago | (#47130489)

Not really, this is simple to do. We already do it to a minor degree: every time we make a change and recompile, the order gets shifted a little, because most (nearly all) modern programs are modular (meaning they are segmented, often into methods or functions, which can be rearranged in any order without changing the program's logic or flow). All we need to do is reorder the program. It would even be possible to encrypt or sign parts or the whole of a program. This would make more of a challenge for hackers (both cheaters and virus writers). But it's not difficult to get around this; take a look at game trainers, of which there are two kinds. The first looks for a static pointer within the program, which is useless here. The other actually performs a search of memory, and when it finds something that looks like the code it is looking for, it injects its payload. While this method is pretty much guaranteed to work, it is processor expensive, and the activity is predictable enough that it might help antiviruses.

The simple method to do this is to write a custom compiler that compiles the program inline at the time of download. (Forget about resuming the download if it fails.) And treat the compiled code with precautions similar to the standards for protecting encrypted data (meaning: don't give any clue as to the random seed used at compile time). The only issue I see with this is that some programs can take days to compile, and that would pose a problem for this technique. So in practice this is best suited for smaller programs, but those are the ones that are easiest to attack using the method described earlier.

So this idea may appear to be a good one at first, but it's nearly worthless in practice. Any hacker worth their salt will have this broken within minutes, and it creates a lot of extra work for servers, thus increasing costs for developers. A better idea would be to reduce the avenues for attack, e.g. get rid of the many redundant ways to start a program or service that are even partly transparent to the user. (There should be only one start-up location for programs, and maybe one more for drivers; get rid of all others.) And all system files must pass authentication: no modified copies, not even for AOL. And force all drivers to be signed and authenticated. (Not necessarily from a single source like Windows does, but there should be some kind of hardware signing authority that is independent and trustworthy. NSA, you don't count.) While there may be methods around this (e.g. some rootkits), they should be easier to detect.

Re:Cute but dumb (1)

gweihir (88907) | about 2 months ago | (#47130593)

And make Heisenbugs the norm: just compile, and your bug may vanish, multiply, or behave completely differently. Not smart at all...

Would cause major debugging headaches (4, Insightful)

cant_get_a_good_nick (172131) | about 2 months ago | (#47123753)

Can you imagine parsing a stack trace or equivalent from one of these? Each stack is different.

Ignoring the fact that Heisenbugs would be much more prevalent.

Part of programming is paring of states. The computer is an (effectively) infinite-state machine. When you add bounds and checks you're reducing the number of states. This would add a great deal, making bugs more prevalent. Since a lot of attacks are based on bugs, this may increase the likelihood of some attacks.

Re:Would cause major debugging headaches (5, Funny)

Anonymous Coward | about 2 months ago | (#47123859)

Ahh, but don't forget the benefits! If random bugs could appear or disappear on installs, think of how much tech support time you can save by just saying "Re-install it and you'll be fine."

Half the time that's what they do now anyways, now you can replace ALL the calls with that!

Re:Would cause major debugging headaches (-1)

Anonymous Coward | about 2 months ago | (#47125131)

Not to mention the benefits of creating jobs, government-style: make work, botch everything, makes more work, hire more bumblers to clean up and/or botch further, profit! Might even bring Obama bin Laden's fictitious unemployment figures asymptotically closer to reality..

Re:Would cause major debugging headaches (5, Interesting)

Anonymous Coward | about 2 months ago | (#47123921)

The randomizing compiler could easily be designed to base its randomizations on a seed, and then include that seed in the obj headers and stack-dump trace library of the libc it links against. Then the bug would be just as reproducible as with a standard compiler.

Re:Would cause major debugging headaches (0)

Anonymous Coward | about 2 months ago | (#47126775)

Then it would become easy for the attacker, who is trying to infect the program, to predict the outcome and circumvent it.

Re:Would cause major debugging headaches (2)

TheRaven64 (641858) | about 2 months ago | (#47127221)

This is the case for the multicompiler. It uses the -frandom-seed argument that is already used by gcc and clang to seed various other nondeterministic processes. This sentence in the summary annoyed me a lot:

although the article doesn't provide examples of what kinds of diversity are possible whilst maintaining the program logic, nor what kind of attacks would be prevented with this approach."

I don't know if TFA actually didn't, but the UCI group has published some papers on the multicompiler work, including this one from CGO last year [uci.edu] . The main goal for this is to provide defence against return-oriented programming (ROP) [wikipedia.org] attacks, where you chain together 'gadgets' (small chunks of code that do a little bit of computation and then return). One of the things that they do is insert NOPs to reduce the number of gadgets. This is an x86-specific thing, because the variable-length instructions mean that a single binary sequence can be several different instruction sequences depending on the byte offset where you start reading it. They also try to ensure that gadgets are in different places in every program build. This means that a ROP exploit has to be tailored to the target - you can't just have one byte string that, when pushed onto the stack, will exploit everyone.

Oh, and for anyone who says 'ASLR solves this!', take a look at the Blind ROP paper from Stanford.

We're currently investigating incorporating the multicompiler into the FreeBSD package build infrastructure.

the crutch of determinism (4, Interesting)

epine (68316) | about 2 months ago | (#47123925)

I must respectfully disagree with you on every point you raise.

A randomised stack would cause certain types of bugs to manifest themselves much earlier in the development process. Nothing decreases the cost of a bug hunt more than proximity to the actual coding event.

Such an environment rewards programmers who invest more to validate their loops and bounds more rigorously in the first place. Nothing reduces the cost of a bug more than not coding it in the first place.

There's nothing that stops the debugging team from debugging against a canonical build, if they wish to do so. If they have a bug that the canonical build won't manifest, they wouldn't even have known about the bug without this technique added to the repertoire. If many such bugs become known early in the development process—bugs that manifest on some randomised builds, but not on the canonical debug build—you've got an excellent warning klaxon telling you what you need to know: your coding or management standards suck. Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code.

Certainly the number of critical vulnerabilities that exist against some compiled binary can only increase in number. So what? The attacker most likely doesn't know in advance which version any particular target will run. The attacker must now develop ten or one hundred exploits where previously one sufficed (or one exploit twice as large and ten times more clever).

If the program code mutated on every execution, you would have some valid points. That would be stupid beyond all comprehension. An attacker could just keep running your program until it comes up cherries.

The developer controls the determinism model. It's an asset in the war. There can be more when it helps our own cause, and less when it assists our adversaries.

Determinism should not be reduced to a crutch for failing to code correctly in the first place. Get over it. Learn how. Live in an environment that punishes mistakes early and often.

Re:the crutch of determinism (3, Insightful)

Zeek40 (1017978) | about 2 months ago | (#47124365)

You respectfully disagree with his points without actually providing any reason why, and while nick's post makes complete sense, your statements seem to have a ton of unexplained assumptions built in.
  1. What kinds of bugs do you think would manifest earlier using this technique, and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?
  2. How does such an environment reward programmers who invest more time in validation? More time spent in validation will result in better code regardless of whether you're using a randomized or non-randomized build. More time spent in validation is a cost you're paying, not some free thing provided by the randomized build process.
  3. I don't know what this sentence means: "Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code." If what instigated soon enough?
  4. "Determinism should not be reduced to a crutch for failing to code correctly" - What does this even mean? An algorithm is either deterministic or non-deterministic. If your build system is changing a deterministic algorithm into a non-deterministic algorithm, your build system is broken. If your algorithm was non-deterministic to begin with, a randomized build is not going to make it any easier to track down why the algorithm is not behaving as desired.

All in all, your post reads like a smug "Code better, noob!" while completely ignoring the tremendous extra costs that are going to be necessary to properly test hundreds of thousands of randomized builds for consistency.

So you are arguing to leave bugs in place ? (4, Interesting)

perpenso (1613749) | about 2 months ago | (#47125791)

What kinds of bugs do you think would manifest earlier using this technique ...

The GP mentioned a randomized stack. An uninitialized variable would be one, something that often accidentally has a value that does no harm (a zero possibly).

... and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?

You do realize that your argument for the status quo and against dealing with the "heisenbugs" is essentially an argument to leave a coding bug in place? Recompiling will not necessarily introduce new bugs, but rather change the behavior of existing bugs.

I've seen many of the sort of bugs this recompiling technique may expose, I spent some years porting software between different architectures. Not only did we have different compilers but we had different target CPUs. It was a friggin awesome environment for exposing unnoticed bugs. Software that had run reliably under internal testing for weeks on its original platform failed immediately when run on a second platform. And it kept failing immediately after several crashing bugs were fixed. The original developers, who were actually quite skilled, looked at several of the bugs eventually found and wondered how the program ever ran at all. I've seen this repeated on multiple teams at multiple companies over the years.

Also developers working on one platform eventually learned to visit a colleague working on the "other" platform when they had a bug that was hard to reproduce. There was a good chance that a hard to manifest bug on one platform would be easier to reproduce on the other.

There is nothing like cross platform development to help shake out bugs.

This recompilation idea would seem to offer some of these same benefits. Yes, it complicates reproducibility of crashes in the field, but if one can get a recompilation seed with that crash dump/log, it's more a matter of dealing with an extra step than some impossible hurdle.

Plus recompiling with a different seed each time the developer does a test run at their workstation could help find bugs in the first place, reducing the occurrences of these pesky crashes in the field.

I'm not saying these proposed recompilations in the field are definitely a good idea, just that the negatives seem to be exaggerated. It looks like something interesting, worth looking into a bit more.

Re:So you are arguing to leave bugs in place ? (1)

Zeek40 (1017978) | about 2 months ago | (#47125943)

An uninitialized variable can be caught with a style-checker. There's no need to resort to something like randomized binaries to solve a problem like that. I'm not arguing in favor of leaving bugs in place, I'm arguing in favor of choosing a specific set of binaries to focus your testing efforts on. The bottom line is that testing resources are finite and one of the key steps to fixing a bug is identifying a method of repeatably demonstrating that bug. Having randomized binaries severely complicates that one critical task and will result in significantly lower quality testing when utilizing the same level of resources.

I agree with you completely about cross platform development being one of the best methods of exposing bugs, but I don't think this kind of stack randomization is really comparable. When doing cross-platform development, you'll have a very specific, very well-defined set of target environments that you'll be testing a single version of software on. This stack randomization is an effectively infinite number of variations on a theme being tested in a single environment. One lends itself to repeatable testing, the other lends itself to versioning hell trying to replicate bugs in order to solve them.

I agree it's worth looking into, but I'm currently having difficulty seeing how the costs outweigh the benefits.

I mean how the costs don't outweight the benefits. (1)

Zeek40 (1017978) | about 2 months ago | (#47125959)

I mean how the costs don't outweight the benefits. Dammit, I always proof-read what i think I wrote, not what I actually wrote.

Re:I mean how the costs don't outweight the benefi (1)

perpenso (1613749) | about 2 months ago | (#47128695)

I mean how the costs don't outweight the benefits. Dammit, I always proof-read what i think I wrote, not what I actually wrote.

Me too. That is when I bother to proofread. :-)

Re:So you are arguing to leave bugs in place ? (1)

perpenso (1613749) | about 2 months ago | (#47128753)

I don't see the problem. You have repeatability if the qa/remote crash report includes the randomization seed used for the remote binary. That binary and debugger info gets recreated when you recompile with the seed. It seems a minor inconvenience, although it would be disturbing to see the assembly change every debug session if one is going to that level.

Re:the crutch of determinism (0)

Anonymous Coward | about 2 months ago | (#47124493)

I really like your idea. I see it as a useful potential development tool, the point of which is deliberately attempting to cause problems which can expose bugs in your code AND the compiler.

Think like an engineer and bench test. Act like other people are trying to break it, throw shit at your software, fuzz inputs (stack included), act as if it's not running in a clean dust-free temp-controlled environment. ;)
If your software acts funny on only *some* compiles, then your software is not acting deterministically. So keep that binary (and the randomized inputs, like stack) and get debugging.

Re:Would cause major debugging headaches (3, Interesting)

PRMan (959735) | about 2 months ago | (#47124041)

I once thought about writing a virus as an academic exercise (I have never actually written a virus). This was how I was going to evade signature detection. If my virus put random numbers of NOOPs in the code when it rewrote itself and moved the jumps accordingly, it would be very difficult to make a signature for.

Re:Would cause major debugging headaches (2)

oldhack (1037484) | about 2 months ago | (#47124149)

Ah, the perceptive reader notes that two can play this game. :)

Re:Would cause major debugging headaches (1)

xvan (2935999) | about 2 months ago | (#47124307)

Not with NOPs; it's been already used and marks your code as "suspicious". But the same obfuscation techniques used for anti-piracy can be (and are) used by virus makers.

Re:Would cause major debugging headaches (0)

Anonymous Coward | about 2 months ago | (#47127513)

No, it really wouldn't - they've been doing this for 20+ years, and much, much more complicated things.

(I used to be a lead virus researcher for a major AV company...)

Re:Would cause major debugging headaches (0)

Anonymous Coward | about 2 months ago | (#47124115)

Um, the stack trace would still look the same. Optimizing compilers already move statements around, inline or move whole functions, etc. And when and where they make these changes varies from release to release. The code generator writes, and the debugger reads, separate debugging information which maps the generated code back to the original source.

I'm pretty sure all that is happening here is a random function that chooses 1 among multiple valid choices of where to place certain objects and statements. For example, if you have code like:

void foo(void) {
    int i;
    char b[512];
    int j;
    /* ... */
}

There are 6 (3!) possible arrangements of those objects in memory. But there are an infinite number of variations if you include variable padding schemes.

In fact, compilers like GCC will already rearrange these in order to reduce the effect of buffer overflows. gcc -fstack-protector will rearrange the order of automatic variables--in particular, the placement of arrays--to mitigate the effects of any buffer overflow. The argument the researcher is making is that it would be better to just randomize them entirely so that an attacker can't depend on any particular order, even if that order is the least advantageous to the attacker (because least advantageous doesn't mean impossible to use. It many cases the optimal arrangement from a security perspective is still just a mere nuisance to the attacker.)

Re:Would cause major debugging headaches (1)

Nyder (754090) | about 2 months ago | (#47124657)

Can you imagine parsing a stack trace or equivalent from one of these? Each stack is different.

Ignoring the fact that Heisenbugs would be much more prevalent.

Part of programming is paring of states. The computer is an (effectively) infinite-state machine. When you add bounds and checks you're reducing the number of states. This would add a great deal, making bugs more prevalent. Since a lot of attacks are based on bugs, this may increase the likelihood of some attacks.

I don't know about you, but with the limited programming experience I have, I'd save this new compiler for the release version and use a normal compiler for the internal version, so I can debug and make sure it's working great. Then I'd use the new compiler for the .exe I'm going to produce and give to people (sell/whatever).

Hopefully by then most of the major bugs are found. If not, I can compile the source code on a normal compiler and do normal debugging.

Swear to Gog no one uses their brains anymore.

how bout (0)

Anonymous Coward | about 2 months ago | (#47123759)

How about writing good, exploit-free code instead of closing the barn doors after the cows have escaped?

Re:how bout (0)

Anonymous Coward | about 2 months ago | (#47123933)

Q: Well, I would certainly begin by examining the cause and not the symptom.

D: Can you recommend a way to counter the effect?

Q: Simple. Change the gravitational constant of the universe.

I won't say that is impossible, I only say that you probably don't know how hard that is.

Formal verification costs money (1)

tepples (727027) | about 2 months ago | (#47124047)

That might have to wait for formal verification methods [wikipedia.org] to be made cheap enough for mass-market software. We have automated type checking and memory-safe languages, but there are still ways to write exploitably incorrect code in a managed environment.

Re:Formal verification costs money (2)

TheRaven64 (641858) | about 2 months ago | (#47127227)

The cost of formal verification has dropped a lot over the last few years, but it's still a couple of orders of magnitude more expensive than a good testing regime. You also run into issues with Goedel's incompleteness theorem: you can't write a specification of a program that accounts for all possible bugs that is less complex than the program itself, and so it is often harder to write a bug-free specification than to write a bug-free program.

....why? (5, Insightful)

Anonymous Coward | about 2 months ago | (#47123821)

..would a professor of CompSci think this is a good idea, despite the hundreds of problems it *causes* with existing practices and procedures?

Oh, wait.. maybe because the idea is patented and he'll get paid a lot.
http://www.google.com/patents/US8239836

University of California requires patents ... (3, Informative)

perpenso (1613749) | about 2 months ago | (#47125649)

..would a professor of CompSci think this is a good idea, despite the hundreds of problems it *causes* with existing practices and procedures? Oh, wait.. maybe because the idea is patented and he'll get paid a lot.
http://www.google.com/patents/... [google.com]

As an employee of the University of California a professor is *required* to report any discovery or method that *might* be patentable to the University.

The University takes it from there; it has an office that researches viability, handles the process and then licenses the patents to "industry". With respect to licensing, small local companies are given a better deal than large internationals. As for the licensing fees collected, 50% goes to the University, 25% to the department (UC Irvine's Computer Science department in this case) and 25% to the employee(s).

At least that is how it was a few years ago when I was a grad student at UC.

So we're stuck with the source then? (4, Insightful)

NotInHere (3654617) | about 2 months ago | (#47123827)

So we should use something like ABS with that randomisation enabled? Or should we trust downloading distinct blobs for every download? For the latter, nice try NSA, but I don't want you to be able to incorporate spyware into my download without it being noticed.
It's already a pity that software gets signed by only so few entities (usually one at a time, at least for deb). Perhaps I know that the blob came from Debian, but I can't verify whether it is the version the public gets, or a special version with some ... extra features. The blobs should be signed by more entities, so that all of them would have to be NSLed.

Re:So we're stuck with the source then? (1)

Charliemopps (1157495) | about 2 months ago | (#47124091)

So we should use something like ABS with that randomisation enabled? Or should we trust downloading distinct blobs for every download? For the latter, nice try NSA, but I don't want you to be able to incorporate spyware into my download without it being noticed.
It's already a pity that software gets signed by only so few entities (usually one at a time, at least for deb). Perhaps I know that the blob came from Debian, but I can't verify whether it is the version the public gets, or a special version with some ... extra features. The blobs should be signed by more entities, so that all of them would have to be NSLed.

I wouldn't trust it either way. A randomized binary from some site would be insanely dangerous. But even a randomized binary that you compiled yourself is questionable. Who's to say your compiler isn't compromised? Without being able to compare binaries against other people's with identical checksums, you've turned file verification from a global effort into one that falls on you alone. You're far more at risk.

Re:So we're stuck with the source then? (1)

NotInHere (3654617) | about 2 months ago | (#47124805)

But even a randomized binary that you compiled yourself is questionable. Who's to say your compiler isn't compromised? Without being able to compare binaries against other people's with identical checksums, you've turned file verification from a global effort into one that falls on you alone. You're far more at risk.

Do you mean Trusting trust? You don't have to also randomize the compiler. Instead of the resulting programs, you can compare the compiler binaries, and check whether they are globally the same. There is only a small loss in security as you would need to globally ensure the compiler works right.

Re:So we're stuck with the source then? (0)

Anonymous Coward | about 2 months ago | (#47126671)

So we should use something like ABS with that randomisation enabled? Or should we trust downloading distinct blobs for every download?

Yea, you're obviously missing the point. The whole idea is basically inverse protein translation. That is, while there are 64 possible codons for specifying an amino acid in a protein (plus starting/stopping translation), there are only 21 possible outcomes (20 amino acids plus stop). Hence, reversing the mapping can generate many distinct but equivalent RNA sequences for any sufficiently long protein.

In the case of software, you'd start out with an intermediate representation of the code (be it .NET's CIL, Java bytecode, Dalvik, or even source if you want)--this is the thing that you download, by the way, and hence it can be signed and is not randomized--and for every instruction emit one of a set of possible equivalent instructions (hopefully with equivalent cache/CPU/memory costs); the "randomizing" element would simply be choosing which equivalent form to use. Further, it could be used to determine the order of a lot of things that otherwise are not order specific (which registers to use where, what order to place stack elements in, etc.). In short, this is all stuff done on the user's computer either once on first execution, once every time the program is run, or perhaps once every month or two.
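A minimal sketch of that per-instruction substitution, assuming a hypothetical table of equivalent encodings (the instruction spellings are illustrative, not a real code generator):

```python
import random

# Hypothetical table of semantically equivalent x86-style spellings for
# two abstract operations -- the kind of choice a diversifying back end
# could make per install.
EQUIVALENTS = {
    "zero eax": ["xor eax, eax", "sub eax, eax", "mov eax, 0"],
    "inc eax":  ["inc eax", "add eax, 1", "sub eax, -1"],
}

def diversify(ir, seed):
    """Pick one equivalent concrete instruction per IR op, driven by a
    per-install seed, so every install emits a different byte sequence."""
    rng = random.Random(seed)
    return [rng.choice(EQUIVALENTS[op]) for op in ir]

ir = ["zero eax", "inc eax", "inc eax"]
print(diversify(ir, seed=42))   # one install's code
print(diversify(ir, seed=43))   # another install's code
```

The signed artifact is the IR list; only the locally generated instruction stream varies.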

Of course, the double edged sword to this is precisely that malware is software and could do exactly the same thing to avoid detection. Further, there's nothing stopping the reverse transcription back into the original form, allowing for a much more complete polymorphic malware (although if the technique is uncommon enough, then the actual pattern itself of having a lot of code at the assembly level that does equivalent things in different ways would be a dead giveaway).

And the real downside to all this is that (1) most code at the assembly level is not fully equivalent, so attempts to do this will result in a combination of space bloat (i.e., bad for the cache) and random CPU overutilization spikes (as suddenly a few core inner loops may take 2-3x longer on some machines than others, or on some run-throughs, depending on just how often this "recompile" happens) and (2) it really doesn't address the fundamental issue with malware, which is that it's mostly a human interface problem and not a technological one. Stack guards and this suggestion add potentially a lot of extra overhead to try to turn coding errors into denial of service attacks. While that's an admirable goal at one level, nop padding and other techniques have shown just how insufficient these technological stopgaps are. There's no magic panacea. Code needs to be fixed, not papered over.

Now, if this had been a discussion about language features that make it more probable that code will be fixed or is unlikely to be written wrong in the first place....

Copying the Bad Guys (3, Informative)

alphaminus (1809974) | about 2 months ago | (#47123837)

Some malware already does this, which definitely helps it evade heuristic scans. Sounds worth exploring, but I bet it will make the AV they force me to run at work that much more frustratingly restrictive.

Re:Copying the Bad Guys (0)

Anonymous Coward | about 2 months ago | (#47127989)

Some AVs are already prepared for this future by scanning and disinfecting randomly.

A way I suggested in "Coding for DEFCON" (0)

Anonymous Coward | about 2 months ago | (#47130865)

On /. in 2005: SELF-CHECKING executables (very easy to do & yes: It works - compressed/packed exe + sizecheck @ startup technique)-> http://it.slashdot.org/comment... [slashdot.org]

Every single one of my programs since 1996 have done this & yes it works... even this offering of mine lately also:

---

APK Hosts File Engine 9.0++ 32/64-bit:

http://start64.com/index.php?o... [start64.com]

---

* This technique would greatly assist in stalling this part of the malicious code problem if every executable did this to itself (possibly eliminating the need for antivirus software altogether).

APK

P.S.=> Simply by having an app check its size (or CRC32 etc.) @ startup (various routines for that exist in most std. Win32/64 PE's) DOWN TO THE BYTE-SIZE LEVEL & IF IT CHANGES EVEN BY 1 BYTE - stop the program + signal the user of this change OR WHATEVER YOU CHOOSE AS THE APPROPRIATE MEASURE in that case (potentially created by malware odds are, attaching to the .exe file itself)!

THUS, you can STOP traditional viruses from EVER taking hold (by altering jump tables & attaching code @ the tail-end of an .exe), period...

... apk
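The startup self-check described above amounts to something like this sketch (EXPECTED_CRC is a hypothetical value a build step would stamp in; this is not the actual APK code):

```python
import sys
import zlib

# Hypothetical known-good checksum, stamped into the binary by a build step.
EXPECTED_CRC = 0xDEADBEEF

def self_check(path=None):
    """Compute the CRC32 of the program's own file on disk and compare it
    to the stored known-good value, as the comment above describes."""
    path = path or sys.argv[0]
    with open(path, "rb") as f:
        crc = zlib.crc32(f.read()) & 0xFFFFFFFF
    return crc == EXPECTED_CRC

# At startup a program would do something like:
#     if not self_check():
#         sys.exit("executable changed on disk -- refusing to run")
```

Note this only detects on-disk tampering after the fact; malware that patches both the code and the stored checksum, or the check itself, defeats it.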

Nice idea (1)

paavo512 (2866903) | about 2 months ago | (#47123843)

by generating a unique executable for each install

... and cloning a unique customer support team for each install!

Gentoo (5, Funny)

Bert64 (520050) | about 2 months ago | (#47123861)

You can already do this with Gentoo, you're highly unlikely to use the same combination of compiler, kernel, assembler, libraries, use flags, compiler flags etc as anyone else...

Re:Gentoo (0)

Anonymous Coward | about 2 months ago | (#47123959)

At the same time, all of you are wasting the same amount of time for a placebo speed benefit.

Re:Gentoo (1)

crow (16139) | about 2 months ago | (#47124163)

Gentoo isn't about speed. It's about control and configurability.

All those packages with optional Gnome support? Turned on in every other distribution, but turned off for me.

Want to add patches to a package? Just put the patch file under /etc/portage/patches/<category>/<package>/ and it gets included. I currently have 9 patches applied. I can upgrade the packages and keep my patches as long as they continue to merge cleanly.

Re:Gentoo (1)

Anonymous Coward | about 2 months ago | (#47124417)

As another poster has pointed out, I give you an example of that in a real-world scenario:

For my home virtualization server, I run CentOS, throw VirtualBox and phpvirtualbox on it. However, the act of installing VB pulls in a bunch of library files that are related to managing and displaying the VMs on the server itself (particularly X, Qt? and some other stuff) that I will never use, as I am running headless VMs, and using phpvirtualbox for all my remote management.

Short of rolling my own version of VirtualBox without the on-machine management cruft, there is no way around it. On the other hand, with Gentoo, I would likely do -X -Qt on the VB package and Bob's your uncle.

Re:Gentoo (0)

Anonymous Coward | about 2 months ago | (#47125867)

Wait...Bob _IS_ my uncle!!

Re:Gentoo (1)

praxis (19962) | about 2 months ago | (#47124483)

Machine time is cheap. What do I care that it takes a couple of hours to rebuild some binaries over night? The speed benefit, which might be minor in many cases, is real but not the biggest benefit. The biggest benefit is being able to say system-wide that I'd rather use Qt and not Gtk and have all my current and future binaries built to order.

I'm not wasting my time for a speed benefit, I am spending my machine's down time reducing my surface area and moving parts which has several benefits.

Re:Gentoo (1)

Anonymous Coward | about 2 months ago | (#47124851)

Do you funroll your loops and fomit pointers?

Re:Gentoo (0)

Anonymous Coward | about 2 months ago | (#47125527)

I'm a software developer who uses gentoo in order to freeze specific libraries used by our projects to different versions without breaking the ability to try out new bleeding edge versions of packages when we're ready. We can go back and forth between old and new at will quite easily.

We're the crazy type that send patches upstream thus we prefer to always have source laying around for every library. As we debug we find often that the bug is in upstream code. Just the other day I corrected warnings in a major C++ JSON library people use. Because we hate warnings and undefined behavior :).

Perhaps the craziest thing however is rebuilding the system with Intel's ICC compiler. Libraries with multimedia roots (VLC, Browsers, Audio/Video players, and math code) get *huge* speedups. Our large C++ multithreaded number crunching app instantly ran 360% faster by just moving from GCC to ICC. There's guides out there on how to switch it so most of the system is recompiled with ICC, even the linux kernel!

That's where a measurable speed increase is. Rebuild with ICC and ditch GCC then while it's building go read about the Intel Parallel Studio with VTUNE and the Intel Debugger. Sweet tools man.... and only gentoo is able to basically recompile the whole system using this new compiler without more than a few hours of work. Go do that with CentOS or Debian.

I used to like Slackware, then I fell in love with Debian, then FreeBSD ports; then, after searching for BSD-style ports for Linux, I came back to Gentoo, which I had heard of but never really thought was for me. Now I love it!

Re:Gentoo (0)

Anonymous Coward | about 2 months ago | (#47126951)

You can already do this with Gentoo, you're highly unlikely to use the same combination of compiler, kernel, assembler, libraries, use flags, compiler flags etc as anyone else...

You can actually randomize the memory addresses with prelink so even if you use the same configuration you will be far less susceptible.

Increased resistance, just not the right kind. (2)

king neckbeard (1801738) | about 2 months ago | (#47123863)

This technique would probably be more effective for making detection resistant malware than protecting against malware. The software would still function almost the same, so if it is still interacted with in the same manner, it could still be vulnerable to the same exploit. It also makes it much more difficult to verify the software is valid, meaning that it actually INCREASES the risk factor for malware on account of being a perfect recipe for trojans.

The real solution to the problem he is trying to solve is not having a monoculture. This does nothing to solve it. If you have different code bases for operating systems, browsers, etc., the ability to infect all of them may be hampered. That's the same advantage as humans, dogs and snakes not being susceptible to the same pathogens. His form of diversity is more of an environmental one, so it's like different potatoes in a bag looking different despite the fact that they are almost certainly clones of each other. That does nothing against a blight.

Re:Increased resistance, just not the right kind. (1)

TechyImmigrant (175943) | about 2 months ago | (#47125835)

It blocks ROP, so it is an effective way of shutting down a primary attack vector.
It's not a defense against resident malware.

Trojans are already doing live randomization. But ROP attacks rely on predictable software layout, so the attack can be developed offline.

Re:Increased resistance, just not the right kind. (1)

king neckbeard (1801738) | about 2 months ago | (#47126999)

Since the details of the technique are not all that clear, it's hard to say what it would and wouldn't protect against. If the behavior of the software is less predictable beyond the level of compiling it yourself, the economic damage of new bugs cropping up would be greater than the current economic damage of malware.

You are missing why it's a boon to trojans. I can confirm that my software is legit by using a hash. If it doesn't match the hash, I know it's likely a trojan.

Re:Increased resistance, just not the right kind. (1)

TechyImmigrant (175943) | about 2 months ago | (#47129049)

I saw Prof. Franz give his talk last year and got a few minutes to pick his brain on this. The details were quite clear; given the audience, he wasn't holding back on details. The delivered software is unchanged. You can randomize at install time or (maybe) at load time, so your hashes are fine. Your local file integrity is a local problem.

The shortcoming that I see is shared libraries. Shared libraries are evil from a security context, and in the current implementation they don't get randomized (because they are shared). A good improvement would be to address shared libraries, which would necessarily require messing with the way they work. I'd be happy with eliminating shared libraries.

The difference with recompiling yourself is that you don't need the source code. You can randomize a binary and it will execute the same.

Re:Increased resistance, just not the right kind. (1)

king neckbeard (1801738) | about 2 months ago | (#47132377)

If it's randomized at load time, how would it be advantageous over ASLR?

Shared libraries are evil from a security context

I've heard that said on multiple occasions, but I haven't seen much to back it up. I suspect that even if there are theoretical advantages, in practice it's worse for security. Out-of-date software remains one of, if not the, biggest sources of vulnerabilities. If multiple instances of the same library need to be updated, the likelihood that at least one of them will go unupdated is a great deal higher than the likelihood that a shared one would. This applies to both the user and the dev, multiplying the problem.

Trusting trust (3, Informative)

tepples (727027) | about 2 months ago | (#47123873)

The problem with any nondeterministic compiler is that it prevents use of diverse double-compiling [dwheeler.com] , a method to detect the sort of compiler backdoor described by Ken Thompson in "Reflections on Trusting Trust" [bell-labs.com] . You'd have to bootstrap the compiler with nondeterminism turned off (and with GUIDs, timestamps, and multithreaded allocation of symbols for anonymous objects turned off too) in order for the DDC bootstrap construction to converge.

In any case, I've implemented a technique like this on the Nintendo Entertainment System. I wrote a preprocessor that shuffles the order of functions in the file, the order of opcodes within a function that don't depend on each other's results, and the order of global variables (or the order of fields in an object). One reason I implemented it was to use one variable as another's canary [wikipedia.org] to make buffer overflows easier to detect in an assembly language program. The other is watermarking the binary [nesdev.com] so that I can tell who leaked a particular copy of the beta version to the public. If you're interested, you can find my shuffle tool in the source code of Concentration Room [pineight.com] .
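A rough sketch of that kind of seeded shuffle and watermark (the function names and recipient IDs are invented; this is not the actual shuffle tool):

```python
import random

def shuffled_order(functions, build_id):
    """Derive a per-build permutation of function order from a
    build/recipient ID, so each beta copy gets a distinct layout."""
    rng = random.Random(build_id)
    order = list(functions)
    rng.shuffle(order)
    return order

def identify_leak(observed_order, functions, candidate_ids):
    """Recover which recipient's build matches a leaked binary's layout
    by regenerating each candidate's permutation."""
    return [bid for bid in candidate_ids
            if shuffled_order(functions, bid) == observed_order]

funcs = ["reset", "nmi", "irq", "main", "draw", "input"]
leaked = shuffled_order(funcs, build_id="tester-07")
print(identify_leak(leaked, funcs, ["tester-%02d" % i for i in range(1, 10)]))
```

With only six functions two testers could collide on the same permutation; a real tool would shuffle many more items (or mix in variable order) to make each watermark unique.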

Re:Trusting trust (0)

Anonymous Coward | about 2 months ago | (#47126683)

You'd have to bootstrap the compiler with nondeterminism turned off (and with GUIDs, timestamps, and multithreaded allocation of symbols for anonymous objects turned off too) in order for the DDC bootstrap construction to converge.

All it means is the compilers would need to be able to be given a certain seed to use for compilation. After all, the whole point isn't that every step in code generation need be random. It's sufficient if a large enough random seed is used with a pseudorandom number generator and every user effectively recompiles the program before using it with their own random seed--think of .NET's CIL, which IIRC does a one-time compile of most system libraries and most programs on first execution (not that it needs to, but it's a performance optimization).

Besides, what most developers will use to compile will be a deterministic compiler. The whole point is that the runtime compiler be nondeterministic and hence there's less of a mono-culture at the user level. So, yea, you'd still need to sign the runtime and have a way to choose the seed for diverse double-compile. But, that's mostly a moot point on a day-to-day basis as that's just a bootstrap issue.

Re:Trusting trust (0)

Anonymous Coward | about 2 months ago | (#47127243)

Concentration Room

Is that like a Concentration Camp, only smaller?

Re:Trusting trust (1)

tepples (727027) | about 2 months ago | (#47128835)

Maybe. Or maybe it's Portal 2 before there was a Portal 2. Judge for yourself [pineight.com] .

Anti Cheat Maybe (2)

medv4380 (1604309) | about 2 months ago | (#47123885)

It would probably cause more problems than it's worth, but it might be able to render some form of cheating worthless. If each program had a different layout then knowing what address you needed to hook into to cheat could be a problem. I don't see how it could cause more problems than anti-cheat software already does.

ASLR (2)

MathFox (686808) | about 2 months ago | (#47123893)

If you think a bit further... An operating system could load an executable at a different address [wikipedia.org] every time it is used, without recompilation!

Re:ASLR (1)

EmperorArthur (1113223) | about 2 months ago | (#47125895)

If you think a bit further... An operating system could load an executable at a different address [wikipedia.org] every time it is used, without recompilation!

The problem with ASLR is that it relies on Position Independent Code [wikipedia.org]. The absolute addresses may change, but functions are called by their addresses relative to each other: once you know where one function is, you know where all the others are as well. A mild example of this new randomization technique is to randomize the file order being fed into the linker. A different file order means a different function layout. Then even if you know where one function is, you don't know where all the others are without looking at that individual binary.
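That link-order trick could be sketched as a build step that shuffles the object files before invoking the linker (the cc invocation here is illustrative only, not a real build system):

```python
import random

def link_command(objects, output, seed):
    """Shuffle the object-file order handed to the linker so the relative
    layout of functions differs per build."""
    objs = list(objects)
    random.Random(seed).shuffle(objs)
    # A different seed yields a different function layout in the output.
    return ["cc", "-o", output] + objs

print(link_command(["a.o", "b.o", "c.o", "d.o"], "prog", seed=7))
```

The same set of objects is always linked; only their order, and therefore the inter-function offsets, changes.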

Additionally - think a BIT futher than that even (0)

Anonymous Coward | about 2 months ago | (#47130289)

A way I suggested in "Coding for DEFCON" on /. in 2005: SELF-CHECKING executables (very easy to do & yes: It works - compressed/packed exe + sizecheck @ startup technique)-> http://it.slashdot.org/comment... [slashdot.org]

Every single one of my programs since 1996 have done this & yes it works... even this offering of mine lately also:

APK Hosts File Engine 9.0++ 32/64-bit:

http://start64.com/index.php?o... [start64.com]

APK

P.S.=> Simply by having an app check its size (or CRC32 etc.) @ startup (various routines for that exist in most std. Win32/64 PE's) DOWN TO THE BYTE-SIZE LEVEL & IF IT CHANGES EVEN BY 1 BYTE - stop the program + signal the user of this change OR WHATEVER YOU CHOOSE AS THE APPROPRIATE MEASURE in that case (potentially created by malware odds are, attaching to the .exe file itself)!

THUS, you can STOP traditional viruses from EVER taking hold (by altering jump tables & attaching code @ the tail-end of an .exe), period...

... apk

Huh? (0)

Anonymous Coward | about 2 months ago | (#47123929)

This might reduce the value of MD5 sums, though.

In order to compile a different version of the executable on each install it would need to be distributed as source code. Why would the source code MD5 sums take a hit if the compiler is randomizing at compile time? The source MD5 sums would remain intact, it's the binary that gets mixed up.

Re:Huh? (1)

stooo (2202012) | about 2 months ago | (#47127051)

Exactly.

Plus the advantage that generalized local compilation is good for avoiding backdoors: you can compare your source with an audited one, which is not so easy for a bloody binary...

I'm a step ahead (1)

50000BTU_barbecue (588132) | about 2 months ago | (#47123935)

I swapped all the data bits around on my motherboard!

Hahaha!

Good luck!

Oh wait...

it's easy to make your machine immune from malware (1)

ozduo (2043408) | about 2 months ago | (#47123939)

Just wrap it in tinfoil.

Overengineered for its eventual use.. (2)

hamster_nz (656572) | about 2 months ago | (#47123993)

Why bother with this at the compiler level?

Just find 10,000 instruction pairs that can be reordered as they have no interdependencies, and reorder each of the pairs at random during the install phase. That gives you 2^10,000 unique executables, but all the debugging symbols and so on will remain the same.

I guess that doesn't help you against stack-smashing and so on, but it will allow you to fingerprint who leaked your binary onto BitTorrent - which would be its eventual use.
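A toy version of that fingerprinting scheme, assuming hypothetical independent instruction pairs: each pair encodes one bit of a recipient ID, so N pairs distinguish 2^N copies.

```python
def embed_bits(pairs, bits):
    """Each independent instruction pair encodes one bit: swapped or not."""
    out = []
    for (a, b), bit in zip(pairs, bits):
        out.extend((b, a) if bit else (a, b))
    return out

def extract_bits(stream, pairs):
    """Read the watermark back from a (leaked) instruction stream."""
    bits = []
    for i, (a, b) in enumerate(pairs):
        bits.append(1 if stream[2 * i] == b else 0)
    return bits

# Hypothetical reorderable pairs (no interdependencies between a and b):
pairs = [("mov r1, 0", "mov r2, 4"), ("add r3, r1", "add r4, r2")]
marked = embed_bits(pairs, [1, 0])
print(extract_bits(marked, pairs))   # -> [1, 0]
```

The program's behavior is unchanged by the swaps; only the byte order carries the ID.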

Re:Overengineered for its eventual use.. (1)

PhrostyMcByte (589271) | about 2 months ago | (#47124439)

That's a nice idea, but it won't work everywhere.

In x86, for instance, the majority of instructions affect global flag registers. You can have two instructions that operate on entirely different memory locations and GP registers, but when you swap them the flags will end up set differently.

You'll find very few instruction pairs that you can do this to without some ability to perform local analysis of the code.

Re:Overengineered for its eventual use.. (2)

hamster_nz (656572) | about 2 months ago | (#47124653)

It isn't that hard.... there are plenty of low hanging fruit - the classic easy case is the NOPs that are used to align jump destinations. Just find :

    [NON PC RELATIVE INSTRUCTION]
    NOP
    NOP

and replace it with

    NOP
    [NON PC RELATIVE INSTRUCTION]
    NOP

You could even patch the PC relative offset if you wanted to...

Compile at install (0)

Anonymous Coward | about 2 months ago | (#47123997)

Why not compile at install and have the user "move the mouse around" to generate some "randomness" .... (who does that again.......ah... putty.exe I think). This way the md5 of the transferred files is same, but the exe's would be different. Sucks to be the user waiting for that new app to "install" though.

Unix Jails (0)

Anonymous Coward | about 2 months ago | (#47123999)

Forgive my ignorance, but I just started using Freenas and was really impressed by the "jails" each plugin runs in. Wouldn't sandboxing or jails help out with this problem? I believe OS X does it and not sure how Linux does it. My only Windows box is Win 98 so I can run Civ 2 properly.

Re:Unix Jails (1)

Anonymous Coward | about 2 months ago | (#47124087)

OS X applications are only sandboxed if the developer chooses to enable it, such as when targeting the Mac App Store.

Ya think? (0)

Anonymous Coward | about 2 months ago | (#47124001)

This might reduce the value of MD5 sums, though.

Yeah, maybe just a little...

Explain Like I'm Five (5, Insightful)

vux984 (928602) | about 2 months ago | (#47124007)

The problem with this in "Explain like I'm Five" terms:

You can have no idea what the program you are running does.

You cannot trust it. You cannot know it hasn't been tampered with. You cannot know a given copy works the same as another copy. You cannot know your executable has no back doors.

On the security minded front we have a trend towards striving for deterministic build capability; so that we have some confidence and method of validating that a source code to executable transformation hasn't been tampered with, that the binaries you just downloaded were actually generated from the source code in a verifiable way.

Another technique I'm seeing in secure conscious areas is executable whitelisting, where IT hashes and whitelists executables, and stuff not on the whitelist is flagged and/or rejected.

Now this guy comes along and runs headlong in the other direction suggesting every executable should be different. And I'm not sure I see any real benefit, nevermind a benefit that offsets the losses outlined above.

Re:Explain Like I'm Five (0)

crow (16139) | about 2 months ago | (#47124223)

It's simple. You use signed source code instead of signed binaries.

Then you use a compiler and linker that do some simple things like randomly ordering variables and functions in the executable and on the stack. That makes it impossible for an attacker to know where some key variable is and exploit it through an overflow (whether on the stack or elsewhere). The attacker is far more likely to crash your program than to exploit a bug, and a crash is much easier to recover from.

Also, as pointed out elsewhere, while this may make debugging more complicated in some cases, it also makes it more likely that bugs where the compiler's choices matter will be found earlier in development, so you may not encounter them in the first place.

And in the case of a corporate IT department, you use the randomizing compiler to build the binary that you push out to your clients. It may be the same throughout your company, but it will be different from anything anyone outside would have access to, which is probably good enough.

Re:Explain Like I'm Five (4, Interesting)

vux984 (928602) | about 2 months ago | (#47124437)

It's simple. You use signed source code instead of signed binaries.

That doesn't really help.

If every executable is different, then I have no information about the binaries I downloaded. I have to download the source, verify that it's the audited, trusted source by checking its hash and signatures, and then I have to compile it myself. Most people don't want to compile all their own code.

It is good enough that OpenBSD released the source code, trusted auditing group audited the source code, and trusted build validation group verifies that the binaries on the OpenBSD site were generated from the audited source. I can just download the binaries check the hash/signatures and I'm good to go.

And in the case of a corporate IT department, you use the randomizing compiler to build the binary that you push out to your clients. It may be the same throughout your company, but it will be different from anything anyone outside would have access to, which is probably good enough.

The technique can be expanded to the home market, whereby Joe Sixpack runs executable whitelist-reputation subscription software that flags anything on his system that isn't "known good". Antivirus software is starting to head in this direction -- it maintains databases of 'known good' executables; you've probably even seen them say "this executable is not known... submit it for analysis". Take that system to its logical conclusion, and we could see community sites maintain executable whitelists that are as effective as adware blockers. (And they'd have no qualms about flagging "technically not illegal malware, but nobody actually wants to run this shit" -- e.g. toolbar search redirections through popup advertising portals that the AV guys are currently too scared to just block outright.)

Community managed executable whitelists with operating system level enforcement support could potentially make a serious dent in malware on the average uninformed users computer. It would help close a lot of attack vectors. More effective I think than 'randomizing' variable layout at in the compiled executable.
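A minimal sketch of such hash-based whitelisting (the whitelist entries here are invented, not a real reputation database):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest used as the whitelist key for an executable's bytes."""
    return hashlib.sha256(data).hexdigest()

def check(binary: bytes, whitelist: set) -> str:
    """Allow a binary only if its hash is in the known-good set;
    otherwise flag it, as the comment above describes."""
    digest = sha256_of(binary)
    return "allow" if digest in whitelist else "flag for analysis"

# Hypothetical community-maintained known-good set:
known_good = {sha256_of(b"trusted-program-bytes")}
print(check(b"trusted-program-bytes", known_good))   # -> allow
print(check(b"tampered-program-bytes", known_good))  # -> flag for analysis
```

Note this is exactly what per-install randomization breaks: if every install has different bytes, no shared hash database can vouch for them.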

Also re:
Then you use a compiler and linker that does some simple things like randomly ordering variables and functions in the executable and on the stack.

Stronger ASLR and DEP type features in the OS to do executable layout randomization at runtime I think represents a better approach to this than randomization at compile time.

This isn't new (1)

Vegemite (609048) | about 2 months ago | (#47124079)

I can't see how Franz's idea is materially different from "Randomized instruction set emulation" by Barrantes, Ackley, Forrest, and Stefanovic (2005).

Re:This isn't new (0)

Anonymous Coward | about 2 months ago | (#47128507)

I work for a company that makes obfuscating compilers, which allow "diversity" in the generated code -- you can respin the code-generation strategies on every build.

It acts as a *profound* bug-amplifier. But it also leads to code that is notoriously hard to debug from the binary (and thus hard to reverse-engineer).

I don't think randomization/obfuscation serves the goal of security well. Indeed, I think it runs counter to the primary driver of secure code, which is correctness. Code that is harder to debug and harder for developers to understand is code that is harder to maintain, and harder to have any confidence of correctness in -- which is the enemy of good security.

Will it slow down Johnny-the-Stack-Smasher? Probably. But it doesn't serve any long-term security goal. The security of your code should not rely on the "bad guys" failing to understand how it works.

Er... no. (1)

Anonymous Coward | about 2 months ago | (#47124089)

I worked in this field a good many years ago, and I remember how we hoped that new Windows environments would suppress the prevalence of viral executables.

Then Macro Viruses turned up.

Now, Macro Viruses work at a higher level than machine code. They will therefore work on ANY machine that recognises, for instance, the WORD macro language - a mainframe, if WORD was ported to it. And you can't change macro languages - they are standardised.

I've seen many academics propose the 'answer' to viruses, and watched them ALL fall flat on their faces.

Educated fool (0)

Anonymous Coward | about 2 months ago | (#47124151)

Back in the real world, jumbling things around in an effort to sometimes mitigate a static adversary (cough cough... in ur dreams) is also likely to make matters worse than they would otherwise have been for some unfortunate souls.

This isn't biology ... you don't get to play natural selection with widely distributed code and discard paying customers who's mutation was not "fit enough" to survive an attack.

"Dr Franz puts the chance of a hacker successfully penetrating one of his randomised application programs at about one in a billion. "

This is foolish beyond imagination.

No doubt these odds would shorten if his approach were taken up widely, for hackers are endlessly ingenious. But at the moment they mean that, if his system of multicompilers were used universally, any given hack would affect but a handful of the machines existing on the entire planet.

Also foolish beyond imagination.

Dr Franz has already built a prototype that can diversify programs such as Firefox and Apache Linux. Test attacks designed to take over computers running the resulting machine code always failed. The worst thing that happened was that the attack crashed the target machine, requiring a reboot

Please tell me this article was written by a computer generated nonsense generator.

Re:Educated fool (0)

Anonymous Coward | about 2 months ago | (#47124179)

" who's mutation"

"computer generated nonsense generator"

Who is mutation. Right.

Making safe code un-recognizable. (1)

Anonymous Coward | about 2 months ago | (#47124335)

The anti-virus product makers are really going to hate this.

Malware not affected by this. (0)

Anonymous Coward | about 2 months ago | (#47124339)

Malware usually comes with its own software/executables and interfaces via the Windows API.

So randomizing operating system/software executables seems of little benefit.

What am I missing here ?

Or deal with pointer arithmetic properly (1)

Rob Fielding (3524407) | about 2 months ago | (#47124367)

This is only an issue because of unchecked pointer arithmetic. For garbage-collected and range-checked items, you can't take advantage of co-location of data. In a JVM, if you try to cast an address to a reference to a Foo, it will throw an exception at the VM level. Indexing arrays? Push the index and the array on the stack, and it throws an exception if the index isn't in range when it gets an instruction to index it. In these cases, pointer arithmetic isn't used.

In some contexts, you MUST use pointer arithmetic. But if the pointer type system is rich enough (see Rust), then the compiler will have no trouble rejecting wrong references, and can even avoid races involving them. In C, an "int*" is not a pointer to an int. It is really a UNION of three things until the compiler proves otherwise: "ptr|null|junk". If the compiler is satisfied that it can't be "junk", its type is then a union of "ptr|null". You can't dereference this type; you must have a switch that matches it to one or the other. The benefit is that you can never actually deref a null pointer, and you get exceptions at the point where the non-null assumption began, rather than deep inside some code at some random usage of that nullable variable.

As for arrays, if an array "int[] y" is claimed, then that means that y[n] points to an int in the same array as y[0] does. Attempts to dereference should be proven by the compiler or rejected, even if that means that, like the nullable deref, you end up having to explicitly handle the branch where the assumption doesn't hold.

You can't prove the correctness of anything in the presence of unlimited pointer arithmetic. You can set a local variable to true, and it can be false on the next line before you ever mention it, because some thread scribbled over it. Pointers are OK. Pointer arithmetic is not OK, except in the limited cases where the compiler can prove that it's correct.
If the compiler can't prove it, then you should rewrite your code, or in the worst case annotate that area of code with an assumption that doubles as an assert, so that you can go find this place right away when your program crashes.

Re:Or deal with pointer arithmetic properly (1)

LeadSongDog (1120683) | about 2 months ago | (#47129033)

Pointer arithmetic? Whose dumb-ass idea is that? http://www.neurophys.wisc.edu/... [wisc.edu]

Big problem with this . . . (1)

mmell (832646) | about 2 months ago | (#47124415)

. . . it's a giant step backwards. I used to be a total advocate of monolithic kernels and all executable code built locally from source, but the current method using package management (yum, apt, etc.) has been incredibly beneficial - both to administrators such as myself and to support personnel. It eliminates a whole raft of questions (what compiler was used? what switches/options were in effect? what defaults were configured?) and allows exactly what this would eliminate - the reasonable expectation that the program being supported is the same as the program that's actually installed. It also (as has been pointed out elsewhere) increases the difficulty of comprehensively testing a program prior to shipping, as it would be necessary to test code against all valid compiler options on all supported compilers. This would be bad enough for applications, but for libraries and kernel modules, it would result in a nightmare trying to ensure that code will end up running stably.

I assume kernels would be subject to the same kind of "random build" procedure. I can on.j09nxk

*core dumped*

ASLR (0)

Anonymous Coward | about 2 months ago | (#47124519)

Address Space Layout Randomization. Hey look, your programs are randomized in memory -each- time you load them. A randomizing compiler will only protect you from malware that modifies binaries, at which point you've already been h@x0rd lol. Next.

Already done... (2)

Em Adespoton (792954) | about 2 months ago | (#47124559)

This is what polymorphic software does, and I think you'll find it on pretty much every computer that's part of a botnet.

By this measure, botnet software should be really difficult to detect and compromise -- and yet it isn't.

Also, it's worth noting that while government-sponsored and targeted attacks would be more difficult using this method, most malware depends on whatever the current security flaws are and/or human failure to initially get its foot in the door.

And the logic path wouldn't be changing, even if the compiled structure was randomized.

Plus, I think you'll find that many AM scanners these days include "doesn't follow the structure of a standard compiler" as one of the major red flags in looking for malware.

It won't help (enough) (1)

AnotherBlackHat (265897) | about 2 months ago | (#47124963)

Viruses in nature mutate randomly. Computer viruses don't.
Computer virus designers are intelligent, hostile, and evil in intent.
If there's a way around it, they'll find it and it's game over.

Besides, many if not most attack vectors wouldn't care a whit - tricking a user into executing code would still work, as would SQL injection, cross-site scripting...

Re:It won't help (enough) (1)

frank_adrian314159 (469671) | about 2 months ago | (#47125417)

Yes, and virus designers already use this technique to defeat signature scanners in AV programs.

At a different level (1)

marcello_dl (667940) | about 2 months ago | (#47124983)

This seems to me the wrong level for software diversity, too low. A bug in the source will be executed in all variants (think sql injection), while an exploit that depends on particular bytes in particular locations can already be made difficult by ASLR.

What about having higher level protocols that the software of a given category must adhere to, and various programs that treat data according to those protocols? You know, like that internet thing before the prevalence of web2.0 megasites, or like posix. Then every piece of malware cannot do universal damage and every botnet has to deal with a different host configuration.

And the randomizing compiler (0)

Anonymous Coward | about 2 months ago | (#47125101)

would never introduce bugs of its own now, would it? Programmers don't want their compilers to get this prettified. This is a feature no one is asking for.

Nor do I view this approach as particularly effective against malware, which is its stated purpose.

This would so piss off law enforcement (1)

style7711 (535582) | about 2 months ago | (#47125329)

They use lists of known file hashes so they can skip standard files and search for the files unique to your computer. If this were done, they would have to examine every file.

Re:This would so piss off law enforcement (1)

geminidomino (614729) | about 2 months ago | (#47128493)

Sounds like a plus to me.

Genetically diverse host population .. (1)

lippydude (3635849) | about 2 months ago | (#47125715)

"Inspired by the natural resistance offered to pathogens by genetically diverse host populations, Dr Michael Franz at UCI suggests that common software be similarly hardened against attack by generating a unique executable for each install."

What a good idea, isn't this what they did with the Space Shuttle ..

Just scramble the Microcode .. (1)

lippydude (3635849) | about 2 months ago | (#47125757)

"Microcode is a layer of hardware-level instructions or data structures involved in the implementation of higher level machine code instructions in central processing units" ref [wikipedia.org] .

Doesn't work! (0)

Anonymous Coward | about 2 months ago | (#47125877)

Considering most delivery mechanisms consist of exact duplication, this will definitely increase costs to market.

... value of MD5 sums.. (0)

Anonymous Coward | about 2 months ago | (#47126771)

> This might reduce the value of MD5 sums, though.

Much to the contrary. Now you could no longer download twice and compare; instead you would be forced to compare against the published MD5 sum.

dont do it (1)

amias (105819) | about 2 months ago | (#47126911)

As a professional software tester let me be the first to say noooooooooooo !

Are you going to trust a 99% solution? (1)

rew (6140) | about 2 months ago | (#47127035)

This doesn't fix the problem. It makes the chances of exploitation a bit smaller, on a "per-try" basis.

Back in the old days, some daemons or setuid programs would do insecure things with /tmp. So the hacker would run a program like this (a classic time-of-check/time-of-use race; link and unlink come from <unistd.h>):

const char *target = "/tmp/somefile";
while (1) {
    unlink(target);
    link("/etc/passwd", target);   /* window where target = /etc/passwd  */
    unlink(target);
    link("/tmp/myfile", target);   /* window where target = harmless file */
}

The daemon would check access permissions of the target (hopefully while it pointed at the harmless file, after the last line in the loop), then open and write the target (hopefully while it pointed at /etc/passwd, after the second line inside the loop). Leave this running, trigger the target app, and you get it to write somewhere it shouldn't (in this case /etc/passwd; get it to append "\nmyroot::0:0::::\n" and the system will let you log in as root without a password).

The same applies to these stack/compiler randomization tricks: the hacker first tries at a slow pace, but instead of getting into your system, he crashes your service daemon. You notice your service going down every day or so. Buggy software. Stupid randomization! No time to fix it, so you make the daemon restart automatically. And bingo! Now the hacker can try thousands of times!

In cryptography, care has been taken that you can't figure out individual bits of the key by a simple search, so that the exponential search (find the key among 2^256 possible keys) does not become "256 times: find bit n". Guaranteeing that no bit-leaking will happen in a buggy program is very, very difficult: the designers of the program don't know where the bug is, the compiler doesn't know where the bug is, but the attacker does!

So... if this goes mainstream, the hackers will find a way to extract little bits of knowledge of the randomization, determine what the actual randomization was, and then attack the service as usual.

Of course, there will be cases where, say, the time for the attack is increased beyond the attack-detection time, so that instead of succeeding, the attack is detected and averted.

Anyway, I'd much rather have something that actually WORKS than something that "has a chance of working". But maybe that's just me.

malware with randomisation (1)

lkcl (517947) | about 2 months ago | (#47127167)

huh. this sounds very similar to the theoretical virus designs i came up with many years ago. yes, you heard right: turn it round. instead of the programs on the computer being randomised so that they are resistant to malware attacks, randomise the *malware* so that it is resistant to *anti-virus* detection. the model is basically the flu or common cold virus.

here's where it gets interesting: comparing the use of randomisation in malware vs randomisation in defense against malware, it's probably going to start being used in malware before it gets used in defending against malware. why? because malware attackers have nothing to lose. unfortunately, they are likely to keep their compilers secret. even *more* unfortunately, successful creation of anti-malware randomising compilers means that the malware attackers can use them as well.

but, that is just a risk that has to be taken, and make sure a decent job is done of it.

Re:malware with randomisation (1)

Anonymous Coward | about 2 months ago | (#47127209)

That's not theoretical at all. You're over 20 years late to the party. It's called polymorphism or metamorphism (depending on whether it changes individual instructions for similar ones, or actually self-modifies its code).

The idea was first predicted by the computer scientist Fred Cohen. The Slovenian VXer Lucky Lady demonstrated it in 1988 on the Atari ST, and at around the same time Mark Washburn did with V2PX/1260 on the PC, a Vienna modification; more practically, the first widely released viruses taking this approach were those using the Dark Avenger Mutation Engine (DAME, or MtE) in 1992 on the PC, by the Bulgarian VXer Dark Avenger.

It's pretty routine; the normal approach is to use an executable-packer-like obfuscating wrapper called a crypter (infamously, there's no particularly clear line between a packer and a crypter; modified versions of UPX have been used for it), whose program is often self-modifying in some way to make it hard for an AV to just detect the crypter and safely depack the program.

To counteract that, ESET, with their NOD32 scanner, pioneered the approach of virtualising the crypter within a specialised emulator to let it do the unpacking job safely so that the virus killer could scan the output. That's now, along with heuristics, a standard part of the AV toolkit.

good for help lines (0)

Anonymous Coward | about 2 months ago | (#47127433)

"Have you tried recompiling it?"

I am already protected (0)

Anonymous Coward | about 2 months ago | (#47127475)

I don't use Windows, neither for my desktop nor for my servers.

IT Crowd (1)

ThatsNotPudding (1045640) | about 2 months ago | (#47127573)

"IT Department. Have you tried randomizing your compiler?"

A way I suggested in "Coding for DEFCON" (0)

Anonymous Coward | about 2 months ago | (#47129433)

On /. too, years ago (2005) with SELF-CHECKING executables (very easy to do & yes: It works - compressed/packed exe + sizecheck @ startup technique)-> http://it.slashdot.org/comment... [slashdot.org]

Every single one of my programs since 1997 have done this & yes it works... even this offering of mine lately also:

APK Hosts File Engine 9.0++ 32/64-bit:

http://start64.com/index.php?o... [start64.com]

APK

P.S.=> Simply by having an app essentially check its size (or CRC32 etc.) @ startup (various routines for that exist in most std. Win32/64 PE's) DOWN TO THE BYTE-SIZE LEVEL & IF IT CHANGES EVEN BY 1 BYTE - stop the program + signal the user of this change OR WHATEVER YOU CHOOSE AS THE APPROPRIATE MEASURE in that case (potentially created by malware odds are, attaching to the .exe file itself)!

THUS, you can STOP traditional viruses from EVER taking hold (by altering jump tables & attaching code @ the tail-end of an .exe), period...

... apk

What about Verified signers (0)

Anonymous Coward | about 2 months ago | (#47138517)

Would each instance (i.e. each download) of the software need to be signed with the author's private key? Wouldn't that make the whole process more expensive and cumbersome?
