
DieHard, the Software

ScuttleMonkey posted more than 7 years ago | from the yippie-ki-yay-and-other-flying-quotes dept.


Roland Piquepaille writes "No, it's not another movie sequel. DieHard is a piece of software that helps programs run correctly and protects them from a range of security vulnerabilities. It was developed by computer scientists from the University of Massachusetts Amherst — and Microsoft. DieHard prevents crashes and hacker attacks by focusing on memory. Our computers have thousands of times more memory than 20 years ago. Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows" exploited by hackers."


230 comments


I liked this movie better... (-1, Offtopic)

Purity Of Essence (1007601) | more than 7 years ago | (#17427440)

...when it was called TRON.

Re:I liked this movie better... (-1)

Anonymous Coward | more than 7 years ago | (#17427596)

another useless POS comment... oh, but you got to post first!

Vista already doing some of this (4, Informative)

PurifyYourMind (776223) | more than 7 years ago | (#17427454)

Along the same lines anyway... a new feature in Vista: Address space layout randomization (ASLR) is a computer security technique which involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space. http://en.wikipedia.org/wiki/Address_space_layout_randomization [wikipedia.org]
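A toy model may help here. The sketch below (illustrative only, not how Vista or any real OS implements ASLR; the names and offsets are made up) shows the key property: each "run" picks a random base address, so absolute addresses change between runs, but objects keep fixed offsets relative to that base.

```python
import random

def simulate_run(seed):
    """Toy model of ASLR: each process run picks a random, page-aligned
    base address; objects keep fixed offsets relative to that base.
    (Hypothetical layout; real loaders randomize several regions.)"""
    rng = random.Random(seed)
    base = rng.randrange(0, 2**32, 0x1000)       # random page-aligned base
    layout = {"buffer": 0x10, "func_ptr": 0x50}  # offsets fixed at link time
    return {name: base + off for name, off in layout.items()}

for seed in (1, 2):
    print({name: hex(addr) for name, addr in simulate_run(seed).items()})
```

Note that the distance between "buffer" and "func_ptr" is identical in every run; only the base moves. That constant relative layout is the limitation of base-address randomization.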

Re:Vista already doing some of this (5, Interesting)

Anonymous Coward | more than 7 years ago | (#17427552)

This came out in OpenBSD 3.3 [openbsd.org] over three years ago. Nice to see Microsoft keeping up with the times.
 

Re:Vista already doing some of this (1)

PurifyYourMind (776223) | more than 7 years ago | (#17427678)

Doh! I should have known. :-)

Re:Vista already doing some of this (1)

Jeremi (14640) | more than 7 years ago | (#17428522)

This came out in OpenBSD 3.3 over three years ago. Nice to see Microsoft keeping up with the times.


Is this feature standard in Linux yet? I'd hate to see us OSS guys get shown up by Bill... ;^)

Re:Vista already doing some of this (4, Informative)

strider44 (650833) | more than 7 years ago | (#17429434)

You could have just looked it up and seen that it's been in Linux for a similar length of time (in 2.6.x). I just googled for "linux address randomization" and clicked the top link.

Re:Vista already doing some of this (5, Informative)

Ristretto (79399) | more than 7 years ago | (#17428728)

Hi Slashdot readers,

DieHard's randomization is very different from what OpenBSD does, not to mention Vista's address-space randomization. I've added a note to the FAQs that explains the difference in some detail, and answers several other questions, but in short: "address-space randomization" randomizes the base address of the heap and also mmapped-chunks of memory, leaving the relative position of objects intact. By contrast, DieHard randomizes the location of every single object across the entire heap. It also goes further in that it prevents a wide range of memory errors automatically, like double frees and illegal frees, and effectively eliminates heap corruption.

-- Emery Berger
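To make the distinction concrete, here is a minimal sketch of per-object randomization in the spirit Berger describes. This is not the real DieHard allocator (the class, slot counts, and method names are invented for illustration); it only shows the two behaviors mentioned: every allocation lands in an independently chosen slot of an over-provisioned heap, and invalid frees are tolerated rather than corrupting anything.

```python
import random

class RandomizedHeap:
    """Minimal sketch (not the real DieHard) of per-object randomization:
    each allocation gets an independently random slot, so the distance
    between any two live objects is unpredictable."""
    def __init__(self, slots=64, seed=None):
        self.rng = random.Random(seed)
        self.free_slots = set(range(slots))
        self.live = set()

    def malloc(self):
        slot = self.rng.choice(sorted(self.free_slots))  # random placement
        self.free_slots.remove(slot)
        self.live.add(slot)
        return slot

    def free(self, slot):
        # DieHard-style tolerance: a double free or wild free is a no-op
        # instead of corrupting allocator metadata.
        if slot not in self.live:
            return
        self.live.remove(slot)
        self.free_slots.add(slot)

heap = RandomizedHeap(seed=7)
a, b = heap.malloc(), heap.malloc()
heap.free(a)
heap.free(a)   # double free: silently ignored rather than fatal
```

Under base-only ASLR, `a` and `b` would sit at the same relative distance every run; here their relative position changes on every allocation sequence.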

Re:Vista already doing some of this (1)

truth_revealed (593493) | more than 7 years ago | (#17429206)

Die Hard uses the Flux Capacitor [umass.edu] technology.

Man, that's lame even by my standards.

Re:Vista already doing some of this (1)

jnf (846084) | more than 7 years ago | (#17429030)

Yea and was in PaX about 6 years ago, so much for being proactively secure.

Re:Vista already doing some of this (4, Interesting)

Salvance (1014001) | more than 7 years ago | (#17428020)

Sure, but wouldn't it be better if everything ran in its own virtual session (or within a virtual secure space)? This was Microsoft's original plan with its Palladium component of Longhorn [com.com], but my understanding is that this was almost entirely scrapped to get Vista out the door.

Part of the other problem is that most home users expect secure data, but they aren't willing to do anything about it (e.g. set up non-admin users, install virus checkers/firewalls/etc).

Re:Vista already doing some of this (1)

Alien54 (180860) | more than 7 years ago | (#17428028)

So of course, the two systems will conflict with each other, and lock up the system tighter than the improper use of superglue in NSFW situations

or randomly locate virtual memory around the HD without regard to pre-existing magnetic conditions.

;-)

Different program? (2, Informative)

Anonymous Coward | more than 7 years ago | (#17427468)

I thought DieHard was a random number generator test suite. It is annoying when people don't even check for existing programs with the same name that do similar things.

Re:Different program? (4, Funny)

j00r0m4nc3r (959816) | more than 7 years ago | (#17427744)

Yeah, he should have named his project Die Harder

Re:Different program? (3, Funny)

baldass_newbie (136609) | more than 7 years ago | (#17427814)

Or DieHardWithAVengeance...

Re:Different program? (0)

Anonymous Coward | more than 7 years ago | (#17428476)

Or follow Wil Wheaton's example [slashdot.org] and call it CleverProjectName.
 

Re:Different program? (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17428516)

Wil Wheaton is a fag.

Re:Different program? (1)

gbobeck (926553) | more than 7 years ago | (#17429286)

Or DieHardWithAVengeance...

That is so 1995. Update to 2007 and call it "Live Free or DieHard"

Re:Different program? (1)

RAMMS+EIN (578166) | more than 7 years ago | (#17427826)

``I thought DieHard was a random number generator test suite.''

So it is. Speaking of which, does anyone here know how to interpret the numbers it generates? I ran it on the deadbeef random number generator [inglorion.net] a while ago (test results linked from that page), and my interpretation is that deadbeef_rand does well on some tests and very poorly on others. Am I right? Can one distill from DieHard's output what the weaknesses of the PRNG are?

Re:Different program? (5, Funny)

Werkhaus (549466) | more than 7 years ago | (#17428022)

I thought the random number generator was DieBold?

Re:Different program? (5, Funny)

Joebert (946227) | more than 7 years ago | (#17428286)

No, you're thinking of the random Vote generator.

Mod parent up! (0)

Anonymous Coward | more than 7 years ago | (#17428296)

This was one of the smartest jokes I've read here in ages. If only I had some mod points...

Re:Different program? (0)

Anonymous Coward | more than 7 years ago | (#17428862)

Yes, and I hope George Marsaglia sues them for trademark infringement.

Correction (4, Insightful)

realmolo (574068) | more than 7 years ago | (#17427476)

"Still, programmers are privileging speed and efficiency over security..."

Speed and efficiency of *development*, maybe.

Which is the problem. Modern software is so dependent on toolkits and compiler optimizations and various other "pre-made" pieces, that any program of even moderate complexity is doing things that the programmer isn't really aware of.

Re:Correction (4, Insightful)

MBCook (132727) | more than 7 years ago | (#17427604)

This is one of the arguments for a language running on a VM, like Java, C#, or Python. They can do runtime checking of array bounds and such, and throw an exception or crash instead of silently overwriting some other variable, which may or may not cause a crash or some other noticeable side effect later.
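A quick sketch of what that runtime checking buys you (Python shown here; any memory-safe runtime behaves similarly):

```python
buf = [0] * 4

def store(buf, i, value):
    buf[i] = value   # the runtime checks the index on every access

store(buf, 2, 99)        # in range: works
try:
    # Out of range: raises immediately, instead of silently
    # overwriting whatever happens to live next door in memory.
    store(buf, 10, 99)
except IndexError:
    overflow_caught = True
```

The equivalent out-of-bounds write in C would simply scribble past the array, producing the delayed, hard-to-diagnose failures described above.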

nothing to do with VMs - just exception handling (3, Informative)

Anonymous Coward | more than 7 years ago | (#17428040)

Ada's been doing that kind of runtime checking and throwing exceptions for 20 years now without needing a VM to enable exception handling.

Re:Correction (4, Insightful)

AKAImBatman (238306) | more than 7 years ago | (#17427618)

"Still, programmers are privileging speed and efficiency over security..."

Speed and efficiency of *development*, maybe.

No, it was right the first time. Java is several orders of magnitude more secure by default than any random C or C++ program. Yet mention Java on a forum like, say, Slashdot, and you'll hear no end to how much Java sucks because "it's slow". (Usually ignoring the massive speedups that have happened since they last tried it in 1996.) It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast for some undefined quantity of fast.

In fact, I predict that someone will be along to argue just how slow Java is in 3... 2... 1...

Re:Correction (0)

Anonymous Coward | more than 7 years ago | (#17427896)

Java is so slow that I had to hook up a calendar for my profiler! *rimshot*

Re:Correction (0, Flamebait)

Bill Dog (726542) | more than 7 years ago | (#17427932)

They want things to be fast for some undefined quantity of fast.

No, we want things to be as fast as they can be.

From TFA:
These problems wouldn't arise if programmers were a little less focused on speed and efficiency, which is rarely a problem these days, and more attentive to security issues, says Berger.
He must be criticizing open source programmers only. Because in business, programmers aren't focused on speed and efficiency; they're focused on what their bosses are breathing down their necks about: getting it out the door. Berger sounds like a VM-language bigot (or paid ($30K from MS) .Net Runtime shill) who doesn't understand how most software is really made, and prefers to believe in caricatures of programmers.

Re:Correction (4, Insightful)

Anonymous Coward | more than 7 years ago | (#17428332)

No, we want things to be as fast as they can be.

Maybe, but most programs are not written in a way which will achieve this goal.

Programmer time is a limited resource. This is true even on a hobby project with no deadlines and everybody working for free; you want to ship sometime. Making programs run fast takes a lot of programmer time, even when you use a language which is supposedly fast by default such as C or C++.

C and C++ make you spend a lot of time working around weaknesses in the language and fixing bugs that other languages can never have. A great deal of programmer time is put into developing the broken and slow implementation of half of Common Lisp that every sufficiently complex program must contain.

All of this time spent is time that does not go into making the program fast.

By using a language that makes programmers more productive, you get a lot more time to make the program fast. You can do this by optimizing in the "slow" language you started with, by rewriting inner loops in C, by changing the whole algorithm to run on the GPU, etc.

The 90/10 rule says that your program spends 90% of its time in only 10% of its code, and that optimizing the other 90% of the code is basically a waste. And yet people who want their programs to "go fast" are writing that 90% in a low-level language, effectively wasting a large amount of effort.

You may also end up getting your program working, realize that it actually is fast enough despite being written in a really slow interpreted language, and spend the time you saved making more cool software. Or you can go back and make the original product fast. It's up to you.

There are many good reasons to use C, and many good reasons to write entire programs in C, but "it's fast" is not a particularly good reason. An app written in pure C is probably not as fast as it can be unless its scope is very limited.

Re:Correction (0)

Anonymous Coward | more than 7 years ago | (#17429236)

by rewriting inner loops in C

Been there, done that, with a twist. The original loop was a nice find/replace loop in Word automation. Knowing for certain this was a bottleneck, and also knowing the code would be ugly no matter what language it was in, I didn't bother rewriting in .NET first but went straight to C. Having pointer arithmetic actually made the problem easier despite finding several pointer-related bugs, so big win there.

Re:Correction (3, Insightful)

Jeremi (14640) | more than 7 years ago | (#17428548)

He must be criticizing open source programmers only. Because in business, programmers aren't focused on speed and efficiency


Business software isn't the problem. The software that is the problem is the software that runs on every naive home user's PC ... Windows, Outlook, IE, Mozilla, AIM, etc etc. This is the software whose security problems allow spam, credit card fraud, virus outbreaks, etc. And last time I checked, all of that stuff is still written in C or C++, not in any VM.


Berger sounds like a VM-language bigot (or paid ($30K from MS) .Net Runtime shill) who doesn't understand how most software is really made, and prefers to believe in caricatures of programmers.


Great, you've called the guy a bigot, a shill, and an idiot, without even having understood what he was talking about.

What is portable? (1)

tepples (727027) | more than 7 years ago | (#17428292)

It doesn't matter that the tradeoff for that speed is flexibility, security, and portability.

"Portable" can mean that it runs on more than one PC-type platform, or that it can also run on handheld platforms with small CPU, RAM, and battery. True, there's J2ME, but it's hard for an individual in North America to buy a device to run midlets that isn't locked down in some way by the operator of a mobile telephone network. There's a reason that GBA games are written in C and that Nintendo DS games are written in C++ and not Java.

Re:Correction (1)

dodobh (65811) | more than 7 years ago | (#17429100)

As a user, my concern isn't about development time at all. It's about how the application consumes my resources. Java is great on the server side (one app, long run times, lots of memory). On the desktop side? Not yet (less memory, lots of concurrent apps, short run times for most apps).

Re:Correction (3, Interesting)

evilviper (135110) | more than 7 years ago | (#17429114)

It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast for some undefined quantity of fast.

I've got to call you on the "portability" crap.

Java is about as portable as Flash... Sure, the major platforms are supported, but that's it. Third parties spent a lot of time trying to implement Java, but never did get everything 100%. Licensing issues, above all else, made it a real hassle to get Java on platforms like FreeBSD.

Meanwhile, C and C++ compilers are installed in the base system by default.

The only "portability" advantage Java has is perhaps in GUI apps, and that's at the expense of a program that doesn't look or work remotely similar to any other app on the system...

There are a great many reasons people don't use java. Performance is only a minor one.

Re:Correction (1)

M. Baranczak (726671) | more than 7 years ago | (#17429314)

Licensing issues, above all else, made it a real hassle to get Java on platforms like FreeBSD.

Sun just formally announced that they'll release Java under the GPL.

Re:Correction (5, Insightful)

Anonymous Coward | more than 7 years ago | (#17429154)

"Java is slow" is the stated reason. As you noted, it is not the actual reason. To tell the actual reason is difficult, but in short Java reminds us too much of what it should have been.

The basic complaints I have heard are these:

Complaint 1: Java is slow.
  As you stated, this is not a meaningful complaint.

Complaint 2: Garbage Collection stinks
  GC is an obvious requirement of a "safe" language. As implemented in Java, it is downright stupid. When doing something CPU intensive, the GC never runs, leading to gobbling up memory until there is no more and thrashing to death. I'm sure that somebody is going to dig up that paging-free GC paper, but pay attention: that is a kernel-level GC.

Complaint 3: Swing is ugly/leaks memory
  The first is a matter of opinion. The second is well-known. Swing keeps references to long-dead components hidden in internal collections leading to massive memory leaks. These memory leaks can be propagated to the parent application if it is also written in Java.

Complaint 4: Bad build system
  Java cannot do incremental builds if class files have circular references. In a small project of about ten classes I was working on, the only way to build it was "rm *.class ; javac *.java"

Complaint 5: Tied class hierarchy to filesystem hierarchy
  This was just stupid and interacts badly with Windows (and anything else with a case insensitive filesystem). It is even worse for someone who is first learning the language. It also makes renaming classes have a very bad effect on source control.

Complaint 6: Lack of C++ templates
  C++ has some of its own faults. Fortunately its template system can be leveraged to fix quite a few of them. Java's generics have insufficient power to do the same thing.

Complaint 7: Lack of unsigned integer
These are oh-so-necessary when doing all kinds of things with binary formats. Too bad Java and all its descendants don't have them.

Complaint 8: Verbosity without a point
It has gotten so bad in places that I am strongly tempted to pass Java through the C preprocessor first, but I can't do that very well because of complaint 4.

Re:Correction (0)

Anonymous Coward | more than 7 years ago | (#17429390)

Complaint 8: Verbosity without a point

This was bad in C++ (I blame iostreams for leading down that path) and got a million times worse in Java. Seriously, something like "parse this comma-delimited string and pull out the numbers as hex digits, then pack the result into an array" should be 10 lines of code *max* including robust error checking. Java turns this simple example into a giant silly game that is verbose for ZERO gain: long type and function names, multiple type swaps, and pulling in an extra library or two. The C++ situation is no better.

Sometimes the "do what I say, exactly that, and nothing more" approach of C is the right approach.
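For what it's worth, the "10 lines max" claim is plausible in a concise language. Here is one reading of the task described above, sketched in Python (the function name and exact error behavior are my own choices, not anything the poster specified):

```python
def parse_hex_csv(s):
    """Split a comma-delimited string, parse each field as hex,
    and pack the results into a list, with basic error checking."""
    values = []
    for field in s.split(","):
        try:
            values.append(int(field.strip(), 16))
        except ValueError:
            raise ValueError(f"not a hex number: {field.strip()!r}")
    return values

print(parse_hex_csv("ff, 10, deadbeef"))   # [255, 16, 3735928559]
```

Whether Java or C++ can be coaxed into something comparably short is exactly the argument being had in this subthread.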

Corrections (2, Informative)

SuperKendall (25149) | more than 7 years ago | (#17429418)

Basically almost every point you raised can be addressed simply by saying "get your head out of five years in the past". Modern GC adds little overhead, and will run when needed even with the CPU being consumed.

Swing does not really have the problems you speak of any longer, if you are using it right... heck, it didn't really have those problems to any great degree even seven years ago, when I was building a large custom client app entirely in Swing for desktop deployment.

Complaining about the build system is like saying GCC has a bad build system - really it has no build system, and you should use something made for building Java. That is why we have Ant and the like...

Of the remainder, I really only think #7 has much in the way of merit. Have you looked into the java.nio package? This makes working with binary data much simpler...

Re:Correction -expansion (1)

Umuri (897961) | more than 7 years ago | (#17429438)

"programmers are privileging speed and efficiency over security..."

This comment, at least the way I read it, is actually rather language-independent. It has to do with programmers writing bad code in the first place: using variables that are too small to hold what they need to hold, or letting users define what goes into a variable without validating the input to make sure it is valid and non-erroneous. Most programmers nowadays are taught to write an error handler for the basic stupid shit that may happen, instead of writing a handler that prevents stupid stuff from ever causing an error in the first place. That is why the parent post mentioned Java: it has better control over erroneous variables than its predecessors.

However, I mention again that it is language-independent, and it's more a matter of bad programming practice than anything.

Re:Correction (2, Insightful)

tulrich (737161) | more than 7 years ago | (#17429456)

Java is several orders of magnitude more secure by default than any random C or C++ program.

Do you know what "several orders of magnitude" means? For variety, next time you should write "... exponentially more secure ..." or "... takes security to the next level!"

BTW, it's funny you should mention Java performance in this thread -- one of the DieHard authors published this fascinating paper on Java GC performance: http://citeseer.ist.psu.edu/hertz05quantifying.html [psu.edu] -- executive summary: GC can theoretically be as fast as explicit malloc/free, if you're willing to spend 5x memory size overhead (gulp).

Re:Correction (1)

TopSpin (753) | more than 7 years ago | (#17428086)

Speed and efficiency of *development*, maybe. Which is the problem. Modern software is so dependent on toolkits and compiler optimizations and...

I wondered where all those vulnerabilities were coming from. It's not humans misusing memory references and overrunning ad hoc fixed length buffers, etc. It's the toolkits, libraries and compilers! Glad we got that figured out.

From the post:

Our computers have thousands of times more memory than 20 years ago. Still, programmers are privileging speed and efficiency over security...

This implies that because memory is larger, less attention can be paid to efficiency, but the hapless programmers don't know better. I used to use quicksort when I had 640 KiB of RAM, but now that I have 8 GiB, I'll just use bubble sort. Brilliant.

What a load of shit. (0, Troll)

QuantumG (50515) | more than 7 years ago | (#17427530)

1. Roland Piquepaille
2. It's just heap randomization, again.

Nothing to see here.

Re:What a load of shit. (0)

Anonymous Coward | more than 7 years ago | (#17427874)

While heap randomization is not new, I don't know of another project that uses comparison of replicas to identify inconsistencies. From the abstract of their paper: "By initializing each replica with a different random seed and requiring agreement on output, the replicated version of DieHard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas." Can you provide a cite for your "nothing to see here" comment, or is that just your knee-jerk reaction? Personally I think the utility of DieHard from a security standpoint might be limited, but it has the potential to be freakin' useful for tracking down complex pointer problems.
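The voting idea from the quoted abstract can be sketched in a few lines. Everything below is a stand-in (the `layout` labels and functions are invented, and the real system compares actual program output, not strings): each replica gets its own randomized layout, and a memory error rarely corrupts all replicas identically, so requiring agreement filters it out.

```python
from collections import Counter

def replica_output(layout):
    """Stand-in for one replica of a buggy program: the overflow only
    corrupts the result when the random layout happens to put the
    overflowed buffer next to the result ('adjacent' is a made-up label)."""
    return "garbage" if layout == "adjacent" else "42"

def replicated_run(layouts):
    # Run replicas with different randomized layouts and take the
    # majority answer; an error is unlikely to have the same effect
    # across all replicas, so the vote is very likely correct.
    votes = Counter(replica_output(layout) for layout in layouts)
    return votes.most_common(1)[0][0]

# One unlucky replica is outvoted by the two healthy ones:
print(replicated_run(["adjacent", "spread", "spread"]))   # 42
```

This also hints at the debugging use mentioned above: a replica that disagrees with the others is direct evidence of a layout-dependent memory error.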

Re:What a load of shit. (0)

Anonymous Coward | more than 7 years ago | (#17428004)

> is there a middle way between Intelligent Design and Neo-Darwinism?

is there a middle way between geocentrism and heliocentrism?

Re:What a load of shit. (1)

QuantumG (50515) | more than 7 years ago | (#17428060)

wow, someone who gets my commentary on British politics, brilliant.

Re:What a load of shit. (4, Interesting)

Anonymous Coward | more than 7 years ago | (#17428364)

I did a quick read of the whitepaper and sort of see it as heap randomization+. I have very little faith in the claims of low overhead. But leaving that aside, there are 2 major problems here:

1) If there is a program crash, it may be possible to reproduce the bug on the same computer, but probably not on 2 different ones, such as the user's and the developer's.

2) It discourages programmers from good design and thorough testing by leading them to believe that bugs won't occur.

The claim for DieHard (from the whitepaper) is that it "tolerates memory errors and provides probabilistic memory safety". But bugs will still happen! I once added about 10 lines of code to log a bug our team was having a hard time tracking down. It turned out to have its own bug that would be hit if:

- Two threads were accessing the same buffer
AND
- One of them was swapped out during the execution of 3 machine instructions (out of about a million)

It took my moderately sized customer base 2 weeks to hit it. The only way to avoid memory errors is to make the code bulletproof, which means fixing it when bugs are reported.

Re:What a load of shit. (1)

jnf (846084) | more than 7 years ago | (#17429088)

At a 50-75% memory usage increase no less.

Cue Diehard quotes in... (0)

Mogster (459037) | more than 7 years ago | (#17427586)

5...4...3..

Yippee-ki-yay

Um, no. (1)

Duncan3 (10537) | more than 7 years ago | (#17427632)

No, putting arrays on the stack causes buffer overflows. Which is trivial to not do, and trivial to check for.

The fact that Microsoft doesn't HAVE a security model, IE/Outlook are jokes, and users run as admin has a bit more to do with it.

Re:Um, no. (1)

codegen (103601) | more than 7 years ago | (#17427698)

Buffer overflows can also happen in the data segment (both global variables and heap). And they are almost as easy to exploit. Instead of overflowing onto the return address, you overflow onto the nearest vptr (if C++ is being used), the nearest function pointer, or the nearest green bit.
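A toy simulation of the heap case (everything here is hypothetical: a flat `bytearray` stands in for the heap, and a single byte plays the role of the adjacent function pointer or vptr):

```python
# Toy flat 'heap': a 4-byte buffer followed immediately by one byte
# standing in for a function pointer / vptr.
heap = bytearray(8)
FUNC_PTR = 4          # index of the 'function pointer' slot
heap[FUNC_PTR] = 1    # 1 = legitimate call target

def unchecked_copy(data):
    # No bounds check, like strcpy/memcpy into a fixed-size heap buffer
    for i, byte in enumerate(data):
        heap[i] = byte

unchecked_copy(b"AAAAA")        # 5 bytes into a 4-byte buffer
hijacked = heap[FUNC_PTR] != 1  # the overflow replaced the pointer with 0x41
```

The fifth `A` lands in the neighboring slot, which is exactly why heap overflows onto vptrs and function pointers are exploitable without touching the stack.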

Not quite (1)

gr8_phk (621180) | more than 7 years ago | (#17427808)

There is nothing wrong with putting an array on the stack. I once had the need to copy a function into a local int[50] and run it from there - no issues (embedded system, the function needed to run from RAM). The problem is when people write code that can blow right past the end of an array. They don't stop to think that the functions they call to dump data in there don't know where the end of the available space is. Oh right, the data told me how much space to allocate and I just allocated that much and read until EOF. ;-) ok, that's a heap (not stack) buffer overrun. Anyway arrays on the stack are not inherently bad.

Re:Not quite (1)

Teresita (982888) | more than 7 years ago | (#17427886)

The problem is when people write code that can blow right past the end of an array.

Why not? This is the era of kerjillion dollar lawsuits that blow right past the GDP of the host country.

Re:Not quite (1)

TapeCutter (624760) | more than 7 years ago | (#17428706)

"Anyway arrays on the stack are not inherently bad."

Maybe not "inherently bad", but certainly inefficient as regards speed and space since the entire array must be copied to the stack rather than just an array pointer.

Re:Not quite (1)

jorghis (1000092) | more than 7 years ago | (#17429394)

I dont see why you necessarily have to copy an entire array.

void foo() {
    int arr1[52];
    /* a bunch of code that stores values into arr1, then
       manipulates them and reads them back */
}

The only overhead here was moving the stack pointer down an extra 52*4 bytes, which is no more work than what it was doing already, assuming you are in a language that doesn't initialize every element of an array when you declare it. Arrays on the stack are not inherently inefficient, although they certainly can be if you don't use them right.

Re:Um, no. (1)

jorghis (1000092) | more than 7 years ago | (#17429200)

There are pointers to the code segment in every section of memory. (heap, stack, you name it) Do you honestly think that the only time a pointer to code gets followed is when you are bouncing around in between stack frames?

I am amazed that you would so arrogantly declare that simply doing a bit of static analysis would be sufficient to fix all (or even most) buffer overflows in complex programs with hundreds of thousands or even millions of lines of code. It sounds like you looked at one tutorial of the 'classic' buffer overflow -- overrunning a stack frame to cause arbitrary code execution -- and decided that was the only case programmers had to worry about. It seems like it's always people with a small amount of knowledge about computers (but really not all that much) who are the most eager to rip on MS.

Let's solve voting security. (2)

Wilson_6500 (896824) | more than 7 years ago | (#17427644)

If you were somehow to install DieHard software on a DieBold machine, does the universe collapse in on itself? This is one of those pasta plus antipasto situations, I think.

Pastamancer? (0)

Anonymous Coward | more than 7 years ago | (#17427778)

This is one of those pasta plus antipasto situations, I think.
Sounds like you might be a Pastamancer [kingdomofloathing.com] .

Re:Pastamancer? (1)

Wilson_6500 (896824) | more than 7 years ago | (#17429428)

God, no! Disco Bandit for life. I called myself a "Senator" and gave all my pets names that started with Intern. I miss that game sometimes.

Movie sequel no. 3 (0)

Bob54321 (911744) | more than 7 years ago | (#17427652)

Offtopic, but does anyone know what happened to Die Hard 4 (probably with a name like Die Hardest???). I read they were making it with Britney Spears as John McClane's daughter. Her bad acting would probably have put me off one of my favorite movie series. I guess that was the answer to my own question!

Re:Movie sequel no. 3 (1)

nwbvt (768631) | more than 7 years ago | (#17427726)

Yeah, it's coming out soon; it's going to be called "Live Free or Die Hard" [imdb.com]. I saw a trailer for it before "The Good Shepherd", though all I can tell you from that is that there will be a lot of big explosions and car chases.

Re:Movie sequel no. 3 (1)

dbIII (701233) | more than 7 years ago | (#17428722)

Die Hard 4 (probably with a name like Die Hardest???) ... with Britney Spears

Die Hard 4 - going commando.

Re:Movie sequel no. 3 (1)

Coucho (1039182) | more than 7 years ago | (#17429194)

Die Hard 5 - Hardly Dying

wtf? (0, Insightful)

Anonymous Coward | more than 7 years ago | (#17427658)

Today's computers have more than 2,000 times as much memory as the machines of yesteryear, yet programmers are still writing code as if memory is in short supply.

I stopped reading after that first line.

Programming is not a matter of simply writing until things get full.

So... (0)

Anonymous Coward | more than 7 years ago | (#17427682)

Microsoft helped develop a program to uninstall Windows?

Algorithms demand perfection (1)

Progman3K (515744) | more than 7 years ago | (#17427732)

You should never program thinking about security issues.
Write the algorithms correctly and there won't BE any buffer overflows.

What's so hard about this?

Re:Algorithms demand perfection (1)

wallet55 (1045366) | more than 7 years ago | (#17427798)

so much to do, so little time. So much easier to dash off some code, patch the bugs, jury rig the gaps, and append new features.... then clean up the more screamingly ridiculous messes in service pack x+1

You must remember... (4, Insightful)

jd (1658) | more than 7 years ago | (#17427948)

...the number of programmers like ourselves who learned how to code correctly is vanishingly small in comparison to the number of coders who assume that if it doesn't crash, it's good enough. Whether you validate the inputs against the constraints, engineer the program so that constraints must always be met, or force a module to crash when something is invalid so that you can trap and handle it by controlled means -- the method is irrelevant. What matters is less which method you use than that you remember to use one.

Even assuming nobody wants to go to all that trouble, there are solutions. ElectricFence and dmalloc are hardly new and far from obscure. If a developer can't be bothered to link against a debugging malloc before testing then you can't expect their software to be immune to such absurd defects. A few runs whilst using memprof isn't a bad idea, either.

This assumes you're using a language like C, which is not a trivial language to write correct software in. For many programs, you are better off with a language like Occam (provided for Unix/Linux/Windows via KROC) where the combination of language and compiler heavily limits the errors you can introduce. Yes, languages this strict are a pain to write in, but the increase in the initial pain is vastly outweighed by the incredible reduction in agony when debugging - if there's any debugging at all.

I do not expect anyone to re-write glibc in Occam or any other nearly bug-proof language. It would be helpful, but it's not going to happen.

Re:You must remember... (1)

jorghis (1000092) | more than 7 years ago | (#17429358)

That's quite bold of you to claim that you are in an elite group that can churn out large programs in C with zero bugs.

Your claim that smart programmers using dmalloc, Electric Fence, or some other bounds checker will find all buffer overflows seems misguided to me. Those tools are great for catching buffer overflows that are actually triggered by your test suite. But aren't most buffer overflow security holes caused by weird corner cases no one thought of? I mean, in the real world it's never caused by something as simple as "=" versus "==".

Re:Algorithms demand perfection (0)

Anonymous Coward | more than 7 years ago | (#17428150)

Well, one possible reply is

"lol, fire lusers who write buggy code that overflows!1!! i bet m$ dont do this, lusers, i write secure code because i use TEH LUNIX lolz!"

The less slashdotty but more sensible response is that even the best programmers make mistakes. Show me a program you've written that doesn't have bugs in it, and I'll show you a liar. Some of the worst security holes are written by a genius who made one slip-up. Which is why having automated tools to prevent these kinds of mistakes helps. This can include anything from DieHard-like tools, to virtual machines with enforced safety, to the C++ STL (down with char* !).

Re:Algorithms demand perfection (1)

Jeremi (14640) | more than 7 years ago | (#17428612)

Write the algorithms correctly and there won't BE any buffer-overflows.

What's so hard about this?


The "write the algorithms correctly" part. The demand for programs is much larger than the supply of sufficiently trained/disciplined/talented programmers. Therefore, we need a solution that gives acceptable results even when the programmer isn't a guru (and preferably even when the programmer is a trained monkey, because he often will be).

Sears Diehard Movie Software??? (0)

davidwr (791652) | more than 7 years ago | (#17427736)

If you watch Diehard, the movie, in your car DVD player running off a Sears Diehard battery while running DieHard on your laptop, does that mean you died hard 3 times over?

going by the movies (1)

game kid (805301) | more than 7 years ago | (#17427962)

It would probably mean that you died hard with a vengeance.

Our computers have thousands times more memory... (1)

Stormx2 (1003260) | more than 7 years ago | (#17427742)

Well thank god we have Vista to fill up that untapped goldmine!

Wouldn't a language do the same job? (1)

rolfwind (528248) | more than 7 years ago | (#17427748)

Wouldn't using languages like Lisp do basically the same job? I mention Lisp, besides it being a favorite language of mine, because I know the end product can be coded/compiled fast and efficiently while maintaining security in many cases. Other more popular languages like Python, while getting more Lisp-ish, seem to have an inherent speed penalty that cannot be reduced as easily at compile time, though I am not sure; I say this as more of a spectator to that language.

Note: I'm sure other functional languages can do the job of memory management and protect from buffer overflows as well as Lisp, I'm just no expert on them so I can't speak for them.

It's just that over the years, I have seen products come out for the C family of languages that protect programmers from the trickier parts of C... which seem to come up again and again even for expert programmers, and for which there is no bulletproof solution. I'd want to know if another language wouldn't do the job in 90% of the cases.

Re:Wouldn't a language do the same job? (1)

russellh (547685) | more than 7 years ago | (#17428124)

Wouldn't using languages like Lisp do basically the same job?

Yes.

But it's not a practical solution for about 185 different reasons, starting with the fact that very few commercial apps are written in any kind of dynamic language, let alone LISP, and they're not likely to be rewritten anytime soon for such an intangible reason as security. rpg (Richard Gabriel) was right that worse is better, and the last language will be C. He wrote that before Java, Ruby, etc., but I think it's still right. Like it or hate it.

Re:Wouldn't a language do the same job? (1)

QuoteMstr (55051) | more than 7 years ago | (#17428632)

Lisp isn't all that safe a language, and can be somewhat strange: I'm a little confused by the specification's discussion of safe versus unsafe operations, but as I recall, you can specifically instruct the Lisp compiler to omit array bounds checking for the sake of speed. You'd have to be insane to do this, but it is possible. Consider this function:

(defun bar (array i x)
  "Set the Ith element of array ARRAY to X"

  (declare (type fixnum i)
           (type (simple-array fixnum) array)
           (optimize (speed 3)
                     (safety 0)))

  (setf (aref array i) x))

Now, let's test it:

(let ((x (coerce '(0 1 2 3)
                 '(array fixnum 1))))
  (bar x -4 31)
  x)

That does indeed cause various memory errors under SBCL.

OTOH, if we were to specify safety 3 and speed 0 in that function, an error would be signaled and all would be well.

Lisp is a great language, but Scheme is too simple and lacks a real packaging system, and Common Lisp suffers from C++-style overcomplexity. (Take the mess that is the intersection of the vector, simple-vector, array and simple-array types, for example.)

CL's macro system allows these flaws to be overcome, but it's still an overcomplex PITA sometimes.

Irony (0)

Anonymous Coward | more than 7 years ago | (#17427836)

Funny how Mozilla crashed continually after enabling this software. I'm glad I don't go to UMass...

Re:Irony (0)

Anonymous Coward | more than 7 years ago | (#17428064)

Good point - thanks for validating the point of the software! Are you sure you read the article?

Jeers for your stupid comment about UMass, though. They do pretty neat research there.

Way more memory (0)

Anonymous Coward | more than 7 years ago | (#17427892)

FTA:
""Today we have way more memory and more computer power than we need," he says. "We want to use that to make systems more reliable and safer, without compromising speed.""

I guess he hasn't tried Vista yet. Already out of date; such is the industry.

Buggy (2, Interesting)

The MAZZTer (911996) | more than 7 years ago | (#17427894)

Firefox 2 crashed for the first time ever (I've used it since beta 1 came out) for me today... suspiciously, less than five minutes after I turned DieHard on. Hrm.

Re:Buggy (1)

jomama717 (779243) | more than 7 years ago | (#17428770)

I used to write code for a company that made a runtime-replaceable (no re-compile) garbage collecting allocator for C/C++ apps and in the process came across numerous cases of our allocator causing crashes by fixing memory errors in the app. E.g. Netscape on Solaris would crash on our buffer-overrun detection flag because the app was depending on the word after the end of a buffer to be all zeros. When it hit our flag it puked (Maybe you are seeing the same bug!!).

It only takes a couple of cases like this to make you believe that these products are just too intrusive to be practical for day-to-day use. I would bet that ~15-30% of all commercial software will react poorly to DieHard.

Addendum (1)

The MAZZTer (911996) | more than 7 years ago | (#17428806)

I should probably clarify that I can't be 100% sure DieHard was the problem, but I still think it's possible. Sadly I can't reproduce the error (not surprisingly, given DieHard's random nature). Although given the sheer volume of drivers and apps interacting on this comp, which still manages to stay stable normally, it's surprising DieHard didn't bring my whole house of cards down instantly. :)

Sounds like systrace (1)

methodic (253493) | more than 7 years ago | (#17427902)

Am I wrong in thinking this is just another implementation of systrace?

http://www.citi.umich.edu/u/provos/systrace/ [umich.edu]

Re:Sounds like systrace (1)

napir (20855) | more than 7 years ago | (#17428592)

Yes. What does this have in common with systrace, other than that it's software and has something to do with security or reliability or something?

Don't need it...already have it.... (1)

AetherBurner (670629) | more than 7 years ago | (#17427926)

It is called Windows XP. When I kill it on a regular, daily basis, it Dies Hard.

Anyone who professes to be involved (0)

Anonymous Coward | more than 7 years ago | (#17427942)

with the computer software industry and still confuses the word "hacker" with someone engaged in bad behavior has to be considered clueless.

Re:Anyone who professes to be involved (1)

westlake (615356) | more than 7 years ago | (#17428796)

with [in] computer software industry [who] still confuses the word "hacker" with someone engaged in bad behavior has to be considered clueless.

Clueless is thinking you can reclaim the meaning of a word once a new definition becomes common usage in the larger world.

Speed being privileged? In these times? (0)

Anonymous Coward | more than 7 years ago | (#17428128)

Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows" which are exploited by hackers.

...Riiiight. Programmers are certainly privileging speed over security. That students are nowadays being taught that speed is the last thing you should care about, and that managed languages and so forth are inexplicably popular, is of course completely irrelevant. As is the combined loss of efficiency from many programs running at once, which causes things to run far slower than they have any good reason to...

I'm sure that the buffer overflows certainly aren't caused by programmers just not realising that they're there. There's no way programmers are that absent-minded.

Lots of memory available? (4, Interesting)

NorbrookC (674063) | more than 7 years ago | (#17428196)

In reading this article, I started to wonder a lot about this. Writing to conserve memory is a bad thing? I will say that I haven't noticed that in most software, regardless of whether it's OSS or closed-source. If anything, there seems to be a variation of Parkinson's Law in effect. Yes, computers these days have a lot more memory available; however, the number of applications and the size demands of each application have grown almost in lock-step with it. 15 or so years ago, you had one OS and one application running - maybe, if you were lucky or were running TSR apps, two or three. These days, the OS takes up a hefty chunk, and it's not uncommon to see 8 or 9 (if not more) applications running at once. What they all seem to have in common is that they assume they have access to all the RAM, or as much of it as they can grab.

I have to wonder if he's actually looked at things these days. I don't see where programming (properly done) to conserve memory is a bad thing. If anything, it seems that few are actually doing it.

Re:Lots of memory available? (1)

Jeremi (14640) | more than 7 years ago | (#17428666)

What they all seem to have in common is that they assume they have
access to all the RAM, or as much of it as they can grab.


They don't assume "access to as much RAM as they can grab", they assume "access to as much RAM as they need". Given the presence of gigabyte RAM modules, virtual memory, and near-terabyte hard drives, this is usually a reasonable assumption.


I have to wonder if he's actually looked at things these days. I don't see where programming (properly done) to conserve memory is a bad thing. If anything, it
seems that few are actually doing it


Certainly one shouldn't gratuitously waste memory, but it's possible to go too far the other way too... you can become so wrapped up in reducing memory usage that the other aspects of your program (e.g. runtime performance, code simplicity, code correctness, or maintainability) suffer. I'd much rather have a program use a lot of memory and work well than one that uses a teensy amount of memory but crashes, or doesn't get the job done. Memory is dirt-cheap these days, and if you've got it you might as well put it to use.

Re:Lots of memory available? (1)

NorbrookC (674063) | more than 7 years ago | (#17428876)

Memory is dirt-cheap these days, and if you've got it you might as well put it to use.

You have an interesting definition of "dirt cheap". Doing a quick check, 1GB of RAM is running around $175-$200. Admittedly, that's a lot cheaper than 12 to 15 years ago, when it was averaging $25 a MB, but I don't consider that "dirt cheap." The problem, as you pointed out, is that they grab "as much as they need", or more correctly, as much as the developer(s) think they need. That's fine on an isolated system, where it's the only application running - and developers' computers usually have more power than most people's, so it works there. It is not true in real life. I get a little fed up every time I get told to throw hardware at a problem. I'm running a computer that's more than an order of magnitude more powerful than the one I had a few years ago, and I'm still running out of RAM.

Re:Lots of memory available? (1)

ip_fired (730445) | more than 7 years ago | (#17429060)

Don't know where you buy memory, but it looks like it's $100 for a GB of RAM (at newegg.com).

Look, if you want to run RAM hungry apps, you need to either purchase more memory, or open fewer apps at once. Or, I guess you could go back to using the apps that you were using a few years ago. I'm sure they'll run with the same, small memory footprint that you want them to.

Re:Lots of memory available? (1)

Jekler (626699) | more than 7 years ago | (#17429234)

I think you're rightfully fed up with throwing hardware at a problem. Hardware isn't as cheap and easy to come by as some developers believe. Waste can be seen if you look at things like the ATI Catalyst Control Center (CCC). It occupies 60-70 MB of memory persistently. Am I to believe that the CCC is several times more complex and contains several times more data than the entirety of the Windows 3.1 operating system? (obvious cracks about Windows aside)

Re:Lots of memory available? (1)

laffer1 (701823) | more than 7 years ago | (#17428716)

As an open source developer and college student, I can clarify the problem. We are taught that memory is cheap and there will always be enough. Of course that is stupid. Anyone who uses Firefox, GNOME, KDE or Vista can tell you that modern software is using way too much RAM. Granted, three of those products are trying to work on the problem a bit. In the case of Microsoft, it helps them sell new PCs which then ship with new versions of their software.

If you want to solve this problem, professors need to teach that there is a tradeoff between performance and security. There are not unlimited resources to work with, and yet we must try to maintain a reasonable level of security. Then again, the age of the professor is important. Some like C++ or Java while others live only by .NET. A few I've had love Linux or BSD. Their tastes seem to control what's taught in class.

Old News (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17428648)

Methodologies for writing secure code have been known for decades. A far more interesting article would be one that attempts to discover or explain why certain engineers and organizations refuse to use them, or, for that matter, why they refuse to learn them in the first place.

Note: there are new breeds of software developers who do NOT buy into the old-school lines of bullshit that have brought us where we are. Hopefully, these smarter heads will prevail as we move forward.

Randomness. Nooooo! (2, Insightful)

istartedi (132515) | more than 7 years ago | (#17429014)

The worst bugs are the ones that are hard to reproduce. In fact, when faced with a bug that's difficult to reproduce, I've been known to quip "yet another unintentional random number generator". The suggestion that they're going to apply a pseudo-fix that involves random allocations raises all kinds of red flags. I'd much rather have fine-grained control over which sections of code are allowed to access which sections of memory, and be able to track which sections of code are accessing a chunk of memory. I'd much rather have strict enforcement of a non-execute bit on memory that's only supposed to contain data (there is some support for this already). Introducing randomness into memory allocation? Worst. Idea. Ever. It's like throwing in the towel, and if they put that in at low levels in system libs and things like that, we're screwed in terms of ever being able to *really* fix the problem. If their compiler is going to link against an allocator that has this capability, I hope they provide the ability to disable it.

has been done before (1)

oohshiny (998054) | more than 7 years ago | (#17429032)

These techniques are old hat: several malloc implementations offer randomization, and ElectricFence finds pointer errors by spreading out and aligning allocations across virtual memory.

In practice, however, a decent set of test cases together with valgrind will make any of those runtime gymnastics unnecessary.