
The Lessons of Software Monoculture

michael posted more than 9 years ago | from the penicillin dept.

Microsoft 585

digitalsurgeon writes "SD Times has a story by Jeff Duntemann where he explains 'software monoculture' and why Microsoft's products are known for security problems. Like many Microsoft enthusiasts, he claims that it's the popularity and market share of Microsoft's products that are responsible, and he notes that the problem is largely with C/C++, mostly because of buffer overflow problems."


FP (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10752403)

first post! kekelar2000

I'll take all the blame (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10752404)

Aqua Seafoam Shame

GNAA OWNS YOU (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10752406)

Nath0rn loves the GNAA

C# (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10752408)

Wasn't this why C# was created?

Re:C# (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10752599)

No, C# was made so Microsoft could make money off Java. By changing the name and some keywords, they can market the next OO language to a bunch of people who never learned C++.

Popularity not the problem. (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10752409)

All that "popularity" can do is emphasize the problems that are already there. Ultimately, it's Microsoft's fault for letting the massive number of bugs in to begin with.

Re:Popularity not the problem. (0)

Anonymous Coward | more than 9 years ago | (#10752429)

Perhaps you could RTFA and then refute specific points rather than reading the blurb and making a blanket statement with nothing to support it other than your reflexive thinking.

Re:Popularity not the problem. (0)

Anonymous Coward | more than 9 years ago | (#10752441)

How do you know he's not just making a joke? How do I know you're not just making a joke? What's the point of these discussions?

Re:Popularity not the problem. (0)

Anonymous Coward | more than 9 years ago | (#10752495)

How do I know you are not making a joke about the other people making a joke?

Re:Popularity not the problem. (1, Funny)

Anonymous Coward | more than 9 years ago | (#10752497)

How do I know any of you actually exist? How do I know I exist? Fuck, there goes sleeping for the night.

Re:Popularity not the problem. (5, Insightful)

Anonymous Coward | more than 9 years ago | (#10752519)

It's odd to refute specific points of the article when its basic premise is flawed, but the one that applies is "all software has bugs". This is a defeatist attitude that is contradicted by the existence of formal methods for proving a piece of software to be bug-free, and even of automatic theorem provers for showing software to be bug-free (such as ACL2). This is the part that I was complaining about, and it is fair to criticise that without having to go into the finer points of the rest of the article.

To further expound on my original complaint, the article argues that Microsoft's bad reputation is due to the popularity of its software, but this is only valid if it is impossible to make software better than Microsoft's. The article seems to lean this way by stating that Microsoft has some of the smartest developers around working for it, but having the smartest developers doesn't mean that it produces the best code. Microsoft has earned its bad reputation by allowing so many bugs into software as critical as an operating system.

Forth post! (-1, Offtopic)

Philosomatographer (744211) | more than 9 years ago | (#10752414)

Damn, I had to try

managed code (3, Interesting)

MoFoQ (584566) | more than 9 years ago | (#10752415)

I thought that's why Microsoft was pushing for "managed code" with the .NET framework. Though I think it's somewhat ripping the idea(s) from Sun's Java. But I'm sure even with .NET, there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.

Re:managed code (0)

Anonymous Coward | more than 9 years ago | (#10752436)

Except that the CLI doesn't solve this problem, it just makes it avoidable (which it already was to begin with). A developer can still write code to do pointer arithmetic. BTW, what kind of brain damaged designer allows for pointer arithmetic in a garbage collected language?

Re:managed code (4, Informative)

omicronish (750174) | more than 9 years ago | (#10752484)

Except that the CLI doesn't solve this problem, it just makes it avoidable (which it already was to begin with). A developer can still write code to do pointer arithmetic. BTW, what kind of brain damaged designer allows for pointer arithmetic in a garbage collected language?

Pointer arithmetic automatically makes the code unsafe (you actually use the 'unsafe' keyword in C#), and you have to compile it with an /unsafe switch. Resulting binaries are not verifiable by .NET, and you can prevent unsafe code from executing via code security. I can't run C# code that uses pointer arithmetic off a network share because of this.

Re:managed code (0)

Anonymous Coward | more than 9 years ago | (#10752504)

A designer who cares about interoperability? Duh!

easy... (1)

MoFoQ (584566) | more than 9 years ago | (#10752509)

Microsoft

Just think: if code were perfect, there would be no need for upgrades, a valuable source of revenue.

Re:managed code (5, Insightful)

Anonymous Coward | more than 9 years ago | (#10752540)

BTW, what kind of brain damaged designer allows for pointer arithmetic in a garbage collected language?

Umm, one who knows that it is required for proper interoperability with existing libraries? One who knows more about language design than you?

The CLI actually isn't a "garbage collected language". First, it isn't a language - it is a language infrastructure (the LI in CLI). Second, garbage collection is available to the languages, but not required. It is a complete virtual machine, and straight C/C++ code ports to it just fine, including all the buffer overruns.

However, there is a convention for "safe" programming. If you follow the convention, the assembly loader can verify that there are no buffer overruns or similar problems in your program. The price you pay is access to low-level constructs such as pointers, since their use cannot be verified.

Loading assemblies with unverifiable code is a privilege, which allows security to be maintained.

I think it all boils down to: the decision was the right one, it was well implemented, so stop talking about stuff you know nothing about.

Re:managed code (5, Insightful)

omicronish (750174) | more than 9 years ago | (#10752459)

I thought that's why Microsoft was pushing for "managed code" with the .NET framework. Though I think it's somewhat ripping the idea(s) from Sun's Java. But I'm sure even with .NET, there will still be buffer overflows. Well... the GDI+ exploit is one prime example of that fact.

An interesting distinction to make is that .NET code itself isn't vulnerable to buffer overflows. GDI+ is an unmanaged component (likely written in C++), and is vulnerable. The problem is that .NET exposes GDI+ functionality through its graphics classes, and since those classes are part of the .NET framework, .NET itself essentially becomes vulnerable to buffer overflows.

Microsoft appears to be shifting its APIs to the managed world, either as wrappers to legacy APIs, or new APIs built completely in the .NET world (or both, as is the case with WinFX). So to expand on your post: as long as legacy code is used, yeah, buffer overflows will still be possible, but by shifting more code to the managed world, the likelihood of such vulnerabilities will hopefully diminish.

Re:managed code (2, Interesting)

MoFoQ (584566) | more than 9 years ago | (#10752612)

including drivers (Longhorn will be .NET based).

One major disadvantage is that performance will take a hit. Now, if you make drivers .NET based, the performance hit will be multiplied.

And one more thing: managed code is fine, but not having the old samples/examples updated with the new managed code is annoying. An example of this can be seen in the Oct. 2004 update for the DirectX 9.0 SDK; the C# examples use the older deprecated code which has no wrapper classes (and thus will get a compile error). (A way to work around this is to use the older Summer 2004 or earlier DLLs as the reference instead of the new ones... but then that raises the question: why bother with Oct. 2004?)

Re:managed code (3, Informative)

Tablizer (95088) | more than 9 years ago | (#10752647)

It seems to be a fundamental battle between speed and protection. As time goes on and processors get faster, things should shift toward the protection side.

However, some applications, such as games, may still require being close to the metal in order to get competitive speed. Game buyers may not know about extra protection, but they will balk at speed issues. Thus, it still may be better business for some industries to choose speed over safety.

However, if the option for such exposure is available, then viruses and other malware may still be able to take advantage of it somehow. The trick is to find a way to allow speed-intensive apps without creating back doors. Maybe have a toggle switch on the front of the CPU box with two settings:

* Speed
* Safety

Just an idea (that probably needs work).

jasabella from #winprog (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10752417)

TFA (-1)

saned (736423) | more than 9 years ago | (#10752426)

Before it gets /.ed...
Last summer, much was made of Slate author Paul Boutin’s harangue in his June 30, 2004 “Webhead” column. Boutin basically told his readers to drop Microsoft’s Internet Explorer like a hot rock and move to Mozilla’s Firefox, because of the increasingly nasty security holes turning up in IE. Problem is, Slate is owned by Microsoft.

Ouch.

It really has gotten that bad, and it’s easy to be left with the impression that Microsoft creates lousy software, rotten with bugs that allow the black hats to break into our networks and bring the global Internet to its knees. The anti-Microsoft tomato tossers insist that if only Microsoft cleaned up its products, we’d be rid of the security holes and the black hats who thrive on them.

It’s not that simple. Microsoft has some of the best programmers in the world working on its products, and books like “Writing Solid Code” from the Microsoft developer culture are seen as classics that belong on every programmer’s shelf. Nonetheless, Microsoft software has bugs; all software has bugs, which is a crucial point that I’ll return to later.

What we have to understand is that our current problems with Internet Explorer have less to do with bugs than with success. When a product has 90% of a huge worldwide market, there will be problems. It doesn’t matter what the product is, and it matters only a little how good it is. What matters is that Internet Explorer is virtually the sole organism in an ecosystem that the world’s technology industry depends on. When IE catches a cold, the networked world gets pneumonia.

This metaphor from biology is called software monoculture. Ubiquitous high-bandwidth communication has turned the world of computing from countless independent islands into a single global ecosystem. The fewer distinct organisms at work within this ecosystem, the easier it is for a bug—any bug—to become a threat to the health of the whole.

Worms and viruses that depend on these bugs replicate and travel automatically, and unless they can assume that the next system is identical (bugs and all) to the one they’re leaving, they can’t propagate as quickly nor do as much damage. If only one in 20 systems allowed such worms and viruses to take hold (rather than nine out of 10) it’s doubtful that they could ever achieve any kind of critical mass, and would be exterminated before they got too far.

Software monoculture happens for a lot of reasons, only a few of them due to Microsoft’s sales and marketing practices. In the home market, nontechnical people see safety in numbers: They want to be part of a crowd so that when something goes wrong, help will be nearby, among family, friends, or a local user group.

In corporate IT, monoculture happens because IT doesn’t want to support diversity in a software ecosystem. Supporting multiple technologies costs way more than supporting only one, so IT prefers to pick a technology and force its use everywhere. Both of these issues are the result of free choices made for valid reasons. Monoculture is the result of genuine needs. Technological diversity may be good, but it costs, in dollars and in effort.

As if that weren’t bad enough, there is another kind of software monoculture haunting us, far below the level of individual products—down, in fact, at the level of the bugs themselves.

If you give reports of recently discovered security holes in all major products (not merely Microsoft’s) a very close read, you’ll find a peculiar similarity in the bugs themselves. Most of them are “buffer overflow exploits,” and these are almost entirely due to the shortcomings of a single programming language: C/C++. (C and C++ are really the same language at the core, where these sorts of bugs happen.) Virtually all software written in the United States is written in C/C++. This includes both Windows and Linux, IE and Firefox. A recent exploit turned up in Firefox that was almost identical to one in IE. The standard C libraries have more market share than even Microsoft.

This makes the obvious solution to software monoculture—switching to something other than Microsoft—problematic. Individual consumers and individual corporate shops can switch to a minority product, like the Mac (for consumers) and open-source tools like Linux, Apache, Evolution, and Mozilla for the corporate enterprise.

But then what happens if Mozilla or the Mac get too popular? They’re all written in C, and they all have those same bugs. Once a product’s market share creeps up toward 50% or so, the effects of monoculture take hold again. You can run from IE, but if too many people run with you (and to the same destination) you won’t be hiding for long.

Putting today’s security hole debacle in perspective requires this dual understanding of software monoculture. It’s not just in the apps, but in our developer tools as well. Microsoft is almost incidental. It sounds hopeless, and if what we want to do is fix software monoculture itself, well, it is hopeless. Standards are good, and you’ll pry C and C++ out of our programmers’ cold, dead hands.

No, the real lesson of software monoculture is that, like it or not, we’re all in this together. It’s not really about bugs, nor about Microsoft, nor about software at all. There has always been software monoculture (am I the only one here old enough to remember when OS/360 ruled the world? Or, hey, PC DOS?) but it took ubiquitous unmanaged high-bandwidth communication to turn an arcane sort of string buffer overflow into a global threat. Ducking the unavoidable effects of software monoculture really means going back to the drawing board and managing the communications that make one app’s problem everybody’s problem. Static packet filtering is clearly not enough. Even stateful packet inspection may not be enough. The real answer may not have been invented yet—but if we keep looking in the wrong places, or blaming the wrong people (Microsoft or anybody else) the black hats will keep on lighting fires, and the networked world will continue to burn.
Jeff Duntemann is a programmer, technology editor and author, and was the founder of PC Techniques/Visual Developer magazine, as well as of Coriolis Books.


Blaming the language... (3, Informative)

The Hobo (783784) | more than 9 years ago | (#10752428)

Glancing over a book called "Writing Secure Code" by Howard and LeBlanc, from the Microsoft Press and that touts the following quote on the front cover:

"Required reading at Microsoft - Bill Gates"

Makes me wonder if blaming the language is easier than admitting the possibility that the code is sloppier than it should be. The book recommends many ways to avoid buffer overflows and such.

Re:Blaming the language... (3, Informative)

mind21_98 (18647) | more than 9 years ago | (#10752449)

Complex systems are difficult to debug. Simple as that. With something that has as many lines of code as Windows and IE, it's impossible not to miss at least one bug. Sure, a change in policies might help, but you can never get rid of bugs. That said, Firefox does seem to have fewer problems.

Re:Blaming the language... (1, Interesting)

Anonymous Coward | more than 9 years ago | (#10752523)

Which is exactly why security policy should be implemented in a small, simple kernel.

Then any apps that make requests through this kernel get the security benefits.

Re:Blaming the language... (1)

KingPunk (800195) | more than 9 years ago | (#10752493)

you're right! it's not the code, but the coders.
quality vs. features... it's the biggest issue.
that's one of the most major issues with oss projects too!
and time-driven codeweaving doesn't exactly add padding to that factor of sloppy code.
not that the language isn't at fault; it is, by default, quite old.

if it were my car, it'd be time to retire it!
--kingpunk

Re:Blaming the language... (5, Insightful)

Moraelin (679338) | more than 9 years ago | (#10752529)

The problem is that nobody writes perfect code.

Yes, we're all nerds, and we're all arrogant. We all like to act as if _our_ code is perfect, while everyone else is a clueless monkey writing bad code. _Our_ bugs are few and minor, if they exist at all, while theirs are unforgivable and should warrant a death sentence. Or at the very least kicking out of the job and if possible out of the industry altogether.

The truth however is that there's an average number of bugs per thousand lines of code, and in spite of all the best practices and cool languages it's been actually _increasing_ lately.

Partially because problems get larger and larger, increasing internal communication problems and making it harder to keep in mind what every function call does. ("Oh? You mean _I_ was supposed to check that parameter's range before passing it to you?")

This becomes even more so when some unfortunate soul has to maintain someone else's mountain of code. They're never even given the time to learn what everything does and where it is, but are supposed to make changes until yesterday if possible. It's damn easy to miss something, like that extra parameter being a buffer length, except it was calculated somewhere else. Or even hard-coded because the original coder assumed that "highMagic(buf, '/:.', someData, 80)" should be obvious for everyone.

And partially because of the increasing aggressiveness of snake oil salesmen. Every year more and more baroque frameworks are sold, which are supposed to make even untrained monkeys able to write secure, performant code. They don't. But clueless PHBs and beancounters buy them, and then actually hire untrained monkeys because they're cheap. And code quality shows it.

But either way, everyone has their own X bugs per 1000 lines of code, after testing and debugging. You may be the greatest coder to ever walk the Earth, and you'll still have your X. It might be smaller than someone else's X, but it exists.

And when you have a mountain of code of a few tens of _millions_ of lines of code, even if you had God's own coding practices and review practices, and got that X down to 0.1 errors per 1000 lines of code... it still will mean some thousands of bugs lurking in there.

Re:Blaming the language... (1, Insightful)

Anonymous Coward | more than 9 years ago | (#10752634)

"And when you have a mountain of code of a few tens of _millions_ of lines of code"

Isn't that the real problem? No program should include tens of millions of lines of code. That's the whole point of developing software in layers.

Not just C/C++ (4, Interesting)

Dancin_Santa (265275) | more than 9 years ago | (#10752434)

Any compiled language is susceptible to security holes. The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out. Things like memory offsets, code areas, data areas, and all these esoteric issues that need to be dealt with are simply left to the compiler to decide.

Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU. Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.

Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.
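
For illustration, a minimal POSIX sketch of that advisory behavior (the lock-file path is made up): the lock only restrains processes that also call flock(); a process that skips the call can touch the file anyway.

#include <stdio.h>
#include <fcntl.h>      /* open() */
#include <sys/file.h>   /* flock() */
#include <unistd.h>     /* close() */

int main(void)
{
    int fd = open("/tmp/demo.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Advisory: only cooperating processes that also call flock() block here. */
    if (flock(fd, LOCK_EX) == 0) {
        printf("lock held; doing work\n");
        flock(fd, LOCK_UN);
    }
    close(fd);
    return 0;
}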

Re:Not just C/C++ (1)

Drantin (569921) | more than 9 years ago | (#10752458)

Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU.


Huh? Do you mean something like Java, where you have a virtual machine; something like Bash scripting, which doesn't do what you're talking about; or are you merely advocating writing in ASM directly, which AFAIK is the only language that "implement[s] all code as either line-by-line interpretation or in bytecode form"?

Re:Not just C/C++ (1)

Drantin (569921) | more than 9 years ago | (#10752476)

or since this topic seems to be mostly about MS consider .net along with java for the virtual machine category...

Re:Not just C/C++ (4, Insightful)

TheLink (130905) | more than 9 years ago | (#10752506)

All languages are susceptible to security problems.

However C and C++ (and a few other languages) are susceptible to buffer overflows - where it is common for bugs to cause "execution of arbitrary code of the attacker's choice" - this is BAD.

There are saner languages where such things aren't as common. While Lisp can be compiled, AFAIK it is not inherently susceptible to buffer overflows. OCaml isn't susceptible to buffer overflows either and is in the class of C and C++ performance-wise.

"arbitrary code of the attacker's choice" can still be executed in such languages, just at a higher level = e.g. SQL Injection. Or "shell/script".

However one can avoid "SQL injection" with minimal performance AND programmer workload impact by enforcing saner interfaces e.g. prepared statements, bind variables etc.
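
For instance, a minimal sketch with SQLite's C API (assuming libsqlite3; the users table and column are made up). The untrusted value is bound as data, never spliced into the SQL text:

#include <stdio.h>
#include <sqlite3.h>

/* Hypothetical lookup: 'users' and 'name' are invented for illustration. */
int find_user(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt;
    /* The ? placeholder keeps user input out of the SQL text itself. */
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    int found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}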

How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?

Re:Not just C/C++ (1)

nihilogos (87025) | more than 9 years ago | (#10752596)

How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?.

In the case of C++ you'd use the standard library containers, which you should be using anyway, so I don't understand why people go on about buffer overflows in C++.

In the case of C you could, for example, use strlen().
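
For instance, a minimal sketch of the C approach, checking lengths before copying (buffer size and input are made up):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[16];
    const char *input = "some untrusted input that is far too long";

    /* Check the length before copying instead of trusting strcpy(). */
    if (strlen(input) < sizeof buf)
        strcpy(buf, input);
    else
        /* snprintf() truncates at the buffer size instead of overflowing. */
        snprintf(buf, sizeof buf, "%s", input);

    printf("%s\n", buf);
    return 0;
}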

I haven't used C++ in a long time, (0)

Anonymous Coward | more than 9 years ago | (#10752640)

so please correct me if I'm mistaken, but IIRC vector<T> and the like aren't bounds-checked, are they? If they aren't, then C++ code using the STL containers could be just as susceptible to buffer overflows as C.
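
For what it's worth, operator[] on the standard containers is indeed unchecked, but at() is bounds-checked and throws; a minimal sketch:

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v(4);

    // v[10] = 1;        // operator[] does no bounds check: undefined behavior
    try {
        v.at(10) = 1;    // at() checks the index and throws instead of overflowing
    } catch (const std::out_of_range &e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    return 0;
}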

Re:Not just C/C++ (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10752649)

Methinks you have never done any serious coding in C.

Consider, for example, the following valid bit of C code:

#include <stdio.h>
#include <string.h>

int main()
{
    char *a = "abcdefg";  /* string literal: 8 bytes (indices 0-7), stored read-only */
    a[8] = 'a';           /* writes out of bounds, and into a read-only literal */
    printf("%d\n", (int)strlen(a));
    return 0;
}

This even compiles on gcc with -Wall without any errors or warnings, yet it segfaults every time you run it.

Re:Not just C/C++ (0)

Anonymous Coward | more than 9 years ago | (#10752676)

This does not compile under VC.Net.

The original string a is designated const, so the attempt to change it on the second line causes the compiler to issue an error. If you had wanted to do this in VC, you'd have to malloc the string or declare it as an array.

VC better than gcc? Say it ain't so!

Re:Not just C/C++ (1)

spongman (182339) | more than 9 years ago | (#10752602)

it should be said that prepared statements and parameter binding usually increase performance, especially if your DBMS is able to cache the query.

Re:Not just C/C++ (0)

Anonymous Coward | more than 9 years ago | (#10752624)

How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?

You'd take quite a performance hit, and have to change the way you code... but you could easily create a class in C++ for each memory allocation, and instead of directly writing to memory (e.g. *mem, mem[]), you would use memory.write(address, value) or memory.read(address). Have these functions automatically ignore out-of-bounds accesses, maybe even print an alert to stdout.
You could do something similar with C and structs with an API set.

What I do, though, is just allocate all of my memory blocks in powers of two, regardless of how much memory I need. Then I can perform a simple bitwise AND on the offset before accessing it.
ex: unsigned char mem[64];
mem[x & 63] = value;
If you need pointer arithmetic, just compare against the end of the block:
ex: unsigned char mem[64];
unsigned char *ptr = mem + x;
if ((ptr + y) < mem + 64) *(ptr + y) = value;

Re:Not just C/C++ (4, Insightful)

Brandybuck (704397) | more than 9 years ago | (#10752635)

How does one do the same thing with respect to buffer overflows and C or C++, AND still have things look and work like C or C++?

This is borderline troll material! Would you stop beating that dead horse? You avoid buffer overflows in C by checking the lengths of your buffers. You stop using C strings. You use container libraries. As for C++, you avoid them by using the included string and container classes.
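
For instance, a minimal sketch of the C++ route, where std::string grows as needed instead of overflowing a fixed buffer:

#include <iostream>
#include <string>

int main()
{
    // A fixed char[16] would overflow here; std::string just grows.
    std::string s = "user:";
    s += "a very long untrusted value that would never fit in a small buffer";
    std::cout << s.size() << " bytes stored, no overflow\n";
    return 0;
}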

Re:Not just C/C++ (4, Insightful)

themo0c0w (594693) | more than 9 years ago | (#10752565)

The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out.
Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.

So is being distanced from the hardware good or bad? If anything, interpreted languages put the programmer more distant from the operating hardware.

The problem with compiled languages like C(++) is that you DO have to deal with memory management directly, thus creating the opportunity for buffer overflow exploits. However, all languages are vulnerable to input verification problems, of which buffer overflows are a subset. The problem is sloppy programmers, not bad languages, compiled or otherwise.

Also, no offense, but compilers are pretty damn smart pieces of software. Almost all security problems arise from the application software, not the compiler/interpreter.

Furthermore, the line between compilation and interpretation is not particularly sharp these days anyway, especially when dealing with VMs. You "compile" Java into bytecodes, which are executed by the Java VM, which in turn compiles and executes native code for the host machine. Conversely, many processors perform on-the-fly "translation" of instructions from one ISA to another.

Re:Not just C/C++ (2, Informative)

ikewillis (586793) | more than 9 years ago | (#10752566)

Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.

Wrong. sparcv9, for example, implements a non-executable user stack by default. In POSIX, all memory from the heap is pre-marked non-executable (on architectures that support page protections) unless it is explicitly set by the program to be executable (for example, in JIT compilers) using functions like mprotect(). In Windows, this is implemented as a flag passed to HeapAlloc().

The interface design and OS support are already there; what isn't there is people buying non-IA32 CPUs in large numbers.
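
For illustration, a minimal Linux-flavored sketch of that behavior with mmap() and mprotect(): anonymous memory starts writable but not executable, and a JIT flips the page before jumping into it.

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE);

    /* Anonymous pages come back readable/writable but NOT executable. */
    void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    unsigned char *page = (unsigned char *)mem;

    /* ... a JIT would emit machine code into 'page' here ... */

    /* Flip the page to read+execute before jumping into the generated code. */
    if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }
    printf("page is now executable and no longer writable\n");
    munmap(page, len);
    return 0;
}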

Re:Not just C/C++ (3, Interesting)

Foolhardy (664051) | more than 9 years ago | (#10752584)

Any compiled language is susceptible to security holes. The problem is that the process of turning source code into binary code is opaque to the developer. He puts some code through the compiler and some binary object code pops out. Things like memory offsets, code areas, data areas, and all these esoteric issues that need to be dealt with are simply left to the compiler to decide.
Are you saying that all high-level languages that can compile use a process of producing machine language so opaque that the developers cannot produce predictable, consistent and deterministic code without an extreme amount of effort?

Any self-respecting language will produce a binary that does what the source code says it should do, in exact detail. As for complexity, or how much detail you get in that control, that depends on the language. C and C++ are languages that give you some of the strongest control. Unfortunately, this amount of control can let you hang yourself if you aren't careful. Use the best language for the problem. (They aren't all the same.)
Unlike interpreted languages which for the most part implement all code as either line-by-line interpretation or in bytecode form, compiled languages talk directly to the CPU. Interpreted environments have the additional benefit that they run inside of a sandbox that is abstracted from the hardware by some large degree. Because of this, the running code never actually touches the CPU directly.
Protected-memory CPUs can provide every bit as much protection for the rest of the system as a VM can; it's hardware VM support for memory. That's the point of protected memory. Also, many VMs provide an on-demand compiler that produces native code so the program can execute directly on the CPU, because it's faster. Any limits imposed on the language's environment can be imposed without a VM.
Also, user-mode processes never talk to any hardware but the CPU and memory, as allocated by the OS.

The IBM AS/400 has no protected memory and does not need VMs to provide system security because there are only two ways to get binary code onto the system: 1. From a trusted source or 2. from a trusted compiler that only produces code that adheres to security regulations.
Things like the "no-execute" bit on modern CPUs provide an additional layer of security and prevent purposely damaged code from running directly on the CPU. However, until operating systems implement this in their own code, any application that does not want to adhere to the no-exec flag does not have to. This is like flock on Unix which only sets a file locking flag which applications are expected to obey rather than true file locking as implemented on other systems.
The no-execute bit provides hardware negation of a certain type of attack. It does not protect against corruption of program memory, which can lead to crashes and other types of vulns. Yes, like many things, it only works effectively when it's used correctly. The most common form of buffer overrun that can lead to code execution is on the stack. Unless the compiler (or the assembly) produces code that needs the stack to be executable, the operating system can safely mark all thread stacks as no-execute. Although you can move the stack to some private section of memory, the OS is usually aware of where the thread's stack is, because it's needed to start the thread and it isn't normally moved. Windows XP SP2 does this by default for all threads in system service processes when the NX bit is supported, and upon request for programs not on a blacklist.

Re:Not just C/C++ (4, Insightful)

Baki (72515) | more than 9 years ago | (#10752590)

Hmm, try putting a web server implemented in shell script on the Internet and see what happens :). Shell scripts are interpreted, but have so many "tricks", such as backtick expansion, variable expansion, etc., that it is virtually impossible to write a safe program with them.

I don't see how program safety has anything to do with being compiled or not. It is just a different class of security holes that you get depending on the language.

Tool (3, Insightful)

radaway (741780) | more than 9 years ago | (#10752437)

This is idiotic... The language is simply a tool. If you don't know how to use a hammer without crushing your finger, use screws; or don't, and stop blaming the hammer for losing your pinky.

Re:Tool (5, Funny)

Concerned Onlooker (473481) | more than 9 years ago | (#10752479)

and stop blaming the hammer for losing your pinky.

That's kind of like ending up with a "null pointer" eh?

Re:Tool (0, Flamebait)

mjfgates (150958) | more than 9 years ago | (#10752551)

Y'know, a hammer with a warped handle *will* bite your thumb every damn' time. Likewise a screwdriver with a trashed head will slip, chew up the screws, etc.

C++ tries, really tries hard, to cause buffer overflows.

Sometimes you gotta take a look around. (4, Insightful)

Sheetrock (152993) | more than 9 years ago | (#10752443)

This brings up a complaint I've got with the way the industry works nowadays, monoculture being something many large companies seem to share.

As a programmer, I feel the continual march of progress in computing has been hampered as of late because of a major misconception in some segments of the software industry. Some would argue that the process of refinement by iterative design, which is the subject of many texts in the field -- extreme programming being the most recent -- demonstrates that applying the theory of evolution to coding is the most effective model of program 'design'.

But this is erroneous. The problem is that while extremely negative traits are usually stripped away in this model, negative traits that do not (metaphorically) explicitly interfere with life up until reproduction often remain. Additionally, traits that would be extremely beneficial that are not explicitly necessary for survival fail to come to light. Our ability to think and reason was not the product of evolution, but was deliberately chosen for us. Perhaps this is a thought that should again be applied to the creation of software.

It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start. One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example. It makes little sense from a business perspective as well; it costs up to ten times as much to fix an error by the time it hits the market as it would to catch it during the design. Unfortunately, as much of this cost is borne by consumers and not the companies designing buggy products, it's harder to make the case for proper software engineering -- especially in an environment like Microsoft where one hand may not often be aware of what the other is doing.

Don't be fooled into thinking open source is free of the 'monoculture' mindset, either. While it is perhaps in a better position to take advantage of vibrant and daring new concepts, because of the lack of need to meet a marketing deadline or profitability requirement, the types of holy wars one might have noticed between KDE/GNOME or Free Software/Open Source demonstrate that there are at least some within every community who feel they hold a monopoly on wisdom.

Huh? (3, Insightful)

Anonymous Coward | more than 9 years ago | (#10752473)

Our ability to think and reason was not the product of evolution, but was deliberately chosen for us.
A statement on the origins of thought and reason founded on the use of neither...Interesting!

Re:Sometimes you gotta take a look around. (0)

Anonymous Coward | more than 9 years ago | (#10752490)

the rock-solid experience of OpenBSD for an example

Seems like there are a lot of bug reports in OpenBSD too. Clean up your own household before crapping too much in others'.

Re:Sometimes you gotta take a look around. (0)

Anonymous Coward | more than 9 years ago | (#10752573)

"Our ability to think and reason was not the product of evolution, but was deliberately chosen for us."

FYI: Don't appeal to religion if you want anybody to listen to you. You just destroyed your legitimacy by special pleading.

Re:Sometimes you gotta take a look around. (2, Interesting)

Baki (72515) | more than 9 years ago | (#10752578)

You will always have evolution at a certain scale. You may "revolutionize" a single program, but you cannot rewrite an operating system from scratch (meaning without even borrowing existing code and libraries, as OpenBSD did heavily: many libraries, gcc, binutils, etc.). If you do, it will take years to "mature", which is also a kind of evolution.

On a somewhat larger scale, within companies you may replace one box with another (running another OS), but you cannot change your complete infrastructure overnight, i.e. replace all network protocols at once. Such changes take years and are a slow and evolutionary process.

Sometimes you have to take a step back, throw away some old cruft and make things fresh and new. However, a certain degree of both evolution and monoculture is unavoidable. If you have a 10000-employee company throwing in new technologies all the time, allowing for too much heterogeneity, you will have a maintenance and system-management nightmare very soon, leading to the collapse of your IT infrastructure.

Re:Sometimes you gotta take a look around. (3, Insightful)

The Musician (65375) | more than 9 years ago | (#10752606)

1) Extreme programming doesn't mean skipping design, it means building only what you need. You're still building that little bit with the same attention to all facets of software engineering.

The point being that when you don't know what you'll eventually have to build, no amount of intelligence, forethought, or design will solve that problem. You build what you know you need, and flow along with changing requirements.

2) Who's to say that the better overall choice is to correct the so-called "negative traits"? There is some cost associated with doing so. If they are important enough, they will get fixed. Maybe (as is often the case) getting something that mostly works makes the users happier than something "properly design[ed] from the start" yet six months later.

(Not to say that design slows down a project; attention to design should and will speed up work. But too much Capital-D Design up front -- before the questions are really explored, and before you have a working version to pound on and gain understanding from -- will end up a losing proposition in the end.)

The blessing and curse of software development is that everything you are doing is necessarily new in some way. If someone has done it before, why would you be writing it again? That combined with the push to solve the difficult problems in software rather than hardware (because software is *easy* to change!?) means each project is an exploration.

And to the extent that the exploration is into more and more unknown territory, you need the steps of iterative and "agile" processes to get yourself a good feedback loop into your problem domain.

Otherwise you end up over time and over budget (if it even works at all), because you had a great design for the wrong problem.

Re:Sometimes you gotta take a look around. (1)

Skapare (16644) | more than 9 years ago | (#10752641)

This brings up a complaint I've got with your signature:

Try not. Do or do not, there is no try.
-- Dr. Spock, stardate 2822.3.

Dr. Spock was a baby doctor. Mr. Spock was a character in the Star Trek series and movies. But Yoda spoke those words in "Star Wars: The Empire Strikes Back".

Maybe if you applied the principles of properly designing it from the start, instead of choosing the option of continually hacking at your signature, it wouldn't be so buggy by the time it hits Slashdot. Or I guess one hand wasn't aware of what the other hand was doing.

Re:Sometimes you gotta take a look around. (5, Insightful)

steveha (103154) | more than 9 years ago | (#10752658)

It makes no sense to choose the option of continually hacking at a program until it works as opposed to properly designing it from the start.

There is something to this, I guess. But that's the real trick, isn't it? The problem is that real life isn't like programming class in college.

In class you get an assignment like "write a program that sorts text lines using the quicksort algorithm." This simple statement is a pretty solid specification; it tells you everything you need to know about how to solve the problem. How many features does this project have? As described, exactly one. You might get fancy and add a case-insensitive flag; that's another feature.

In real life, you get a general description of a project, but the project implies dozens to hundreds of features. Your users may not even know exactly what they want. "Make something like the old system, but easier to use." You might spend a great deal of time designing some elaborate system, and then when the users actually see it they might send you back to the drawing board.

So the best approach is generally to try stuff. You might make a demo system that shows how your design will work, and try that out without writing any code. But you might also code up a minimal system that solves some useful subset of the problem, and test that on the users.

Another shining feature of the "useful subset" approach to a project is that if something suddenly changes, and instead of having another month on the project you suddenly have two days, you can ship what you have and it's better than nothing. As I read in an old programming textbook, 80% of the problem solved now is better than 100% of the problem solved six months from now.

Note that even if you are starting with a subset and evolving it towards a finished version, you still need to pay attention to the design of your program. For example, if you can design a clean interface between a "front end" (user interface) and a "back end" (the engine that does the work), then if the users demand a complete overhaul of the UI, it won't take nearly as long as if you had coded up a tangled mess.

One only has to compare the security woes of Microsoft or Linux with the rock-solid experience of OpenBSD for an example.

I'm not sure this is the best example you could have chosen. Linux and *BSD build on the UNIX tradition, and UNIX has had decades of incremental improvements. Some bored students in a computer lab figure out a way to crash the system; oops, fix that. After a few years of that, you hammer out the worst bugs.

But UNIX did start with a decent design, much more secure than the Windows design. Windows was designed for single users who always have admin privileges over the entire computer; it has proven to be impossible to retrofit Windows to make it as secure as it should have been all along. The Microsoft guys would have done well to have studied UNIX a bit more, and implemented some of the security features (even if the initial implementation were little more than a stub). As Henry Spencer said, "Those who do not understand UNIX are compelled to reinvent it. Poorly."

steveha

Does this mean C++ is dying? (2, Funny)

Anonymous Coward | more than 9 years ago | (#10752444)

Can someone confirm this at NetCraft?

The difference is (2, Insightful)

chirayuk (643356) | more than 9 years ago | (#10752446)

...that with .NET, a patch to the framework can fix the buffer overflows (and other bugs) that are discovered, and the benefits will be instantly seen by all applications using it. With C/C++ etc., you need to scan and fix each individual application for bugs. It's easier to fix the runtime than individual apps, because every exploit would generally be exploiting the runtime (as long as it's managed), which would make the runtime very robust. With C/C++, each time you discover a buffer overflow or similar exploit in an app, it doesn't say anything about other apps which might have similar problems.

Re:The difference is (1)

ufnoise (732845) | more than 9 years ago | (#10752516)

If the software was compiled using shared system libraries, you can do the same thing with C/C++ apps. If the flaw is in the program, and not in the "framework", you'd have to redistribute (or patch) the application anyway.

Re:The difference is (0)

Anonymous Coward | more than 9 years ago | (#10752535)

That sounds totally wrong.

With C/C++ you'd scan and fix the .so shared libraries the same way you would a Mono .dll.

I bet even in Microsoft's world, if a .NET thingy is written without sharing code it'll have to be scanned individually, while if a C++ library is shared, fixing it will benefit all programs that use it.

Author's Impartiality (4, Informative)

Anonymous Coward | more than 9 years ago | (#10752455)

...[switch to a] minority product... ...open-source tools like Linux, Apache...

From netcraft:
Apache 67.92%

Sure... Minority Product.

Author obviously isn't the most impartial of writers.

Re:Author's Impartiality (0, Funny)

Anonymous Coward | more than 9 years ago | (#10752574)

NETCRAFT CONFIRMS IT!

2@1time (3, Insightful)

l3v1 (787564) | more than 9 years ago | (#10752460)

[...]popularity and market share of Microsoft's products that are responsible [...] the problem is largely with C/C++ [...]

Yup, that's 2 bullshits in one sentence.

he's right about some things (3, Insightful)

EllynGeek (824747) | more than 9 years ago | (#10752463)

But not many. Just another Microsoft droid spouting the same tired propaganda, and completely devoid of facts. First of all, I don't believe 90% market share, especially not worldwide.

Secondly, its record speaks for itself: Windows, Outlook, and IE are exploited because IT'S SO FREAKING EASY. Sure, you can maybe sort of lock out users from core system functions, but you can't lock out applications from altering core system files. Hello, the Registry! Hello .dll and .vxd! Just visit a Web site and poof! ownz0red. Just leave your winduhs system connected to the Internet, and bam! Instant spam relay. Such a friendly lil OS!

Really dood, you call yourself a programmer; you should know better. Face the facts. If you can.

Re:he's right about some things (0)

Anonymous Coward | more than 9 years ago | (#10752513)

Do the research yourself. MS definitely has 85-90% market share of desktops, with the remainder split between Apple and other operating systems. Servers are a different matter and are not themselves devoid of bug reports and a plethora of patches.

Does anyone outside of high school use the spelling "dood"?

Re:he's right about some things (0)

Anonymous Coward | more than 9 years ago | (#10752514)

But not many. Just another Microsoft droid spouting the same tired propaganda, and completely devoid of facts. First of all, I don't believe 90% market share, especially not worldwide.

Secondly, its record speaks for itself: Windows, Outlook, and IE are exploited because IT'S SO FREAKING EASY. Sure, you can maybe sort of lock out users from core system functions, but you can't lock out applications from altering core system files. Hello, the Registry! Hello .dll and .vxd! Just visit a Web site and poof! ownz0red. Just leave your winduhs system connected to the Internet, and bam! Instant spam relay. Such a friendly lil OS!

Umm, yes, you can prevent applications from altering core system files. Don't run with Administrator privileges, and C:\Windows and HKEY_LOCAL_MACHINE are both read-only to you and to the applications that you run. Perhaps you are the one spouting garbage that is devoid of facts.

Re:he's right about some things (4, Insightful)

0x461FAB0BD7D2 (812236) | more than 9 years ago | (#10752576)

What is ultimately interesting is that if IE were not as popular as it is, the bugs would still exist, and it would still be exploited. The only difference is that it wouldn't have the impact that it does now.

The interesting thing is that C/C++ is not to blame. C and C++ provide as many means to avoid buffer overflows as they do means to create them. But in any software company, getting products out on time takes precedence over good code. That is the problem. The language used only changes which exploits and vulnerabilities are available, not the fact that they exist.

The only way to reduce such security concerns is to change the culture in the software world.

Blaming the language is just an excuse (4, Insightful)

Anonymous Coward | more than 9 years ago | (#10752468)

It's really not that hard to avoid buffer overflows in C/C++. It's not the fault of the language, but of the programmer. Obviously, avoiding buffer overflows is an added thing to think about when coding in C/C++, but I've worked with enough Java programmers to know that no language can compensate for a poor/ignorant programmer.

It's just an excuse, plain and simple.

Re:Blaming the language is just an excuse (1)

Lisandro (799651) | more than 9 years ago | (#10752610)

I was just going to post the same. C/C++ is more prone to buffer overflows, granted - if the language compiles to native code the possibility can't be eliminated entirely, least of all in ones as "low level" as these. But it's not that hard to harden (pun!) C/C++ against overflows: if you're inputting data of any kind, no matter what, check it for validity. Presto!

C# and Java are nicely sandboxed and have many nice features that both C and C++ lack, but all of that comes with a performance hit, however minor. For some projects that hit is not an option, and like the parent said, writing secure C/C++ is not rocket science. At all.
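For instance, a sketch of the check-before-copy discipline, using a hypothetical length-prefixed wire format (the format is made up; the habit is the point):

    #include <cstddef>
    #include <cstring>

    // Hypothetical packet layout: [1-byte length][payload].
    // Validate everything the input claims about itself before using it.
    bool read_field(const unsigned char *packet, size_t packet_len,
                    char *out, size_t out_len) {
        if (packet_len < 1) return false;              // no length byte
        size_t field_len = packet[0];
        if (field_len + 1 > packet_len) return false;  // claims more than sent
        if (field_len >= out_len) return false;        // won't fit, incl. NUL
        memcpy(out, packet + 1, field_len);
        out[field_len] = '\0';
        return true;
    }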

technically (0)

Anonymous Coward | more than 9 years ago | (#10752469)

Buffer overflows are avoidable with good programming practices and good code maintenance. This is what Microsoft failed at. Furthermore, they are a nice juicy security target, so their multitude of mistakes are taken advantage of. Market share is only one part of the equation; one must not overlook "quality".

I disagree... MS is simply too rich, lazy, sloppy (0)

Anonymous Coward | more than 9 years ago | (#10752470)

Microsoft had its chance to make good, stable, reasonably priced OSes, but they took the easy road: quickly pushing products out the door, numerous unethical marketing and fear-marketing programs, and very unethical business practices. They have only themselves to blame that Linux now threatens them in the long term. Hiding behind patents won't work in the long term; the only thing that will work is good, stable operating systems. It's not as if CPU architectures wear out, so a good operating system will last a very long time. Only an idiot would want 5000 versions of Windows when Linux is available.

purify? (0)

Anonymous Coward | more than 9 years ago | (#10752489)

I mean, why can't a company with the financial resources of Microsoft invest some of it back into solving these kinds of problems? If buffer overflows are determined to be the problem, then Purify and get on with it...
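For the record, the class of bug those tools flag is usually this mundane - a sketch of an off-by-one heap write that a memory checker catches instantly, but that can hide in shipping code for years:

    #include <cstring>

    int main() {
        const char *name = "buffer overflow";
        // Off by one: strlen() doesn't count the terminating NUL,
        // so strcpy() writes one byte past the allocation.
        char *copy = new char[strlen(name)];
        strcpy(copy, name);
        delete[] copy;
        return 0;
    }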

IIS vs. Apache? (3, Insightful)

whoever57 (658626) | more than 9 years ago | (#10752496)

Once again, another defender of Microsoft's software fails to explain why IIS, with its smaller market share, has had far more, and far more severe, vulnerabilities than Apache.

I think what all MS apologists ignore is the security in depth that exists in *NIX systems. They ignore the fact that a vulnerability in Apache may not result in a root compromise, because it is running as an unprivileged user.
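The pattern is simple enough that there's little excuse for skipping it. A rough sketch of privilege dropping on a POSIX system (error handling trimmed; a real daemon does more):

    #include <sys/types.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <grp.h>

    // Do the root-only work (e.g. binding port 80) first, then shed
    // privileges so a later compromise doesn't yield root.
    bool drop_privileges(const char *user) {
        struct passwd *pw = getpwnam(user);
        if (!pw) return false;
        if (initgroups(user, pw->pw_gid) != 0) return false; // drop extra groups
        if (setgid(pw->pw_gid) != 0) return false;           // group before user
        if (setuid(pw->pw_uid) != 0) return false;           // irreversible
        return setuid(0) == -1;   // paranoia: confirm root is really gone
    }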

unprivileged user (0)

Anonymous Coward | more than 9 years ago | (#10752605)

Then run your IIS as an unprivileged user. Windows allows you that flexibility.

odd ideas about programming (2, Insightful)

belmolis (702863) | more than 9 years ago | (#10752503)

Maybe I'm just ignorant and ill-read, but I've never even heard of Writing Solid Code, which according to the article is a classic. I somehow missed it while reading The Art of Computer Programming, The Dragon Book, The Structure and Interpretation of Computer Programs, Software Tools, and the like.

I'm also amazed at the idea that competent programmers in a decently run company can't avoid writing software full of bugs because C and C++ lead to buffer overflow errors. They're easy enough to avoid. I've never had one in anything I've written, and it's not as if I've never had a bug.

Re:odd ideas about programming (0)

Anonymous Coward | more than 9 years ago | (#10752547)

Actually, none of the books which you claim to have read approach the problem of software security.

TAOCP is known for being a thorough treatment of algorithms. It does not contain anything related to software security (as in preventing programming errors)

The Dragon book is about compilers. While you may glean quite a bit of information about how to parse a syntax tree, nothing in there is going to teach you how to prevent yourself from walking off the end of an array.

And so on and so forth. All those books rely on you having the ability to program well in the first place. A badly written compiler doesn't get brownie points because its author read Sethi's postdoctoral thesis on accelerated BNF parsing theory.

WSC is for all the college graduates who think they've figured everything out when in reality their code has never had to face the scrutiny of a QA team. It's not the best book about software construction, but it does hit on some very important topics about coding defensively and clearly.

Re:odd ideas about programming (1)

belmolis (702863) | more than 9 years ago | (#10752582)

The books I listed are examples of real classics. I never suggested that they dealt with secure programming. Your insinuation that I haven't read them is baseless but fitting for an Anonymous Coward.

Re:odd ideas about programming (0)

Anonymous Coward | more than 9 years ago | (#10752597)

I insinuated no such thing.

Classics, yes, indeed they are. However, does having read the classics make you somehow better than someone who read all of those in addition to other offerings?

Writing Solid Code is like the Star Wars of computing literature. It has enough meat to be interesting and keep you coming back for more, but it ain't Gone with the Wind.

Re:odd ideas about programming (1)

belmolis (702863) | more than 9 years ago | (#10752636)

I said nothing about being better than anybody, nor did I suggest that I had read only the classics. I merely compared real classics with what the article claimed to be a classic. Writing Solid Code may well be a perfectly useful book - I have no opinion of it, since I haven't read it. What I wondered was whether it is really a classic, since, unlike many real classics, I had never heard of it. Thus far the responses I've seen make me think I was right - it may be a good book, but calling it a classic is hyperbole - though I was prepared to learn that I was wrong and had missed a true classic.

When someone mentions things that he has read and you respond with a reference to the books he "claims" to have read, you're casting doubt on whether he has read them. That's the insinuation to which I referred.

Re:odd ideas about programming (0)

Anonymous Coward | more than 9 years ago | (#10752664)

WSC may be considered a classic in the sense that McConnell's Code Complete, Eckel's "Thinking in XYZ" series, and Brooks' Mythical Man Month are. Well worth the time to read them.

None of those are exceptionally good, but they come up in technical conversation (as in this article) enough that to not know of their existence is a little worrisome at first glance. It's not necessary to agree with everything in them, but to have an opinion about them is usually indicative of having broad horizons as a software engineer/manager/etc.

Did you not claim to have read those books? Did I say anything other than that you had claimed so? Perhaps you are reading too much into my posts.

Re:odd ideas about programming (1)

nate nice (672391) | more than 9 years ago | (#10752660)

Ahh, The Dragon Book. I've read that myself, and it's a fine book on compilers, but man is it hard to read. The topics and theory are right on and easy to digest, but talk about a dry book. I caught myself reading while thinking of other things so many times.

I recommend it to anyone interested in computer programming, and especially to anyone who wants to write any type of compiler or compiler-like system. But beware: this book is D-R-Y.
I would also recommend "The Pragmatic Programmer" as well as "Mastering Regular Expressions".

ActiveX (1)

LittleLebowskiUrbanA (619114) | more than 9 years ago | (#10752508)

I really don't think C/C++ are to blame for ActiveX vulnerabilities.

Re:ActiveX (3, Insightful)

omicronish (750174) | more than 9 years ago | (#10752542)

I really don't think C/C++ are to blame for ActiveX vulnerabilities.

I completely agree. The problem with ActiveX and some other Microsoft ideas is that they're fundamentally flawed with regards to security. You simply don't allow arbitrary code to download and execute. ActiveX shouldn't exist at all, and you're right, the problem is deeper than the language chosen.

Re:ActiveX (1, Insightful)

Anonymous Coward | more than 9 years ago | (#10752609)

I remember, ten(?) years ago when the first ActiveX stuff was introduced, thinking and hearing others mention how it was "fundamentally flawed with regards to security."

It was generally agreed upon that "You simply don't allow arbitrary code to download and execute." And that "ActiveX shouldn't exist at all."

That it took the Retards From Redmond a decade to figure out what even the most junior engineer should know about computer security is a damning indication that the problems at MS are "deeper than the language chosen."

C++ to blame (5, Funny)

delta_avi_delta (813412) | more than 9 years ago | (#10752515)

Obviously it's all the fault of C++... because no other vendor but Microsoft uses this obscure and arcane language...

So which of these will it fix? (2)

interiot (50685) | more than 9 years ago | (#10752518)

So which of these things will an all-managed-.NET-code environment fix?
  • Companies who insist on putting maximally-powerful scripting languages in every possible application and document format they can get their hands on
  • Companies who are only now implementing the concept of a root account
  • Companies who choose to develop ActiveX web objects over Java applets, because money is better than security
  • An environment where users download and install spyware themselves

I would agree with TFA if not for one thing.... (5, Insightful)

Vladan (829136) | more than 9 years ago | (#10752532)

Methodology matters.

I would agree with TFA if the author were comparing Internet Explorer 4 with, let's say, Netscape 6 or Opera 7. If he were, then I would wholeheartedly agree that IE is a victim of its own popularity and that software monoculture is an "evolutionary" reality mirrored in biological systems.

But...

There is a difference between how IE code gets written and how Mozilla code gets written. I'm not going to make any asinine qualitative comparisons between the skills of Mozilla contributors and MS staff (I respect both), but let's face it....

YOU know the difference between writing a commercial product with an unrealistic deadline, a four-page list of new features (most of which are crap), and non-technical managers who like Gantt charts and daily productivity reports, versus writing a project for your own self-satisfaction.

Mozilla code is written incrementally, with the goal of quality in mind, under public scrutiny (no peer review beats public scrutiny) and many of the contributors are doing it because they want to do it and want to do a good job. It's their pet project.

Compare the quality of code you write for work or in college under strict deadlines, and the code you write for fun.

- How many alternative algorithms do you go through with each?
- Do you settle for "good enough" when you are writing code for yourself?
- Do you do your own corner-case QA as well as you could before that check-in to the company CVS, knowing that QA will most likely test it anyway? (As an intern I shared a desk with the QA guys; the catch is that they love to cut corners.)

Not to mention the endemic problems with large corporate projects of any type: corporate pride that prevents people from going back on bad decisions (ActiveX and IE security zones), lack of management support (how many top coders are still actively developing IE? any?), and all kinds of office politics. Many of these are avoided in well-managed open source projects.

Cheers,

AC

He makes good points.. (1)

d_jedi (773213) | more than 9 years ago | (#10752541)

That's not to say all of the security problems in software are the "fault" of C++ (if you're careful, you can use it securely), but the runtime checks that C++ neglects in favour of execution efficiency certainly play a large part. I would expect C/C++ (in its current form) to die off as the dominant language of software development within the next 5-10 years, because the additional execution efficiency will become less and less significant as hardware improves.
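The trade-off is visible in the standard library itself: the unchecked access is the default spelling, and the checked one costs a branch. A small sketch:

    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v(3);

        // v[10] would be undefined behaviour: no bounds check, typically a
        // silent read or write of whatever memory sits past the buffer.

        // at() pays for the runtime check and fails loudly instead.
        try {
            v.at(10) = 1;
        } catch (const std::out_of_range &e) {
            printf("caught: %s\n", e.what());
        }
        return 0;
    }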

Re:He makes good points.. (1, Insightful)

Anonymous Coward | more than 9 years ago | (#10752665)

I would expect C/C++ (in its current form) to die off as the dominant language of software development within the next 5-10 years, because the additional execution efficiency will become less and less significant as hardware improves.

We've been hearing this for years now. Unfortunately, computers will never be "fast enough" that speed no longer matters. Games will always push as many polygons as possible, and when they have so many that you can't tell the difference, they'll up the resolution and start again. Emulators will always aim for higher and higher targets (look at PearPC, for example. Can that take a large performance hit for security?). Cryptography tools and video/audio codecs will also need to push higher and higher bitrates through. Point being: there will always be a need for C/C++, and even assembly. It's just that it's now less likely to be necessary for that text editor you're working on, or that picture viewer.
I do, however, agree that with time (a lot longer than 5-10 years), the majority of applications that do not need cutting-edge speed will be written in other languages. Much like what happened to x86 assembly during the past 10 years or so.

Makes Sense (3, Funny)

Fringex (711655) | more than 9 years ago | (#10752543)

Being the most popular has always come with negativity. Honestly, why would anyone care about writing viruses, worms and other means of computer assault for Linux? It fills an extremely small share of the consumer desktops in use worldwide. It is more fun to bash the Big Redmond Giant.

You don't make something open source if you wanna make money. That is a straight-up fact. Have there been successes? Oh yeah, there have been plenty. But if you wanna make the big bucks, you keep it in-house so no one can profit off your work. However, your company can't make money if you are continuously working on a product and not selling it. So does Microsoft release buggy code? Yeah.

It is a matter of money. Bill Gates didn't start Microsoft because he wanted to touch lives; he made the company to make money. That is the general reason anyone starts a company. Dollar signs.

So you have deadlines. A good example is the rushed development and release of EQ2. Hell, you can even compare it to any EQ expansion: full of bugs, exploits, instability, etc. Why? Money. You don't make money programming until a thing is perfect; you make money by having a product good enough that people will use it. Why else has EQ maintained a stable subscription base over five years? Granted, there have been jumps in either direction, but it has been stable enough to open more servers.

Expansions like Gates of Discord, Luclin, Omens of War and Planes of Power all had more than their fair share of bugs. Money is the underlying issue. The expansions were good enough to release but not solid.

The same can be said for Microsoft. Windows is good enough, and can always be fixed through patches. If they are gonna keep it in-house forever, they will never make money.

Crisy underflow? (1)

Doc Ruby (173196) | more than 9 years ago | (#10752563)

Microsoft's got billions of dollars and thousands of developers, in a market where many thousands more developers are looking for jobs. Why don't they write a tool that searches C/C++ source code for buffer allocations and replaces them with calls to a class or struct with bounds checking? It's not a trivial problem, but if they put their money where their mouth is, their pledge to prioritize security would answer this whining about C/C++ buffer bugs. Unless their agenda is merely to herd everyone into programming C#, at the expense of massive losses while legacy C/C++ code is still in vogue.
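The runtime half of that idea is easy; the rewriting tool is the hard part. A minimal sketch of the kind of drop-in, bounds-checked wrapper being proposed (names are mine):

    #include <cstddef>
    #include <stdexcept>

    // Bounds-checked stand-in for a raw fixed-size array: every access
    // is validated, so an overflow becomes an exception, not corruption.
    template <typename T, std::size_t N>
    class checked_buffer {
    public:
        T &operator[](std::size_t i) {
            if (i >= N) throw std::out_of_range("checked_buffer index");
            return data_[i];
        }
        std::size_t size() const { return N; }
    private:
        T data_[N];
    };

    // checked_buffer<char, 64> buf; buf[70] = 'x';  // throws, doesn't corrupt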

The problem with the "king of the hill" scenario.. (2, Insightful)

mark-t (151149) | more than 9 years ago | (#10752595)

Is that it doesn't really work.

The claim is that windows gets attacked so much because it's the most popular... but consider the following:

Look at the different web servers in the world, and look at what percentage of them run Microsoft's webserver and what percentage of them run another system. [netcraft.com]

Now take a wild guess which web server actually has the greatest number of exploits floating around for it. Anyone who pays any attention at all to their access logs will tell you they see almost insane numbers of IIS exploit attempts each and every day.

But Microsoft doesn't have the market share in the web server market to justify the disproportionate number of attacks it gets, yet it's _CLEARLY_ in the lead for being attacked.

Conclusion: Microsoft's view that they are being "picked on" because they are in the lead is false. They are being picked on because they are a highly accessible target that ships software that is easy to exploit, and Microsoft is simply too stubborn to admit that it has a real problem, instead resorting to blaming it on something resembling "jealousy".

Summarizing, then... (4, Informative)

nigham (792777) | more than 9 years ago | (#10752611)

C/C++ as a language has bugs.
Actually, any program has bugs.
IE and Firefox are both programs written in C/C++.

Therefore,
1. What is wrong with IE is wrong with Firefox
2. The quality of coding is mostly irrelevant to the quality of a program, it being mostly dependent (inversely) on how many people use it.
3. If Firefox gains market share, it will have bugs! It has to! You'll see!!

Listen to little brother crying...

Sure, blame C and C++ (4, Insightful)

Sivar (316343) | more than 9 years ago | (#10752617)

"...and he notes that the problem is largely with C/C++ and mostly because of the buffer overflow problems."

OpenBSD and OpenVMS are written in C. Qmail and djbdns are written in C.
Is it difficult to prevent buffer overflows? If you are reading a string, either use a string class, or read only as many characters as the character array can store. (What a novel idea!) If you are writing a string, among other things, set the last possible character of that string to null, just in case.
These are simplified examples, but it is not impossible by any means, or even all that difficult, to write solid code.
Among other things, the problem is that it takes individual effort to make sure every static-sized buffer isn't abused. As Murphy would tell you, human error is bound to crop up--increasingly so as the complexity of the project increases. I believe there was a post on the formula for this not too long ago.
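The two habits named above, spelled out (a sketch using only the standard C library):

    #include <cstdio>
    #include <cstring>

    int main() {
        char line[64];

        // Reading: fgets() takes the buffer size and never writes past it
        // (unlike gets(), which cannot be used safely at all).
        if (fgets(line, sizeof(line), stdin)) {
            char copy[16];
            // Writing: bound the copy, then force the last possible
            // character to NUL yourself, since strncpy() does not
            // terminate when the source fills the buffer.
            strncpy(copy, line, sizeof(copy) - 1);
            copy[sizeof(copy) - 1] = '\0';
            puts(copy);
        }
        return 0;
    }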

As to the solution, well, that's a tough one. Higher level languages (Java, C#) help reduce these problems (and help reduce performance as well), but are just a band-aid. Perhaps the Manhattan Project [arstechnica.com] (no, not that one [atomicmuseum.com] ) will come up with something better.

Until then, try to avoid products which have proven themselves to be full of holes year after year, week after week. And no, this doesn't just include all Microsoft server software. BIND and Sendmail come to mind.

how many buffer overflow holes in linux? (0)

Anonymous Coward | more than 9 years ago | (#10752622)

This article makes me wonder about the potential security problems with overflows in open source code. Are we more vulnerable than M$? Or should we all switch to OpenBSD?

I'm Not Sure Anyone Knows (3, Interesting)

nate nice (672391) | more than 9 years ago | (#10752630)

I'm not convinced this man, Microsoft or anyone else for that matter knows why they have the problems they do. If they did, I'm sure Microsoft would be very interested in obtaining this information so they could make higher quality software.

My guess, since I do not work at Microsoft or know their culture first-hand, is that they are a bloated, over-managed institution that provides a fertile breeding ground for errors to compound. It's like NASA in some respects: there are just too many layers of accountability, which allows many things to slip through the cracks.

I'm not sure it's fair to blame the programming languages used for the errors. Bad code is often proclaimed a major shortcoming of C++, but in the end it comes down to the design, the programming and the process. Many very large and successful software projects have been constructed in C/C++, so I find it a lame excuse to blame the language.

One big problem that many agree on is that, in Microsoft's case, there is strong market pressure to release things before they are ready. Getting your product out first means customers will be less likely to adopt a competitor's product, even a superior one released later. Everyone knows the price of bug fixes goes up after the software is released, but I'm sure the mathematicians at companies like Microsoft have calculated the bug-cost-to-profit ratio of releasing the software in a particular state, and the most profitable option is taken, regardless of acceptance.

I would be interested to know what Microsoft's errors-to-lines-of-code ratio is. Larger than typical? Smaller? I mean, Microsoft apparently has really good talent working for them. You would imagine they would produce really good software. What gives?

MS on standards (0)

Anonymous Coward | more than 9 years ago | (#10752644)

Is he seriously trying to say non-compliant software would be more secure???

"Standards are good, and you'll pry C and C++ out of our programmers' cold, dead hands."

All a matter of effort (2, Insightful)

RocketRainbow (750071) | more than 9 years ago | (#10752663)

You know, maybe there's a point here. Perhaps if everyone switched to some other language, bugs and exploits would trend down. But there's more to it than this, and this isn't the biggest issue.

If you want to remove the errors from your code you have to dedicate the time to do so. Microsoft have shown that they are not willing to do so - they optimise for speed, integration and good looks rather than security and effectiveness.

And now they're falling apart on their traditional specialty too, because their software is like Swiss cheese. You can use it to make a sandwich, but you can't build on it.

As people have pointed out, Microsoft is not the monolith most laymen assume. Oh, sure, you and I see a Microsoft logo or picture when we turn on our computer, but who knew that most of the Internet was running on Linux, BSD and a handful of related OSes? Who knew that most of the world's fileservers were Novell? These are the real targets in the networked world, yet it's IIS that gets it. It's Windows 2000 Server that gets it.

Duntemann is right - Microsoft don't hire total retards to write their programs. Given the opportunity, they have shown that they can do what they're supposed to do. But they aren't supposed to do security, so they don't.

Microsoft may be changing their minds now. They are certainly marketing in that direction, but who knows? They're one of the most successful marketing companies in the world, but their lies are wearing thin (remember all those blue screen TV ads for Windows XP?)

It's no accident that they're using the languages they are at Microsoft, and it's no accident their work is inefficient and full of holes. They neglected these areas on purpose so that they could focus on "it runs fast and it comes with the computer."

Sloppy code is sloppy code (1)

Angst Badger (8636) | more than 9 years ago | (#10752666)

Most buffer overflows are the result of simple laziness. For almost all cases, it is possible to write a generalized set of functions or, in C++, a class to manage dynamic buffers. There are, in fact, umpteen million implementations of resizeable buffers. This is not a flaw in C or C++ any more than a gun is flawed because it can be used to shoot yourself in the foot. Being careful and alert is a prerequisite for using most powerful tools.
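One of those umpteen million implementations, sketched (in practice you'd just use std::vector or std::string, which is rather the point):

    #include <cstddef>
    #include <cstring>
    #include <vector>

    // A generalized dynamic buffer: append() can never overflow because
    // the storage grows to fit before the copy happens.
    class dyn_buffer {
    public:
        void append(const void *src, std::size_t len) {
            if (len == 0) return;
            std::size_t old = bytes_.size();
            bytes_.resize(old + len);
            memcpy(bytes_.data() + old, src, len);
        }
        const unsigned char *data() const { return bytes_.data(); }
        std::size_t size() const { return bytes_.size(); }
    private:
        std::vector<unsigned char> bytes_;
    };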

Of course, it grieves me to be in the position of defending C++, since the whole type-safety nonsense was largely a response to the lamentable fact that some programmers can't be bothered to read a function prototype or treat void pointers with respect.

But then, C++ is a perfect example of the futility of attempting to design tolerance for carelessness into a language. C++ did indeed fix many of the problems of C, but as Microsoft and many others demonstrate, sloppy, careless coders are perfectly capable of writing sloppy, careless code in any language. Buffer overflows may be impossible in Perl, but have you noticed any shortage of buggy, poorly conceived code in Perl? Java? Python? For every correct implementation, there are countless trillions of incorrect ones. How are you going to plug that hole with a yacc grammar?

The currently dominant languages have a lot to recommend them. And for most of them, there are excellent tools for checking the correctness of code, enforcing good programming practices, generating accurate and up-to-date documentation, and so on. But if you don't use them, or you don't put serious and careful thought into design, implementation, and maintenance, you're going to produce buggy software in whatever language you're using.

Lack of competition and poor design and implementation are Microsoft's problems. Buffer overflows are just the characteristic symptom of careless coding in C and C++. If Microsoft switched to Java or Eiffel or ML tomorrow, the buffer overflows might vanish, but something else would take their place.

His reasoning looks very flawed to me (5, Insightful)

jesterzog (189797) | more than 9 years ago | (#10752671)

His argument, spelled out, seems to be:

  • MSIE and Firefox are both written in C/C++, therefore:
  • MSIE and Firefox both have lots of buffer overflow related bugs.
  • MSIE suffers more because it's more popular and more homogeneous, allowing worms to spread more easily.
  • People can flock to Firefox, but if this happens then Firefox will become more popular and more homogeneous. Consequently,
  • There's no point flocking to Firefox. Give in to software monoculture, and wait for an answer that he already admits probably hasn't been invented yet.

Personally I find this argument quite baseless, and I'll believe it when I see it. Even if he is correct and Firefox might have as many bugs (because hey, it's written in C/C++), he doesn't seem to have provided any logical reasoning that would change the minds of people who are about to move.

Even Jeff Duntemann admits that MSIE supposedly has at least as many bugs as Firefox. Given this reasoning, the choice is between deploying MSIE (which is proven over and over again to be unsafe and full of security holes) and Firefox (for which nothing of the sort is proven).

It seems very shallow: he's pitting something proven against something unproven, and essentially claiming that we should assume they're both identically bad. I'll take my chances with Firefox, thank you very much. If everyone flocks to Firefox and it suddenly becomes a big security risk, I'll deal with it at the time.
