
176 comments

Decompiled / Disassembled (1)

JohnHegarty (453016) | more than 12 years ago | (#2945835)

Most code can be decompiled or disassembled, if people have enough time and money on their hands.

Linux is especially insecure! (-1)

Evil Inside (552726) | more than 12 years ago | (#2945836)

A burglar can easily steal something if something's open and he can get inside; it's the same with Linux. The penguin is ill, very ill! So watch out, and say no to hackers and penguins!

FP! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2945846)

i like unclosed EM tags

Bridges and software (1, Insightful)

October_30th (531777) | more than 12 years ago | (#2945850)

It's damn good that bridges and high rise buildings are not built the same way as software...

Re:Bridges and software (3, Insightful)

roybadami (515249) | more than 12 years ago | (#2946070)

Hey, they used to be. Look at all those old cathedrals that were built long before anyone knew how to analyze structures.

They just learnt from what stayed up...

Re:Bridges and software (0)

Anonymous Coward | more than 12 years ago | (#2946383)

Hmm... but those old cathedrals and whatnot were massively "over-built", with walls stronger and thicker than they needed to be, buttressing "just in case", and so on. Software is NOT, in general, comparably "over-built": you don't (much) get people using a strongly-typed language and still programmatically verifying the type of every argument at run time, for example...

Re:Bridges and software (2, Insightful)

steve_l (109732) | more than 12 years ago | (#2947160)

If you consider that all the cathedrals you see today are the ones that didn't fall down, you will realise that the gap between software and cathedrals is smaller than you think.

Also, buttressing was a mid-project refactoring on a few of them, as their sibling projects started to fall down once they got above a certain size.

Finally, cathedrals were projects lasting a few hundred years with fairly stable requirements ("watertight building to worship a deity"); I dream of s/w projects that stable.

Re:Bridges and software (0)

Anonymous Coward | more than 12 years ago | (#2946855)

If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

Re:Bridges and software (2)

WNight (23683) | more than 12 years ago | (#2947287)

If people built houses the way programmers wrote programs, the first woodpecker would destroy all the housing, but they would have been created for free by copying the first house, and would be restored from backups almost instantly.

You Mean Windows Versus Embedded Software! (0)

Anonymous Coward | more than 12 years ago | (#2947594)

It's damn good that high quality embedded software is not built the same way as Windows.

Otherwise cars and aircraft would crash and people taking radiation therapy would die!

Root of the problem (5, Insightful)

dezwart (113598) | more than 12 years ago | (#2945865)

The root of the problem lies in the fact that the program/product has to be released by a deadline, causing common sense and good coding practice to be thrown out the window, because you'll lose your job if you don't do it.

Deadlines are normally imposed by companies trying to earn a living through the development of software.

Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security than code developed by companies attempting to make money from it.

Then you have the flip side of that, where the software may never reach a stable state since it is continually in flux. But how you view this state is totally dependent on your point of view.

At least the code in flux has a higher chance of adapting to its environment and thus surviving over the slower-to-adapt Closed Source code.

Re:Root of the problem (1)

ackthpt (218170) | more than 12 years ago | (#2945902)

In summary, this article seems to overlook that secure practices don't scale, particularly at M$. This error makes the article itself insecure; be careful how you read it, it may infect your mind with lack of insight.

Re:Root of the problem (2)

dboyles (65512) | more than 12 years ago | (#2946270)

Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security than code developed by companies attempting to make money from it.

It would be nice to think that this makes OSS more secure, but to tell the truth I don't buy it. Or maybe I only buy it to a very limited degree. In so many aspects it looks like Linux is trying to play catch up to other OSes. A "rival" OS comes out with a new feature and the kernel folks (naturally) want to make sure Linux has the same functionality.

I think this is even more widespread with the move to the desktop. 3D performance, USB, sound, etc. have all taken pretty high priority in the kernel as far as I can tell. That's not bad, of course (I use Linux on the desktop), but that development for desktop users has to take away from time that could be spent making the kernel perform its primary functions more efficiently and securely.

I'm not a kernel hacker. I can only write the simplest of C programs. Am I way off base? Is the kernel as efficient/secure as it can reasonably be, and should we just concentrate on improving application software?

Re:Root of the problem (0)

Anonymous Coward | more than 12 years ago | (#2946355)

Well, linux has support for some things before anything else - bluetooth and IPv6 spring to mind... Linux has support for both long before windows, and a short time before the BSDs...

Re:Root of the problem (1)

kz45 (175825) | more than 12 years ago | (#2947067)

Well, linux has support for some things before anything else - bluetooth and IPv6 spring to mind... Linux has support for both long before windows, and a short time before the BSDs...

Do you have any better examples than this?
Bluetooth is used in about .01% of the cities in the U.S., and IPv6 won't be fully supported by ANYONE for about 5 (or more) years.

That's like saying: Windows has supported hyperspace transporting LONG before linux or BSD.

Re:Root of the problem (1)

kputnam (488584) | more than 12 years ago | (#2947359)

While what you say is usually true, it does not relate to security, which was your original point. Free Software usually lags behind proprietary software, although that is not always true. However, first you say that you don't believe it is more secure, and then you go on to explain that it lags behind in features.

I personally agree with the original post because I have experience with being put under deadlines, and writing code in my spare time. When I have a deadline, I have to focus on just getting the job done instead of getting it done elegantly. This leads to problems when the requirements change because my code is a mess and now I have to change things all over the place.

When I am working on my own free time, I write the code in the most generic, abstract, and elegant way for even simple things. For example, I have a small program that downloads the slashdot.xml file, parses it, and prints out headlines. I wanted to have this cached and expire every hour... but I wrote the caching mechanism so generically that I now use it with several pages, not just the slashdot headlines.

And with regards to security: when I have a deadline, I try to get the work done and treat security as a secondary concern. If I take the time to build the security into the application instead of adding it afterward, I end up with a very secure, but incomplete and useless, program. When I develop on my own time, I have the time to think about security before I actually code anything.

This doesn't provide any factual evidence to Free Software being more secure than proprietary software, but it does give some ideas on why it might be, if it is.

Re:Root of the problem (2, Insightful)

jjohnson (62583) | more than 12 years ago | (#2946396)

Deadlines aren't the problem: unreasonable, inflexible deadlines are the problem. All the vices associated with coding under deadline pressure come from bad time management, not the simple fact that some thing needs doing by some specific time.

Joel Spolsky [joelonsoftware.com] goes on at great length about proper scheduling of software development, and seems to get it right.

Re:Root of the problem (2)

tshak (173364) | more than 12 years ago | (#2946423)

Deadlines are normally imposed by companies trying to earn a living through the development of software.

Then it would be a good idea to think that the Open Source community, not faced with deadlines, would be able to code the programs in a more ideal situation, leading to code that has a higher degree of elegance and security...


Deadlines affect both Open and Closed Source projects. Everything is market driven. Open Source software is almost always being written for a market. Just look at how the Linux GUI has evolved. When we saw the first light of KDE or GNOME, they were extremely unstable. But they were released because there was a deadline. The deadline was, "We need a GUI now to compete with Windows" (yes, I know what Linus thinks about this).

At least the code in flux has a higher chance of adapting to its environment and thus surviving over the slower-to-adapt Closed Source code.


First, how is "code in flux" secure? Second, how is Closed Source "slower to adapt to its environment"? Here is one of many examples: IE4 (in late '97) almost fully implemented the W3C DOM recommendations, while Mozilla (5 years later) is just now finishing them up. However, Opera, which by 2000 had good DOM support, has been able to compete at a great pace.

wrong wrong wrong (-1)

ArchieBunker (132337) | more than 12 years ago | (#2947099)

Does any of this free software have a deadline? Nope didn't think so and how many bugs and security holes pop up? 0wn3d b1tc|-|.

or maybe the problem is..... (1)

_avs_007 (459738) | more than 12 years ago | (#2947140)

Everyone who designs a bridge or skyscraper has some sort of state-regulated certification, and gobs of experience/education to go with it.

The software industry is not like that. You have everyone and their brother coding in VB thinking they are the god of coding, despite not having any formal CS background teaching theory, structure, etc.

So you end up with gobs of "terribly" designed and structured code. A long time ago, I was working at a financial software company. I was one of the few CS people they actually had, and I was horrified at how "ugly" their code was. I'm so glad I don't work there anymore. They had some of the most "barely" functional code I'd ever seen. That company thought that oodles of string manipulations were perfectly fine, and did not comprehend when and when not to spawn threads, or even how to take advantage of them.

Imagine what our bridges/buildings would look like if a bulk of the designers had no structural engineering background, and were hired because they had "experience building tree houses" at home....

bridges (1)

_avs_007 (459738) | more than 12 years ago | (#2947148)

If bridge/building designers were hired the same way a lot of software guys are, we would've had a BUNCH more Galloping Gerties...

Throw-away code (2)

entrox (266621) | more than 12 years ago | (#2945873)

The article is surely right in its comment about the throw-away mentality with assignments. But there are exceptions: at my University [uni-stuttgart.de] there is a so-called "Software Engineering" degree, where the emphasis is on good code with good documentation and many test-cases. Correct code only amounts to 50% of the final mark; the other half comes from documentation, comments, test-cases and how well you followed the style-guide. I quite like it, because the assumption is that basically all software in today's world simply sucks.

Re:Throw-away code (2)

MarkusQ (450076) | more than 12 years ago | (#2945951)

...where the emphasis is on good code with good documentation and many test-cases. Correct code only amounts to 50% of the final mark; the other half comes from documentation, comments, testcases and how well you followed the style-guide.

If I were to write a trojan-patch, I would want to have lots of reassuring documentation, good subroutine/variable names that sounded like they were doing what they should be doing, and follow the style guidelines to make sure it looked like I was a good guy.

If I were checking a patch in uber-paranoid mode, I'd strip out all the comments, rename everything to Fxxx and Ixxx etc. and run it through a formatter. Then I'd read it to see what the code actually does rather than seeing what I'm supposed to think it does.

And yes, I actually have done this on occasion. It's kind of fun, once you get into it, like working a crossword puzzle or solving the cube for the first time.

-- MarkusQ
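MarkusQ's de-biasing pass is easy enough to mechanize. Below is a rough Python sketch of the idea (my own illustration, not his actual procedure): strip the comments out of C-like source and rename every non-keyword identifier to a meaningless x0, x1, ... so that reassuring names can't influence the review. The keyword list and regexes are simplistic assumptions; real use would need to handle string literals, preprocessor lines, and so on.

```python
import re

# Deliberately incomplete keyword list -- an assumption for this sketch.
C_KEYWORDS = {"if", "else", "for", "while", "return", "int", "char",
              "void", "long", "struct", "static", "const", "sizeof"}

def strip_comments(src):
    """Drop /* ... */ and // comments so only the code itself remains."""
    src = re.sub(r"/\*.*?\*/", "", src, flags=re.S)
    return re.sub(r"//[^\n]*", "", src)

def anonymize_identifiers(src):
    """Rename every non-keyword identifier to a meaningless x0, x1, ...
    so reassuring names can't bias the reviewer."""
    mapping = {}
    def rename(match):
        name = match.group(0)
        if name in C_KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = "x%d" % len(mapping)
        return mapping[name]
    return re.sub(r"\b[A-Za-z_][A-Za-z0-9_]*\b", rename, src)

# Hypothetical patch to review:
patch = """
/* Safely update the running total. */
int update_total(int delta) {
    total += delta;  // trust me
    return total;
}
"""
print(anonymize_identifiers(strip_comments(patch)))
```

After this pass you read what the code does, not what its author says it does.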

Re:Throw-away code (3, Insightful)

entrox (266621) | more than 12 years ago | (#2945980)

Haha, I never saw this from that angle. Yes, I think it's a valid (albeit a little funny) point, but you assume that you've got a rogue coder on your team.
Let's face it: security holes are bugs, and good tests and documentation help spot them earlier. Obfuscating your code intentionally won't make your life easier :)

Re:Throw-away code (2)

MarkusQ (450076) | more than 12 years ago | (#2946236)

Haha, I never saw this from that angle. Yes, I think it's a valid (albeit a little funny) point, but you assume that you've got a rogue coder on your team.

Actually, the case in question involved code from a programmer in another company (we were in a joint venture) and the environment had gotten so convoluted that I was pointedly assuming as little as possible. I didn't assume he was rogue, just that I wanted to know what was really going on, not just what someone else thought was going on. I think my starting postulate was something like "I trust almost all of mathematics and a fair amount of physics. Everything else gets checked."

Let's face it: security holes are bugs, and good tests and documentation help spot them earlier. Obfuscating your code intentionally won't make your life easier :)

Agreed. But I also agree with the button that says "Don't get suckered in by the comments; the bugs are in the code."

-- MarkusQ

Re:Throw-away code (0)

norwoodites (226775) | more than 12 years ago | (#2945966)

You should follow your own style, not that of others, while at university, but keep it consistent through an assignment, and maybe make changes to the style as you learn what is best for you.

Grading by looking at someone's style is wrong, because everyone (including teachers) has a slightly different one.

Sloppy design, sloppy coding, sloppy enforcement (4, Insightful)

ackthpt (218170) | more than 12 years ago | (#2945874)

The biggest problems I've seen in making code secure are the gaps between design, coding and code review.

Often the designer doesn't consider the bigger picture, how this piece fits in. It can be as simple as not requiring verification of input.

Coders, if rushed, inexperienced, or simply bad (like the rafts of people who suddenly became "programmers" in 1998-2000 when demand was extremely high, even though they only had a couple of classes and were really English, anthropology, history, or other majors, hired just to fill positions), will fail to see the lapses left by designers and build porous code.

Then there's lack of review, or review so anal that its focus is spelling errors in prompts or whether there are enough documentation lines, but which fails to identify where secure practices are not followed. Well, don't get me started. ;)

Last, Q/A: everyone knows Microsoft's Q/A is "ship it out and let customer support pick up the bug reports," then sometimes charge the people reporting for the fix. Q/A is often the first department cut in layoffs, because management underestimates its importance. Too bad, like the Enron execs, they won't take a cut themselves to save the product and the company. Good Q/A needs to ask the unthought-of questions: what happens if I do this instead of what's expected?

Perhaps somewhere in the evolution of IT that has lowered programmers from the status of mystical wizards to grunt code jockeys, management will recognize that code, even new products, isn't just some big patch, and give it the attention and personnel it really deserves.

Re:Sloppy design, sloppy coding, sloppy enforcemen (1)

dezwart (113598) | more than 12 years ago | (#2945888)

The day that Management stops counting beans will be the day that the US stops being a Capitalist economy.

Re:Sloppy design, sloppy coding, sloppy enforcemen (2)

Reziac (43301) | more than 12 years ago | (#2946586)

I do some beta testing for various sorts of projects. I have a reputation as "the tester who can break anything", for good reason: I often do the unexpected (i.e. whatever came into my head at the moment) and thus expose hitherto-unknown bugs.
More often than not, the response from the coder is "You aren't supposed to do that!" rather than "Oops, I need to fix that."

Yeah, that's real useful if the idea is to produce a stable product that users can't break just by doing random things.

Re:Sloppy design, sloppy coding, sloppy enforcemen (2)

Tassach (137772) | more than 12 years ago | (#2947106)

the response from the coder is "You aren't supposed to do that!" not "Ooops, I need to fix that."
I got the same response in my days as a tester. The proper response is, of course, "If the user isn't supposed to do that, then you shouldn't let him do it." I've found that at least 50% (and often more like 80%) of the effort involved in writing code is in error detection/prevention and sanity checking.
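"If the user isn't supposed to do that, don't let him do it" amounts to validating every argument at the boundary and failing loudly. A minimal Python sketch; the withdraw function and its rules are hypothetical, purely for illustration:

```python
def withdraw(balance, amount):
    """Return the new balance, sanity-checking every argument first.
    Anything the caller 'isn't supposed to do' is rejected loudly
    instead of silently corrupting state."""
    if not isinstance(amount, int):
        raise TypeError("amount must be an integer number of cents")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

print(withdraw(1000, 250))  # -> 750
```

The checks are most of the code, which is exactly the 50-80% ratio the poster describes.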

I wish more people thought like you do (1)

_avs_007 (459738) | more than 12 years ago | (#2947206)

I'm a software engineer working in a research lab position. Aside from coding, I spend a lot of time discussing various projects with co-workers. I'm surprised that almost every time I nitpick their ideas, their response is "the user shouldn't be doing that" or "that should never happen"...

My philosophy is to assume that the end-user is an idiot and doesn't know what they're doing. My philosophy is also to assume that people who use my objects are idiots, so I design my objects as such. I.e., "When you initialized that state variable, you initialized it as an int32, so when you try to change its state, I'm going to make damn sure you are giving me an int32." I'm not going to assume the developer is smart enough to figure that out. Because we all know that to assume makes an "ass" of "u" and "me".
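The int32 discipline described above can be sketched in Python, where nothing enforces argument types for you at compile time. This StateVar class is a hypothetical illustration: it remembers the type it was initialized with and refuses updates of any other type.

```python
class StateVar:
    """A state variable that remembers the type it was created with and
    refuses to be updated with anything else, so a caller can't silently
    swap in a value of the wrong type."""

    def __init__(self, value):
        self._type = type(value)
        self._value = value

    def set(self, value):
        if not isinstance(value, self._type):
            raise TypeError("expected %s, got %s"
                            % (self._type.__name__, type(value).__name__))
        self._value = value

    def get(self):
        return self._value

counter = StateVar(0)   # "initialized as an int"
counter.set(42)         # fine
# counter.set("42")     # would raise TypeError
```

The point is not the class itself but the attitude: verify what you're given rather than assuming the other developer got it right.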

The solution is clear (1)

SumDeusExMachina (318037) | more than 12 years ago | (#2945878)

Clearly, we have stumbled upon the fact that, like self-governance, humans are by nature too incompetent to code for themselves. Therefore, I propose that we create an AI for the sole purpose of writing all our code, so that we no longer have to suffer through the gaping security holes introduced every time a human writes code.

Just to be on the safe side, we should probably outlaw compilers too. There's no telling what a malicious hacker might be able to do when he has hacking tools like gcc to write deliberately insecure code and run it on machines.

Re:The solution is clear (0)

Anonymous Coward | more than 12 years ago | (#2946595)

Therefore, I propose that we create an AI for the sole purpose of writing all our code so that we no longer have to suffer through gaping security holes introduced every time a human writes code.

Sounds good in theory. But how do you propose we create this AI without humans coding the AI? The AI could be programmed to introduce mistakes. Just take a look at those compiler back doors that are possible. It could work almost like that.

Redirecting to google.com (2, Informative)

togtog (104205) | more than 12 years ago | (#2945883)

SecurityFocus is redirecting to Google, which has a cache of the page; please use that instead.

http://www.google.com/search?q=cache:M13ch6-wvbwC:www.securityfocus.com/infocus/1541+&hl=en

Coincidence ? (2, Interesting)

anpe (217106) | more than 12 years ago | (#2945884)

Strange, but this article, albeit raising some interesting issues, seems to focus on excusing Microsoft's flaws in development.

The article states that security holes are inherent to development. That's OK, but what about their frequency? Have a look at, let's say, Apache vs. IIS.

The question isn't whether the code has security flaws. It certainly does. The point is the methods you use to avoid them. I think Open Source has a way of resolving security issues: it has an army of benevolent geeks at its disposal, competent people who know how to write a patch or at least submit a bug report.

On the other hand, MS only proposed a bug report interface with the recent XP. Sorry, but a Bill Gates company-wide memo to write better code is a PR operation, not a method.

Whoever called this "flamebait"... (0)

Anonymous Coward | more than 12 years ago | (#2946557)

... is gonna get it at meta-mod time.

Wrong conclusion (5, Insightful)

Ben Jackson (30284) | more than 12 years ago | (#2945892)

The author's conclusion is that insecure software is a result of a lack of focus on security. The real problem is programmer ignorance about the fundamental issues and the technical details of writing secure software.

To have any hope of writing secure software, a programmer first has to be aware that a problem exists. Aware of issues like safely handling user input and securely transporting data (and when it's appropriate and when it's pointless).

Once a programmer is aware of the existence of these issues, he can start learning about all of the technical problems of writing secure code. In a UNIX environment, it's things like not exposing unnecessary parts of the filesystem to external users, not blindly writing to files in /tmp, and not trusting your PATH or your IFS in privileged scripts.

Forget focus, we need education.
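The UNIX examples above are concrete enough to sketch. The /tmp pitfall, for instance, comes from opening a predictable path that an attacker can pre-create or symlink first. A small Python illustration of the safer mkstemp pattern follows; the function name and file prefix are my own inventions for the example:

```python
import os
import tempfile

# The classic /tmp bug: a predictable name an attacker can pre-create
# or point at a victim file with a symlink before we open it:
#
#     f = open("/tmp/myapp.scratch", "w")   # DON'T do this

def write_scratch(data):
    """Safer: let the OS pick an unpredictable name and create/open the
    file atomically, so there is no race to exploit."""
    fd, path = tempfile.mkstemp(prefix="myapp-")
    with os.fdopen(fd, "w") as f:
        f.write(data)
    return path

path = write_scratch("scratch data\n")
os.unlink(path)  # clean up when done
```

The same idea exists in C as mkstemp(3); the education the poster calls for is largely knowing that these functions exist and why.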

Re:Wrong conclusion (2)

sketerpot (454020) | more than 12 years ago | (#2946161)

Let me tell a story about a Python script I wrote once that made me glad I wasn't connected to the internet.

It all began when I wanted to create a page that would copy any other page. It would retrieve the HTML from a URL and display it. Simple? Too simple. It worked very well for a while. I would tell it to get other files off localhost, and everything was happy. Then I decided to try to get something off the filesystem: file://something. It worked. I tried to get /etc/passwd. It worked.

Then I put in something to make sure that the URLs were HTTP URLs.

The moral: be paranoid, and never trust any data from your users. I hear Perl has a feature called "taint" to help keep track of insecure data; that would've been useful....
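The fix the poster describes, accepting only HTTP URLs, can be sketched like this in modern Python (urllib.parse postdates the story, so treat this as an illustration of the idea rather than the original script):

```python
from urllib.parse import urlparse

# Whitelist the schemes we actually intend to proxy; everything else,
# including file://, is rejected rather than handed to the fetcher.
ALLOWED_SCHEMES = {"http", "https"}

def fetchable(url):
    """Return True only for URLs this page is willing to retrieve."""
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES

assert fetchable("http://localhost/slashdot.xml")
assert not fetchable("file:///etc/passwd")   # the hole from the story
```

A whitelist beats a blacklist here: rejecting everything you didn't anticipate is exactly the paranoia the moral calls for.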

exactly! (1)

_avs_007 (459738) | more than 12 years ago | (#2947221)

I was talking to a coworker about this a few days ago. You can tell all your engineers to drop what they are doing for a month and focus on security, but that isn't going to do jack if these same engineers don't comprehend security. One man's idea of a secured object could turn out to be the most open object. You need to understand how security works, and the differences between: Authentication, Authorization, Confidentiality, and Integrity. Without this knowledge it will be impossible to design a secure object.

good code? (1)

Terry Dignon (548614) | more than 12 years ago | (#2945899)

I believe the key to having good code is to have multiple people write and critique it. (Hint: open source.)

He nailed it. (5, Insightful)

Anonymous Coward | more than 12 years ago | (#2945905)

Suddenly design and coding style are thrown out the window in favor of down and dirty "do what works, we'll fix it later" coding. Initially some of the more idealistic (and typically youthful) coders feel that this sort of programming is wrong; this feeling usually passes quickly under the tutelage of the more experienced team members.

When code has to be done before a certain deadline (usually yesterday), this kind of shit always happens. I happen to be one of those idealistic (youthful) coders, and cringe thinking about what sometimes goes into released software. Is it any wonder why there are so many bugs in software? There is never even time to design, let alone test.

Why does this happen? No one really has perfected the art of accurately estimating projects. So you end up taking a quick look at the project's complexity, compare it to something you did before, and tell them how long that previous project took. Then when you give sales/management the time estimate (which is usually bogus anyway), they just ignore it and continue on with their own schedule.

Then you have sales/marketing types who consider software to be "magical." They don't have a clue how it's designed, written, and tested. All they see is something in a box that they have to sell. So when they ask for more features (as if you simply add them like you add flour to a recipe), and an engineer tells them that rushing it out may lead to security holes, etc., they just give you a blank stare.

It really doesn't have to happen, though (2)

Anonymous Brave Guy (457657) | more than 12 years ago | (#2946560)

When code has to be done before a certain deadline (usually yesterday), this kind of shit always happens. I happen to be one of those idealistic (youthful) coders, and cringe thinking about what sometimes goes into released software.

I am also one of those young, idealistic coders. I work in a team of seven people, with two senior developers, four junior (including me) and a project management type who works with the two team leaders and handles a lot of the paperwork and client contact.

I have always maintained that "quality is free", and for two years on this project, I have been proving it. I now get assigned far more than my fair share of the more challenging/high risk tasks, because they know I'll get it done, and it'll work in the end (or at least if it doesn't -- hey, no-one's perfect :-) -- it'll be easy to fix).

The old-timers, after making the usual comments about "can't do it that way in the real world, son" or "that's just not what happens", are slowly but surely coming around to acknowledging that actually, you can. I recently had a performance review, in which my manager recorded three things I think are really important here. First, I am prepared to look critically at multiple designs and pick the one that works best for the task at hand. Secondly, I do test things thoroughly. Thirdly, I do generate rather more than my fair share of the output from the team. I don't think these facts are independent.

Of course, I'm blessed with immediate management who are smart enough to let me get on with it. Sometimes they raise concerns about what I'm doing, particularly where I prefer a strategy that on the face of it looks like a bigger risk but in reality is likely to pay off much more. After all, it's their job to raise those concerns, and mine to address them. But generally, if I have a good argument in favour of my choice, they leave me be. I'm sure they still mumble things about "can't do it that way" under their breath occasionally, but the results are clear for all to see.

I think I am living proof of the fact that quality is free. Well, actually, it pays refunds... "Best practices" are called that for a reason, and it never ceases to amaze me how few people in business get it, and how much money is wasted as a direct result. Hey, if I can do it, there's no reason other people can't. All it takes is management with a couple of brain cells to rub together.

Is it any wonder why there are so many bugs in software? There is never even time to design, let alone test.

There is always time to design properly and test. Good design, implementation and testing takes a lot less time than bad design, botched implementation and rushed testing. You just have to have enough faith to do it, and resist the management bull that is short-termism. If you can do that and get the results for long enough, then you'll establish a level of credibility that commands the respect of your superiors, and you've won.

You sound like me.... (1)

_avs_007 (459738) | more than 12 years ago | (#2947235)

I like to put some thought into my code, before I implement something. Hence my objects are easier to use and extend, hence I get more challenging tasks.

Also, the reasons you state are why I like working in a lab/research group rather than a product-driven group. We don't have those arbitrary deadlines pulled out of someone's hind-quarters, and we don't code with making the most $$$ in mind. Well, not in the usual way, anyway. Our motivation is not to sell products. Our motivation is to design cool things that will make you want to go out and buy our silicon. Heck, sometimes when we code, we don't even care if you buy our silicon or somebody else's silicon. As long as we create a need for you to buy somebody's silicon, we'll leave it to marketing to steer you in our direction ;)

One time (1)

_avs_007 (459738) | more than 12 years ago | (#2947252)

One time somebody told me that they didn't understand the Object Oriented world, etc., or that you can do the same things with this "older" language, etc.

My mentality to that was, "If I thought like you did, I would still be programming in assembly"

Another time, someone made the reference, "You can't teach an old dog new tricks"...

My response was, "If I ever get like that, put me out of my misery"...

You can be dang sure that even when I'm old and gray, I'll still be learning every new technology out there, just so I can rest easy.

Heck, I may still even be reading /. and get nailed as a troll for bickering with some of the youth and some of the old-farts.

Re:One time (0)

Anonymous Coward | more than 12 years ago | (#2947610)

My response was, "If I ever get like that, put me out of my misery"...

Just wait until programming is nothing more than dragging pictures around and connecting them and you never see a line of code.

Re:He nailed it. (0)

Anonymous Coward | more than 12 years ago | (#2946648)

So you end up taking a quick look at the project's complexity, compare it to something you did before, and tell them how long that previous project took.

One way I've found to give a more accurate estimate is, instead of looking at the project as a whole, to divide it into lots of smaller parts, estimate the time necessary for each of those, and add them. Typically the sum is completely different from your initial ballpark estimate.

In other words, calculating an estimate should be done only once you have a preliminary design: that way you'll actually *think* about what you're going to do instead of just throwing a number in the air.

Re:He nailed it. (2)

smagruder (207953) | more than 12 years ago | (#2947430)

I happen to be one of those idealistic (youthful) coders...

I'm 35 and it's still nice to be "idealistic and youthful" in my programming efforts. I have rarely had the privilege to work in a shop where crap-coding wasn't the rule. But I know from experience that appropriate upfront design and follow-through as well as ensuring readable/maintainable code make for a far more efficient and effective programming environment. And happier programmers too.

No one really has perfected the art of accurately estimating projects.

On top of this, very few programmers have the personality, confidence and communications abilities to take on the ignorance of management and personnel in other business units. Programmers must always be aware of their audience/customer, have empathy for their concerns and simultaneously be able to defend/support every technical decision they make in clear-cut terms.

It's all OK (1)

the bluebrain (443451) | more than 12 years ago | (#2945937)

From the end of the article:
"However, there may be a light at the end of the tunnel. Recently, Bill Gates issued a company-wide memo to all Microsoft employees dictating that from this moment forward, security will be a programming priority"

Oh, that's all right then :)

From a systems and network perspective (1)

Vexler (127353) | more than 12 years ago | (#2945970)

I am a network engineer in a large manufacturing firm, and I can tell you that the cook is only as good as his ingredients - if the patches are, well, patchy, then the network and higher-end systems architecture will be insecure, no matter how many patches are installed. There is only so much that a network or systems guy can do before (or unless) he sees the actual code and can address the issue from there. Remember that not-so-famous patch that Microsoft put out last summer, only to release a patch for that patch? That was where the dam finally burst for me.

Inexperienced programmers and C/C++ (2, Insightful)

neonstz (79215) | more than 12 years ago | (#2945979)

Where I work there are people responsible for an important part of a project who can't understand why returning pointers to variables on the stack (from functions in C/C++) is bad. When this happened to one guy, he blamed the library he was using (an in-house library we're currently developing). When a colleague checked out the code, he was horrified to find that the guy had done just that: returned a pointer to a local variable.
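A minimal sketch of the bug described above (the function name is hypothetical, purely for illustration):

```cpp
#include <cstdio>
#include <string>

// BROKEN: returns a pointer to a local buffer that dies with the stack frame.
// const char* make_greeting() {
//     char buf[32];
//     std::snprintf(buf, sizeof buf, "hello");
//     return buf;              // dangling pointer: undefined behavior
// }

// FIXED: return an object by value, so the caller owns a live copy.
std::string make_greeting() {
    char buf[32];
    std::snprintf(buf, sizeof buf, "hello");
    return std::string(buf);   // copied out before buf goes out of scope
}
```

The broken version often appears to "work" in tests because the dead stack frame hasn't been overwritten yet, which is exactly why this class of bug slips past inexperienced programmers.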

But how do you differentiate between good and bad programmers? First of all, I think a good programmer has to really enjoy programming. When I went to college (software development degree), I coded a lot of stuff in my spare time (I'm not saying that I'm a particularly good programmer, but at least I'm better than some of the other guys at work :). Not everyone does that; some hardly complete their programming assignments. This means that after some years of college they will get their degree, but they can't write a good program. But they will still get a job.

When writing software, especially in C or C++, you have to have a good knowledge of how stuff actually works: how virtual functions work, the difference between the stack and the heap, what happens when objects go out of scope, and so on. This stuff may be a boring part of a programming course, but it is actually very important. One problem is that in some places people don't learn C or C++ at all, only Java, and thus they never need to learn most of this stuff. (Although they may instead have to learn a lot of Java-specific material, such as how the garbage collector works.)

The problem, as I see it, isn't that there are too many inexperienced programmers, just too few of the good ones. Another problem is the tool. Many projects are written in C or C++, which pretty much allow you to do anything. It is possible to write robust programs in C++. If I were to manage a large C++ project, one of the first things I would do is ban almost all use of raw pointers and C-style arrays: smart pointers with reference counting, array classes with optional bounds checking, and things like that. Why use char* when you can use std::string (or your own string class)? Another solution is not to use C/C++ at all, but in many cases that is just not an option. And I think C++ is a really powerful language which, with a tiny bit of effort by the programmer(s), can be a robust language, even for "newbies".
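A tiny sketch of the char* vs. std::string point (function and inputs are made up for illustration):

```cpp
#include <string>

// With a raw char* the caller must size the destination correctly:
//     char dst[8];
//     strcpy(dst, user_input);   // overflows if user_input is >= 8 bytes
//
// std::string manages its own storage, so concatenation cannot smash a
// fixed buffer; the string simply grows to fit.
std::string join(const std::string& a, const std::string& b) {
    return a + b;
}
```

The safety here costs nothing in programmer effort, which is the point: the class enforces the bound the C idiom leaves to discipline.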

Back to 1 program for 1 task, but better? (1)

McDutchie (151611) | more than 12 years ago | (#2945996)

The article pinpointed some of the main causes behind insecurity well: bloat (integration of unrelated functions) in single programs exacerbating insecurity which is in turn exacerbated by integrating several bloated programs with each other.

I feel we need to return to the old Unix model of one program, one function. Small programs that do one thing well are a lot easier to debug and make secure.

"Integration" could be attained by making several small programs collaborate according to open standards. It's got to be possible somehow to do this *and* attain the level of user friendliness today's lusers expect.

[Before you all yell "UNIX pipe", it actually has to be usable by the average mouse-clicking Joe. The challenge is making this work well with a GUI. Nobody has managed this so far, but I believe it's got to be possible.]

For example, a GUI-based word processor would by itself include only bare-bones functionality, such as text editing and basic layout. It would not include a spell checker; the spell checker would be a separate GUI program that collaborates with the word processor using an open protocol, which would regulate permission to insert a menu item to invoke the spell checker and to edit the text directly in the document, without the need to save it to disk first. (Mac OS users might recognize Word Services in this description.)

Another obvious advantage, beyond security, is that power users could construct their own working environments from such applications - e.g. using a different spell checker or text editor. Using the different basic programs in various combinations would in turn expose more bugs, improving security.

To keep this user-friendly, collaborating programs could be bundled into application folders, much like Mac OS X does already with the files belonging to one application. Opening the folder would launch all the contained programs at once. (Or perhaps the user could define a "master" program that is launched in the front, with the "slave" programs launched in the background and invoked as needed.)

If open, GUI-based collaboration protocols exist for every imaginable type of functionality, you could combine ("integrate") as many small, well-tested and well-functioning programs of different manufacturers as you want, to give the impression of a big integrated package, without compromising security.

Of course, fat chance that such an idea would go mainstream in the near future, as it would mean the end of the Micro$oft business model. (Imagine! No need to upgrade the entire package and take loads of unwanted extra junk just to get that one function you want!)

Apple tried something rather like this once with OpenDoc, but it was not as open as the name suggested, plus it was bloated, plus the user was not ready for its extremely document-centric model (which is not part of my idea above), so it failed. I think this model deserves a second chance, done right this time - the Open Source way.

Re:Back to 1 program for 1 task, but better? (2)

C. Mattix (32747) | more than 12 years ago | (#2946138)

This is what well-designed OO projects are supposed to be like. The key is then to make the infrastructure between the components secure. When that communication is secure, each component is the aforementioned "little program" that does "one task". I think one of the biggest problems is the number of hybrid OO/traditional programs out there.

Re:Back to 1 program for 1 task, but better? (2)

arkanes (521690) | more than 12 years ago | (#2946782)

This was also the basic idea behind ActiveX, COM, COM+, CORBA, and whatever the acronym of the day is. Of course, it doesn't seem to be having the intended effect...

Incidentally, I saw a neat mock-up of a GUI wrapper for this sort of interface: you could drag and drop linking symbols from the menu bar onto other windows in order to create "links" between the programs. While it was really neat within the limited demo I saw, I don't think it ever went anywhere :(

Nonsense: Consider Open BSD (2, Insightful)

Futurepower(tm) (228467) | more than 12 years ago | (#2946010)


In my opinion, the article is extremely badly written. Also, it is nonsense, as is easily proven by giving a link to another operating system:

Open BSD: Four years without a remote hole in the default install! [openbsd.org]

If the Open BSD team can make a secure operating system as volunteers, Microsoft, with a reported $33 billion in the bank, could take one of those billions and clean up their code.

Microsoft's security problems come partly from feeling that they don't have to care, apparently.

Also, maybe there is some secret U.S. government surveillance agency that requires that Microsoft operating systems not be secure. For years the U.S. government tried to prevent cryptography. For example, see these notes from the Center for Democracy and Technology: An overview of Clinton Administration Encryption Policy Initiatives [cdt.org]. The notes say, "The long-standing goal of every major encryption plan by the [U.S. government] has been to guarantee government access to all encrypted communications and stored data."

It is not impossible that software insecurity is secret U.S. government policy. The U.S. government is involved in many hidden activities, as this collection of links and explanation shows: What should be the Response to Violence? [hevanet.com]

Re:Nonsense: Consider Open BSD (3, Insightful)

WolfWithoutAClause (162946) | more than 12 years ago | (#2946088)

>Microsoft's security problems come partly from feeling that they don't have to care, apparently.

Or more precisely, that features were literally more important than security.

If they spend 80% of their time trying to improve their feature set, then they will only be able to spend 20% worrying about security; and if that turns out not to be enough, tough.

What's been happening recently is that Linux is competing with them and is seen as more reliable, which has actually hit Microsoft in the pocketbook. They are having to change their priorities to adapt to this new threat.

It will be interesting to see if they can change perceptions quickly enough.

>Also, maybe there is some secret U.S. government surveillance agency that requires that Microsoft
>operating systems not be secure. For years the U.S. government tried to prevent cryptography.

That's more or less one of the two jobs the NSA does. One is to 'protect national security'; the other is to protect commerce. The latter probably requires a secure OS; the former doesn't. (That's why there were export versions of software.) The NSA is a pretty schizoid organisation, but most of the time they do a good job.

Re:Nonsense: Consider Open BSD (0)

Anonymous Coward | more than 12 years ago | (#2946219)

What tosh! Look up your facts: OpenBSD has had several local root compromise holes (do a search at http://www.securiteam.com). I agree OpenBSD is more secure than any other publicly available operating system, and I advocate it; just don't be ignorant of the facts.

"Local" is NOT "remote". (2)

Futurepower(tm) (228467) | more than 12 years ago | (#2946274)


Note that "local" is NOT "remote".

It is assumed that you can trust your own staff more than you can trust all the hackers on the entire Internet.

Re:"Local" is NOT "remote". (0)

Anonymous Coward | more than 12 years ago | (#2946997)

Oh dear, so what happened to "security in depth"? More security attacks come from internal staff (knowingly or not) than from the Internet attacker. A perfect example would be Nimda: the receptionist who downloads the email and executes the attachment. I think the main principles of secure code are:

IDEA - PLAN - TEST - IMPLEMENT - TEST - EVALUATE OPENLY - TEST - EVOLVE.

Maybe we can't help but evolve software, and the only way to test it is out in the wild. :)

Formerly secret document (2)

Futurepower(tm) (228467) | more than 12 years ago | (#2946238)


After considering policies like the one below, it is not difficult to imagine that there may be a U.S. government agency that wants Microsoft software to be insecure.

Page obtained as a result of the Freedom of Information Act [cyber-rights.org].

It says, "I am here as a special envoy appointed by the president and reporting to the special Deputies Committee of the NSC."

"Our goal is a world in which key recovery encryption systems are the dominant form of technology in the commercial market."

At the time, there was no public discussion that the U.S. government was doing this.

NSA Linux (1)

RageMachine (533546) | more than 12 years ago | (#2946326)

Although nobody can prove that one open-source OS is more secure than another, OpenBSD has had its share of security flaws just like every other system; FreeBSD 4.3 and Linux 2.2.15 come to mind. Linux 2.4 hasn't had a serious security flaw yet, and it is at the 2.4.18 (patch) level, which is a better record than the 2.2.x series.

Any program (ftp/httpd/smtp) that has a security flaw affects ANY UNIX-based system that uses it, unless the flaw is OS-specific.

Re:NSA Linux (3, Informative)

Dwonis (52652) | more than 12 years ago | (#2946832)

Linux 2.4 hasn't had a serious security flaw yet. And it is at a 2.4.18 (patch) level.

The iptables connection tracking security flaw was a major flaw.

new era (2, Interesting)

Veteran (203989) | more than 12 years ago | (#2946048)

All code attacks are nothing but an attempt by people to maintain illusions of superiority: "I must be a better programmer than Linus Torvalds because I can sabotage his work." It is the vandal throwing paint on an existing painting and saying "See I am an artist too". No, if you were a better programmer than Torvalds you would have written a better kernel than he wrote.

People become 'elite crackers' because it is much easier to do destructive things than constructive things; buildings are much easier to tear down than to build in the first place. Because of the asymmetry of the effort involved they get the illusion that they are superior to the people whose code they are cracking.

There is a lot of frustration in youth; the discovery that there are people who have done
much better work than you have ever done - or will ever do - leads to an illusion of inferiority. People attempt to counter that illusion with an illusion of superiority. Not everyone can be as good a coder as Alan Cox - I know I am not even vaguely in his league - but that doesn't make me feel inferior; just different. Nor does being a better coder than most people make me feel superior to them; just different.

Knowing and understanding your limitations and weaknesses is just as important in life as knowing your abilities and strengths. Most people try to hide their limitations and weaknesses from themselves rather than exploring them, and that is a serious error; you can only do that by lying to yourself. Lying to yourself - when you don't even have a clue that is what you are doing - is a miserable way to go through life.

Re:new era (1)

bad-badtz-maru (119524) | more than 12 years ago | (#2946419)


All attempts to psychoanalyze a certain subculture by people not qualified to do so are most likely an attempt by the "analyst" to maintain illusions of superiority over those in the subculture being "analyzed".

maru

Aren't most security holes (5, Insightful)

buckrogers (136562) | more than 12 years ago | (#2946096)

caused by the C library's poor implementation of strings, and by the lack of any runtime bounds checking?

The argument that these things slow down code too much doesn't make much sense, considering that we have to do the runtime bounds checking ourselves, every time, and that we occasionally make mistakes.

I think it is time we dropped all insecure functions from the standard C library and replaced the library with a bounds-checking version that is also more complete and consistent.

It would also be interesting to have a taint flag on the standard C compiler, like the Perl compiler has, to detect when people are using user input as format strings and the like without cleaning the input first.
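A small sketch of the format-string class of bug mentioned above (function name hypothetical; the fixed version is shown, the broken one only as a comment):

```cpp
#include <cstdio>
#include <string>

// BROKEN: external input used directly as the format string.  Input
// containing "%s" or "%n" lets an attacker read, or even write, memory:
//     std::snprintf(buf, sizeof buf, user_input);
//
// FIXED: the format string is a constant; input is only ever data.
std::string log_msg(const char* user_input) {
    char buf[128];
    std::snprintf(buf, sizeof buf, "%s", user_input);
    return buf;
}
```

With the fix, an input like "%n%n" comes out as the literal four characters rather than being interpreted as conversion directives, which is exactly what a taint check would try to guarantee.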

Re:Aren't most security holes (0)

Anonymous Coward | more than 12 years ago | (#2946506)

Yes, and that's why we have C#.

Re:Aren't most security holes (1)

thomas.galvin (551471) | more than 12 years ago | (#2946632)

Yes, and that's why we have C#.

I was going to learn C#... so I opened up one of my Java programs and replaced all the "String"s with "string".

;-)

Re:Aren't most security holes (1)

kz45 (175825) | more than 12 years ago | (#2947100)

I was going to learn C#....so I opened up one of my Java programs, and replaces all the "String"s with "string".

So was I, until I realized you have to distribute a set of 20MB DLLs with your program.

Re:Aren't most security holes (1)

_avs_007 (459738) | more than 12 years ago | (#2947274)

What are you talking about? The dot net runtime installer is ONE 20 meg executable.

Besides, windows will ship with the .net framework already installed.

Or you can tell VS to package your whole app into one .msi installer file, and it will package only the parts of the .NET framework that your app has dependencies on.

In our lab, all our machines have the .NET framework installed already, so when I deploy an app to another machine, I only need to copy the assembly over. If your app consists of only one .exe file, then that is the only file you need to "copy" over.

so use it (2, Informative)

Anonymous Coward | more than 12 years ago | (#2946513)

If you want run-time bounds checking, use run-time bounds checking. --enable-bounded is there right next to the --enable-omitfp in your glibc configure script.

Your argument that you have to do your own bounds checking, every time, is wrong. If you have a good grasp of the C language, you should be able to code perfectly secure programs that only perform bounds-checking on external (e.g. user-input) strings.

C is a lot like X: the people who criticise it are exactly the people who don't understand it. If you want bounds checking, use bounds checking. If you want garbage collection, use garbage collection. If you want the specific warnings you've mentioned, use lint. ALL OF THESE TOOLS ALREADY EXIST AND ARE IN COMMON USE. It's alright if you're ignorant of these tools, but for heaven's sake don't blame the C language for it.
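The discipline described above - bounds-check once, at the trust boundary, so internal code can rely on the invariant - might look like this (hypothetical helper, written for illustration):

```cpp
#include <cstdio>
#include <cstring>

// Reads one line of external input, enforcing the buffer bound on entry.
// Returns 0 on success, -1 on EOF/error or if the line exceeded the buffer.
// Internal code calling this never needs to re-check the length.
int read_line_bounded(FILE* in, char* dst, size_t cap) {
    if (!fgets(dst, (int)cap, in)) return -1;   // EOF or read error
    size_t n = strlen(dst);
    if (n > 0 && dst[n - 1] == '\n') {
        dst[n - 1] = '\0';                      // strip the newline
        return 0;
    }
    return feof(in) ? 0 : -1;  // no newline and not EOF: line overran cap
}
```

Note that fgets itself never overruns dst; the extra logic only detects truncated (over-long) lines so the caller can reject them instead of silently processing a partial read.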

Re:Aren't most security holes (2)

defile (1059) | more than 12 years ago | (#2946826)

The argument that these things slow down code too much doesn't make much sense, considering that we have to do the runtime bounds checking ourself, everytime, and that we occasionally make mistakes

It's a stupid argument. If you profile all of the programs on freshmeat, 95% of them will be bound by interactive user input, or disk, or network, or memory, not CPU.

Unless you have a specific need or die-hard preference, most programs today should be written in a high level language. If you even have CPU bottlenecks, you can rewrite the hotspots in a lower level language--kind of like how people used to optimize portions of C code by rewriting it in assembly.

I suggest Python. ;)

Re:Aren't most security holes (0)

Anonymous Coward | more than 12 years ago | (#2947141)

[...]
It would also be interesting to have a taint flag on the standard C compiler like the perl compiler has to detect when people are using user input as format strings and the like, without cleaning the input first.

Perl's taint checking happens at run time, not compile time. If you're willing to bite the bullet and let the language do a bunch of run time analysis, you ought to switch languages rather than repeating Bjarne's C-plus-bolt-ons mistake.

Greg Bacon

Mistitled article (1)

Grax (529699) | more than 12 years ago | (#2946144)

This article is really about why coding on deadlines is insecure. It overlooks developer-controlled projects that are done when they are done.

Re:Mistitled article (0)

Anonymous Coward | more than 12 years ago | (#2946231)

I thought it was talking about developer-controlled projects! Do developers not mature into team leaders, then managers? I think human fallibility is more the case. Oh, and egos, pay rises, etc.

Re:Mistitled article (1)

Grax (529699) | more than 12 years ago | (#2946327)

No. Managers are a completely different species from coders. A developer controlled project means that the actual coders make the decision to say it is done and assign the version numbers to it.
(At least in my experience.)

I have never been managed by a former or current coder. My managers have been failed coders that are good at paperwork, marketing personnel, and a manager from another company hired for that purpose.

The reason a lot of useful projects in the open-source world have version numbers less than 1 is because the version numbers are assigned by the developer and a version 1.0 means the developer says it is ready.

By contrast a company releasing a closed source product assigns version numbers to make the customer feel good. A 1.0 assigned by the managers, schedules, timelines, etc means "it is barely functioning but we want to make some money from it now".

Software Engineering not yet Engineering (5, Insightful)

gilroy (155262) | more than 12 years ago | (#2946175)

Software often blows up. Bridges tend not to fall down. Why? Because the field of civil engineering has matured greatly. We know a lot about why bridges fall down and how to avoid it. There are standard tools, standard analyses, and standard, well, standards. We also have some regulatory oversight over construction projects: construction codes and occupancy codes. These don't guarantee success, but they usually throw a spotlight onto cost-cutting and corner-cutting as causes of failure.


Consider, however, software engineering. The platform you use, the language you speak, the tools you employ -- they all evolve over short time scales. None have had a century or more of Darwinian pressure applied. No one expects them to work, fully. The liability for failure rests with the company or person using the software, not with the company or person writing it. We haven't had the time to develop the technical or social methods for preventing bad software and reinforcing good software.


How many computer programming professional societies require rigorous entrance exams and periodic proof of competency?


This will continue until the costs are brought back to the companies that write insecure code. This can happen through government regulation -- the creation of a "software building code" -- or through the dead hand of Adam Smith -- companies start to avoid purchasing insecure software.


The greatest sign that this sort of sea change might be a-coming? The fact that Microsoft feels there is enough market interest to attempt, at the very least, to jump onboard a PR train.

Re:Software Engineering not yet Engineering (2)

sjames (1099) | more than 12 years ago | (#2946773)

Software often blows up. Bridges tend not to fall down.

Versatility is another issue. Bridges are single-purpose constructions that are custom-designed for a single installation in a relatively unchanging environment. Nobody has to produce a bridge that must be deployable over any river, across a small canyon, or over a highway with no changes. Bridge designs are never required to be 'cross-platform', that is, compilable into concrete, steel, or tissue paper.

Another factor is load. Whenever a novel situation presents itself (moving a truly massive object over the bridge, for example), engineers first determine whether the bridge in its current state of repair can handle the load. Software, on the other hand, is simply loaded 'til it breaks.

Finally, there is almost never a neutral or even adversarial third party with oversight. The programming team cannot tell management that the demanded timetable will result in the software being refused certification and send the project back to step one. Instead, insisting on a timetable that will allow you to produce secure and robust code will eventually get you replaced by someone who will just smile and nod when management (really marketing) tells him/her what the timetable will be.

Re:Software Engineering not yet Engineering (2)

gilroy (155262) | more than 12 years ago | (#2947373)

Blockquoth the poster:

Finally, there is almost never a neutral or even adversarial third party with oversight.

Are you asserting that, in building a bridge, there exists no pressure to finish on time, or better, early? To hold down costs? To maximize profits?


Of course there is. Why then don't most bridges fall down? Because the law clearly stipulates who would be held liable -- the company that cut corners, that used substandard materials, that ignored the advice of its professionals. One of my points is, we don't hold software companies liable for the failure of their software. You can cry "special circumstance" all you want, but this lack of responsibility contributes a lot (IMHO) to the relatively immature state of software engineering.

Re:Software Engineering not yet Engineering (2)

sjames (1099) | more than 12 years ago | (#2947473)

Because the law clearly stipulates who would be held liable

That would constitute part of the neutral or adversarial third party oversight. Sure, some companies cheat anyway (pretty much the human condition), but the temptation is reduced by those third parties. I also note that in most cases, the cheaters do the engineering correctly, then cheat in the actual construction.

If software companies could be held liable if they ignored their software engineers, perhaps timetables would be controlled by the engineers rather than marketing.

A way around the issue of open-source programmers being unable to afford lawyers and insurance is simple enough: make the liability laws apply to business transactions (paid software, contract software) only. Further, various 'levels' could be defined. The strictest goes to software that could actually endanger human life; the least strict goes to annoyances such as rendering glitches in a game.

If someone uses a program in a manner inconsistent with its rated level (such as rigging a flight simulator as a navigational aid), the liability goes to them.

If the rating were required to be displayed prominently, consumers might be better able to exercise some choice.

Re:Software Engineering not yet Engineering (0)

Anonymous Coward | more than 12 years ago | (#2946850)

You seem to have forgotten that there are no hackers attacking bridges. Breaking bridges is called terrorism and is severely punished. Somehow, a lot of people consider it OK to break into a computer.

The bridges are not better designed than computers, they are simply less frequently attacked.

Re:Software Engineering not yet Engineering (2)

gilroy (155262) | more than 12 years ago | (#2947357)

Blockquoth the poster:

The bridges are not better designed than computers, they are simply less frequently attacked.

The last part of that sentence is true. The first is absurd. Bridges most certainly are better designed than (most) software: Bridges fulfill their intended function under heavy load and do not collapse. Software, even when properly installed and run, blows up with distressing regularity.


And when people attack bridges, they assault the physical integrity of the structure; they don't exploit design flaws. Very rarely, if ever, does a terrorist say, "Hey, look here. They forgot to count the rivets. If I pull one, it falls apart." Compare that to, say, the number of buffer-overrun exploits out there.

I don't get it (2, Insightful)

Fefe (6964) | more than 12 years ago | (#2946225)

This article says nothing whatsoever about why coding is naturally insecure. It says that Microsoft is unable to write secure code. Well, duh!

Actually, coding is not inherently insecure. There are a couple of good counter examples (qmail and djbdns, for example).

Microsoft's code is insecure because this way customers can be made more dependent on them. And each time they download a patch, they get a big Microsoft logo in their face. Talk to a PR specialist if you don't see why this is good for them. Besides, there is no incentive to make bug-free code. Nowadays customers are so used to broken code that they actually believe that it can't be any different.

Redesign programming languages (1)

mathematician (14765) | more than 12 years ago | (#2946277)

I am only a recreational programmer. But recently I have been writing code, and in the middle of the night I suddenly think of a security hole. I write in C, and it is just too easy for buffer overflows and the like to slip through, even if you are thinking about them as you program. In the end, it was not that hard for me to batten down the hatches in my code, but it was a small program and I had no time pressure.

It seems to me that we need a new approach to designing code: an approach where things like checks for buffer overflows are automatic in the program design. I have heard of an approach (maybe 20 years ago) where you "prove" the correctness of the program as you go along. Proving correctness cannot work for all programs, because of the undecidability of the halting problem, but most programming tasks could be written in such a way that they could be proven.
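Automatic bounds checks of the kind described already exist in a limited form even in C++: containers can validate every access rather than leaving it to the programmer. A minimal sketch (the wrapper function is hypothetical):

```cpp
#include <stdexcept>
#include <vector>

// at() validates the index on every access and throws std::out_of_range
// instead of silently corrupting memory the way raw-array indexing can.
bool safe_read(const std::vector<int>& v, std::size_t i, int& out) {
    try {
        out = v.at(i);
        return true;
    } catch (const std::out_of_range&) {
        return false;   // out-of-bounds access detected, not exploited
    }
}
```

This is run-time detection rather than the compile-time proof the comment wishes for, but it turns a potential buffer overflow into a well-defined, catchable error.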

As applications become more and more complicated, it seems to me that some very clever person needs to rethink the whole way in which we designed programs. Possibly a very creative breakthrough approach is required.

Go functional? (3, Interesting)

Anonymous Brave Guy (457657) | more than 12 years ago | (#2946572)

You might want to investigate functional programming languages, such as ML and Haskell, then. They do have a very different approach, one that is much more easily proven to be correct. It also has several other demonstrated advantages, including very much faster development and shorter code to achieve the same results as traditional programming methodologies, in several case studies/competitions. Try looking at the ICFP programming contests for the past few years for some very interesting reading. (A quick Google search will turn up all the homepages straight away.)

The big thing holding them back right now isn't technical merit, it's lack of "critical mass". Most managers and senior development types simply don't do enough homework to know about these things and the potential advantages they can offer. But if you're programming purely for recreational reasons, there's no reason you can't play with the best toys. Free compilers and libraries are available for many of these languages. Happy coding...

It's all bullshit (2)

Alex Belits (437) | more than 12 years ago | (#2946376)

There are only three causes of insecure code:

1. Developers' ignorance.
2. Developers' stupidity.
3. Selling underdeveloped software.

Re:It's all bullshit (1)

yggdrazil (261592) | more than 12 years ago | (#2946634)

You're probably right.

I'm a developer. It's my job.

It's my job to try to keep current, and avoid hubris. It's my job to try to do thing properly, and build a solid construction.

Yet I'm always afraid when code ships.

Accidents will happen. There will be errors. I try to not make them in the first place, and to weed them out afterwards by testing my stuff properly, but errors will ship in every project.

It's a humbling job.

But at least I know my job is difficult, and I take precautions. A lot of people in the business don't care that much, either because they don't have proper training and experience, or because they think they are immortal programming gods.

I believe in compulsory code reading in front of colleagues, maybe even managers and customers: constant peer pressure that will force developers to do things properly. I've been involved in projects which have talked about code reading, but it rarely actually happens.

Two Points (1)

tsprad (160992) | more than 12 years ago | (#2946499)

Two points:

1. Software is still a lot better, and a lot more secure, than the alternatives. Imagine running an insurance company without it.

2. Most of the trouble with software is just overarching ambition. There's no way that millions and millions of lines of interconnected code will ever work consistently, reliably, and securely. But it's always more tempting to add more features, more chrome and tail fins, than to concentrate on the problem to be solved and remove everything that doesn't help solve it.

Why is code insecure? (2, Insightful)

thomas.galvin (551471) | more than 12 years ago | (#2946704)

IMHO, there are three chief reasons code is vulnerable.

Time. The article was right about this one. If you look through our source code, you can see a definite difference between the "we've got all the time in the world, so follow the style guide to the letter, comment everything, and desk check it all before you send it to test" code and the "beta is due on Monday, so tell your girlfriend to have a nice weekend, and could you get some Code Red on the way in, we're going to be here a while" code.

When you are trying to get code done fast, you are much more prone to looking only at the stated goal of the code (i.e. it takes file X, converts it to format Y, and sends it to machine Z) and ignoring things like modularity and security. You tend to be much more concerned with "how do I get this to work" than "how can someone get this to break".

Ignorance. I don't know a whole lot about buffer overflows, or gaining root when I shouldn't have it, etc. I've got a book on it (which I'm sure my sysadmin would love to see sitting on my desk), but the fact of the matter is that most colleges don't do a whole lot of teaching in this area; what people know about security holes is usually because they hack around (either on their own system or someone else's), or they got hacked. The industry would be a lot better off if schools were teaching would-be programmers what people will try to do to their systems, and how to avoid it.

Over Reliance on the OS. At least in Microsoft's case, I believe they are trying to do too many things at the OS level, which means a security flaw that affects one program can often be exploited against all programs. Take, for example, the registry. If one program's .ini file gets nabbed, it probably isn't as big a deal as if the entire system registry gets nabbed.

Why software developers write insecure code (3, Insightful)

defile (1059) | more than 12 years ago | (#2946757)

Developers who are more inclined to write secure code seem to come from a background that involves administering free UNIX systems in the mid-90s. This is when we started seeing an explosion in the number of nodes attached to the internet 24/7, most of them running a freenix. We were the first to bear the security onslaughts that everyone deals with today. A sneak preview.

We had to deal with release after agonizingly insecure release from Berkeley, Washington University, and Carnegie Mellon. Deal with urgent "security patches" that simply add bounds checking to strcpy, and pray to god that we get our bugtraq email before the script kiddies have figured out how to uncripple the exploit code.

Servers being attacked just because one user was running an IRC bot in a channel some teenage punk wanted to take over. ISPs being knocked off the net just for running an IRC server. Spammers, denial of service attacks, buffer overflow exploits, rootkits, social engineering, man-in-the-middle attacks, password sniffing, brute force cracks.

Developers who lived through this find that the rest of the world (i.e., the people starting to do serious stuff on the internet today) is blissfully unprepared for the security onslaught. More NT servers are connected now than ever, ASPs are coming to the harsh realization that they have 40,000 lines of insecure trash running their web sites, and home users are completely unaware that their broadband "always-on" connection really means "always-vulnerable".

The only common trait we share is cynicism. Cynicism toward all developers, all companies, all users, everyone. Hundreds of security holes being introduced every second. Every gadget you buy, every shopping cart you push, your comb could have a buffer overflow, careful! that milk might be sour!, oh no! quiet or the cake won't rise!!! they're crawling all over my skin--get them off get them off, use the ice pick use the ice pick!!$%*)!@!!

If you as a programmer don't see the world that way, don't expect to write anything but insecure garbage. But don't worry, you'll learn your lesson just as we all did. And don't be mad at us if we laugh, because we're laughing with you.

Sllop! (-1, Troll)

Serial Troller (556155) | more than 12 years ago | (#2946910)

What a bunch of SLOP on Slashdot today. ANAL COX would be proud -- if he weren't busy shoving his cocks in somone's anal regions.

SLOP. Plain and simple. S. L. O. P. What does that spell, other than VA-Ginux's stock crash and Slashdot's evental downfall? SLOP. A lotta, lotta SLOP!!

Alarm bells going off! (1, Insightful)

Anonymous Coward | more than 12 years ago | (#2947091)

"The Microsoft campus contains some of the most brilliant designers and programmers the world has to offer"

Statements like this are silly. *HOW* can the author say M$ has brilliant designers when all you see is the end product?!?! They could have gone through thousands of design iterations, each entirely different, with no vision until they hit on something they think looks good! And brilliant programmers?!?! There are an unlimited number of ways you can write an algorithm, but there are only a handful of ways to do it `brilliantly'! Do the programmers at M$ write brilliant algorithms? Well, let's check the source... oops!

Reflections on Trusting Trust (1, Interesting)

Anonymous Coward | more than 12 years ago | (#2947186)

Ken Thompson knew code was insecure almost 20 years ago. I know everyone is focusing on DDoS attacks and buffer overflows, but we can't ignore that code is insecure on several different levels.

Look no further than his excellent paper "Reflections on Trusting Trust", which he wrote for his Turing Award lecture in 1984.

There's a copy online at http://cm.bell-labs.com/who/ken/trust.html.

Good read. Timeless!
Here's an excerpt:

"The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code."

Physical construction analogies inaccurate? (2)

michael_cain (66650) | more than 12 years ago | (#2947246)

Lots of people use construction analogies, as in "...the first woodpecker that came along..." Analogy is always slippery, and certainly very little software is built in the same fashion that a big bridge gets built:
  • The bridge is designed, in considerable detail, by a small team of experienced people, long before any construction begins. The team includes experts in diverse disciplines. New approaches are subjected to extremely expensive testing in the form of computer analysis, physical models, etc. There are no "start up" bridge design companies staffed by college dropouts.
  • While hundreds of workers may be involved in the actual construction, very few are doing things that can by themselves render the bridge unsafe. By comparison, almost any programmer on the team can render the entire program insecure by failing to test an input adequately at runtime.
  • Critical bridge components are tested to a degree (and can be tested to that degree) rarely seen in the software world. What's the equivalent of x-raying a large sample of the welded joints?
Just for the sake of argument, I would assert that most shrink-wrap software and its downloadable equivalents are built to standards no better than those of a home craftsman building a bookcase. And like the bookcase, that software works just fine, most of the time, until someone pushes at it the wrong way. However, if we built bridges and skyscrapers the way that craftsmen build bookcases...

The Cruelty of Really Teaching Computer Science (1)

Bobtree (105901) | more than 12 years ago | (#2947453)

is a great essay by Edsger W. Dijkstra (yes, the shortest-path algo guy). The central point is that the complexity of software development is so much greater than that of anything else we construct that this sort of problem is inevitable.

It can be read here:

http://www.ulla.mcgill.ca/arts150/arts150r1.htm

diversity is the only solution (2)

markj02 (544487) | more than 12 years ago | (#2947529)

Beliefs that something as big and complex as Windows, Office, or Linux can be made secure are misguided. You can do a little better than those systems by using better tools, but that won't save you.

The only solution is to have a wide variety of software, so that any particular fault only affects a small number of users. Yes, you pay for that in interoperability and support costs, but the alternative, an operating system monoculture, will be getting more and more vulnerable and unreliable over time.
