
Is Finding Security Holes a Good Idea?

michael posted more than 9 years ago | from the dare-not-speak-its-name dept.


ekr writes "A lot of effort goes into finding vulnerabilities in software, but there's no real evidence that it actually improves security. I've been trying to study this problem and the results (pdf) aren't very encouraging. It doesn't look like we're making much of a dent in the overall number of vulnerabilities in the software we use. The paper was presented at the Workshop on Economics and Information Security 2004 and the slides can be found here (pdf)."



Google is teh friend (5, Informative)

Mz6 (741941) | more than 9 years ago | (#9398913)

Posting a PDF on /. is almost certain server death. Here are Google's HTML versions:

Is finding security holes a good idea? []

Writing Security Considerations Sections []

Dammit.. AC (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9398940)

I meant to post as AC.. Karma be damned.

In case the Google'd version gets nailed... (0)

Anonymous Coward | more than 9 years ago | (#9399065)

Here is the freecache to Google's HTML versions: Here [freecache.org] and here [freecache.org] .

Re:Google is teh friend (1)

roror (767312) | more than 9 years ago | (#9399152)

The files are hosted on a university server; that's as good as any other caching server.

Karma Whore (-1, Offtopic)

Gothmolly (148874) | more than 9 years ago | (#9399206)

So if there's a static, 20KB PDF file, it somehow slashdots a server? Bah, I call whore.

Re:Karma Whore (0, Offtopic)

Mz6 (741941) | more than 9 years ago | (#9399247)

I hate loading Adobe's bloatware... I meant to post as AC anyways. Damn... lay off.

first post (-1, Troll)

Anonymous Coward | more than 9 years ago | (#9398918)

frost pist

No (-1, Troll)

Anonymous Coward | more than 9 years ago | (#9398922)

No, but finding gaping assholes [goat.cx] is a good idea.

RTFM?! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9398937)

Looks like a school project or something...the company name is RTFM, Inc.

Fixing vulnerabilities is GOOD! (3, Insightful)

zoobaby (583075) | more than 9 years ago | (#9398938)

In order to fix vulnerabilities, you have to find them. However, as soon as they are found and publicized, some script kiddie exploits them. So yes, finding them is a good idea; patches just need to be released and INSTALLED before script kiddies exploit them.

Re:Fixing vulnerabilities is GOOD! (5, Insightful)

jwthompson2 (749521) | more than 9 years ago | (#9399059)

This is one of the best points the author makes, though. He argues that if automated installation of patches were widely deployed, the benefits of discovery would increase. The problem lies in the number of systems that remain unpatched and thus exposed. The real problem is not that discovery isn't worth the time and money spent, but that it becomes worthless if the patches created are not applied.

Uhuh. Is this good if Microsoft does this? (5, Interesting)

aussie_a (778472) | more than 9 years ago | (#9399201)

In principle, I agree that automatically installing patches is a good thing. But Microsoft has a habit of changing their licenses and installing DRM when people "upgrade" and/or "install patches."

Also, imagine I have 2 programs. Both automatically install patches. Unpatched, they both work fine. But when program #1 is patched, program #2 cannot run at all. Now this will probably be fixed eventually, but in the meantime, I cannot run program #2 at all. If I need both programs, I'm fucked with the system of auto-patches.

However when I have a choice, how likely am I to install a patch? Not as likely (due to laziness). So the effectiveness decreases significantly.

Re:Fixing vulnerabilities is GOOD! (5, Insightful)

Ra5pu7in (603513) | more than 9 years ago | (#9399213)

The problem with automated patching is that some patches interfere with previously working software. When you manage several hundred computers running specially designed software, and a blasted patch meant to fix a security problem can take them all down when the software is run, you sure as anything will never let the patch process remain automated. I'd rather test it on a few computers before applying it broadly.

Re:Fixing vulnerabilities is GOOD! (2, Insightful)

Anonymous Coward | more than 9 years ago | (#9399102)

As always, this assumes that the only exploits are by script kiddies who can only make use of publicized vulnerabilities. And that is decidedly NOT true!

In fact, script kiddies serve the purpose of forcing vulnerabilities to be patched more quickly: their exploits are generally so badly written that they don't do much damage beyond crippling attacked machines.

In contrast, the true black hats that use exploits to quietly and competently install keyloggers, spam relays and mine creditcard/banking data do more economic damage over longer periods of time.

Re:Fixing vulnerabilities is GOOD! (2, Interesting)

EvilCowzGoMoo (781227) | more than 9 years ago | (#9399223)

In order to fix vulnerabilities, you have to find them. However, as soon as they are found and publicized, some script kiddie exploits them.

I wonder if this model can be reversed. Instead of software companies spending millions to find the vulnerabilities, there is a huge body of free labor out there who will do it for you. This would eliminate script kiddies.

Now when a new exploit comes out, it's a matter of containing it ASAP and plugging the newly found hole. There would be damage, granted, but would it be more than the cost of finding the vulnerabilities?

I've read the paper and disagree (3, Insightful)

u-235-sentinel (594077) | more than 9 years ago | (#9398944)

While we still have a long way to go regarding security, I believe we're still learning how to design security into systems. People are creative. Computers are not. I believe we're infants at this stage of computer development. Look at how far we've come in 30 years. Where will we be after 30 more?

It's still a brand new world to explore. We have a lot of work ahead of us.

Re:I've read the paper and disagree (1, Insightful)

jhunsake (81920) | more than 9 years ago | (#9399221)

What a generic response. This could be posted to any story, just replace "security" with another topic. I can't believe moderators fall for this shit.

Looks like (0, Funny)

Anonymous Coward | more than 9 years ago | (#9398946)

Looks like Microsoft had it right all along! :o)

/me ducks and runs

MOD PARENT UP (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9398990)

stupid moderators, it was supposed to be a joke


Anonymous Coward | more than 9 years ago | (#9399153)

A stupid (unfunny) joke.

Are you retarded? (0, Flamebait)

sampowers (54424) | more than 9 years ago | (#9398949)

If no one fixes the holes, someone's going to find them, and exploit them. That's all there is to it.

Re:Are you retarded? (-1)

Anonymous Coward | more than 9 years ago | (#9398961)

Better the "good guys" than the bad!

Re:Are you retarded? (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#9398966)

Are you a retard, a fucktard, or a slashtard?

This is why we need open source (1)

etcremote (776426) | more than 9 years ago | (#9398953)

I'm not 100 percent with the whole Free Software Foundation ideology, but something about it that really appeals to me is the implication for computer security. If we could just get people to provide the blueprints for their software, I think it would go a long way toward increasing computer security. Having thousands of people across the world looking at code for bugs is a heck of a lot better than just the dozen developers assigned to a particular product. It's already easy to find ways to make programs crash; let's focus on trying to find ways to keep them up by fixing bugs. Rolf

Re:This is why we need open source (0)

Anonymous Coward | more than 9 years ago | (#9399034)

Making the source code available to anyone makes it easier for people to find holes.

This is a proven, incontrovertible fact.

Don't buy it (3, Insightful)

Omega1045 (584264) | more than 9 years ago | (#9398956)

I cannot believe that sticking your head in the sand is any better. I would think that there are many examples of security holes being found and patched before they could be exploited.

If anything, the data seems to point to the fact that software companies and users need to act on security holes and patches more quickly. This may require better education of the user, and it also would help to have better patching mechanisms.

Re:Don't buy it (1)

BillyZ (169879) | more than 9 years ago | (#9399002)

Exactly. If the developers don't find and patch them, you can be certain the script kiddies are going to find and exploit them. "Irresponsible" would be quite the understatement if a software company looked the other way in regards to holes in their software simply because it's not economical.

Re:Don't buy it (2, Informative)

jhunsake (81920) | more than 9 years ago | (#9399075)

You can be certain the script kiddies are going to find and exploit them.

By the very definition of the term, script kiddies do not find holes or exploit them, they simply run the exploit scripts.

Re:Don't buy it (2, Insightful)

markov_chain (202465) | more than 9 years ago | (#9399126)

My interpretation of this claim is that perhaps instead of trying to find and fix holes, we should focus on using more secure tools and frameworks, so that they automatically eliminate a whole class of holes. Look at how much pain was caused industry-wide by using C/C++ with all the buffer overflow vulnerabilities, which are trivially avoided in different languages (e.g. Java).
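The parent's point about eliminating a whole class of holes can be made concrete with a sketch: in a memory-safe language, an out-of-range read fails loudly instead of silently returning whatever sits in adjacent memory. The packet-parsing function below is purely illustrative, not from the paper.

```python
# In C, reading past the end of a buffer can silently return bytes from
# adjacent memory (the root of many classic exploits). In a memory-safe
# language the same mistake raises an error instead of leaking data.

def read_field(packet: bytes, offset: int, length: int) -> bytes:
    """Extract a field from a packet, refusing out-of-range reads."""
    if offset < 0 or length < 0 or offset + length > len(packet):
        raise ValueError("field extends past end of packet")
    return packet[offset:offset + length]

packet = b"HEADERpayload"

# A well-formed read works as expected.
assert read_field(packet, 0, 6) == b"HEADER"

# A malicious length that would overrun the buffer is rejected,
# rather than disclosing whatever follows the packet in memory.
try:
    read_field(packet, 6, 1000)
    overran = True
except ValueError:
    overran = False
assert not overran
```

The design choice is the same one Java and similar runtimes make for you: bounds checks are enforced by the platform, so the entire buffer-overrun bug class cannot be written in the first place.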

High-larious (2, Funny)

Defiler (1693) | more than 9 years ago | (#9398964)

I like sticking my head into the sand, but the grit keeps scratching my sunglasses. Any suggestions?

Re:High-larious (-1)

Anonymous Coward | more than 9 years ago | (#9399069)

Natalie Portman has the answer to your grits problem.

Re:High-larious (2, Funny)

Bombcar (16057) | more than 9 years ago | (#9399108)

I believe that around here, you're supposed to use "Hot Grits."

Maybe one of the olde-tymers can help us here.....

The alternative is... ? (4, Insightful)

GreyyGuy (91753) | more than 9 years ago | (#9398971)

'Cause trusting the manufacturer to make their product secure has shown to be such a good solution in the past.

The alternative is to not look and leave that to the people who will fix it or the people that will exploit it. Are you really comfortable with that?

Re:The alternative is... ? (1)

robochan (706488) | more than 9 years ago | (#9399209)

'Cause trusting the manufacturer to make their product secure has shown to be such a good solution in the past.

Yep, those manufacturers have been on [slashdot.org] the [slashdot.org] ball [microsoft.com] haven't they?

Okay lets leave the holes (1)

Fullmetal Edward (720590) | more than 9 years ago | (#9398977)

How does this sound: we leave all the holes, every PC gets turned into a zombie spammer, and the internet gets slashdotted by the spam.

If you ignore the problem, it gets worse until no one can deal with it. If you dig at it little by little, you may not kill it, but you restrict its growth and make it manageable.

But what about the converse? (4, Interesting)

Hayzeus (596826) | more than 9 years ago | (#9398982)

Let's say we all stopped reporting security holes in software -- would the resulting software actually be any better?

I guess I'm a little unclear on what the research stated is supposed to actually accomplish.

Re:But what about the converse? (1)

bwalling (195998) | more than 9 years ago | (#9399237)

Let's say we all stopped reporting security holes in software -- would the resulting software actually be any better?

No, but there would be fewer machines that were infected with viruses and other crap. The "reporting" only provides script kiddies with a list of ways to be dicks.

It helps admins (5, Insightful)

digidave (259925) | more than 9 years ago | (#9398989)

As a sysadmin, I can tell you for certain that reading bugtraq and other vulnerability lists helps me. I can study trends in software, trends in company response, and protect myself against problems. If I know a new worm or vulnerability has a prerequisite configuration, then I can make sure to configure software in a way where I won't be vulnerable until a patch is released or until I can apply it.

Anyone who is subscribed to bugtraq can see the bad situation some software is in. Lately there were a lot of posts about Linksys that raised my eyebrows. Do I really want to deal with a company that doesn't properly address vulnerabilities it's made aware of? Good thing bugtraq posters had a workaround for the Linksys remote administration problem.

Re:It helps admins (2, Funny)

saderax (718814) | more than 9 years ago | (#9399269)


Thcs m.ssage wrikken fsing tje Dvorat teyboare payouk.

Interesting sig. At first one assumes that the message translates to "This message written using the Dvorak keyboard layout." However, the 'E' correctly used at the end of the word assumed to be 'the' and at the beginning of the word 'keyboard' is also used at the end of that word, supposedly representing the 'D' letter. The period in the middle of the word assumed to be 'message' translates to 'E', yet natural occurrences of the 'E' character appear elsewhere, and the period also appears correctly at the end of a sentence. From this I can draw one of two conclusions:
  1. This message was NOT written using a Dvorak keyboard
  2. (or) The message more appropriately translates to "This messagd writtdn using thd Dvorak kdyboard layoute"

Mercatur's hole (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#9398992)


Her brother and his girlfriend have kicked her out of their house because Mercatur keeps bringing men back to their house to copulate with.

So what if Mercatur believe in free love? So what if she is sexually active and attractive? This woman is so hot, can you blame men for being drawn into her web of seduction?

Mercatur was brought up in Texas but she didn't let that hold her back. She's now queen of the Internet Camgirls, with a body to match her reputation.

Mercatur is the sexiest woman on the Internet. Don't you dare deny it.


The real problem, (5, Insightful)

Cow007 (735705) | more than 9 years ago | (#9398994)

The real problem in software security lies in the design of the software itself. No amount of patches and service packs can secure insecure software. Instead, to be secure it has to be built that way from the ground up. These findings seem to make sense in this context, because patching software doesn't change the fundamental way it works.

Re:The real problem, (5, Insightful)

Analogy Man (601298) | more than 9 years ago | (#9399232)

Mod this parent up!

More important, I think, than fixing vulnerabilities and posting patches that may or may not be adopted by users is good design. To extend the parent's thought: if development teams learn from the flaws in their current and past designs, and use those lessons to identify "good" practice and "bad" practice, it is likely the end product will be better.

If posting a patch is a "hack me! hack me!" alert and there is no means of pushing a patch out to everyone, would there be a way that security patches could be obfuscated with "enhancements" and more anonymously rolled into scheduled releases?

It's an arms race (4, Insightful)

ajs (35943) | more than 9 years ago | (#9398996)

The goal of searching out vulnerabilities is to find them before the people with black-hats do. This is why most clearinghouses of such reports don't release the information until there is a fix (or until such time passes that vendors have demonstrated a lack of interest in producing a fix): the people who would exploit the bugs need to mount their OWN efforts to discover them.

Ignoring actual bugs, there are many other kinds of security vulnerability. We know that software will always have side effects that we don't intend. In fact, we desire this (e.g. providing a user with the flexibility to use the product in ways not envisioned by the creator). Sometimes those side effects have security implications (e.g. giving someone an environment variable to control a program's behavior lets a good user do something you might not have thought of, but it turns out a malicious user can abuse this in order to raise their security status).

This means that, as long as software is not static, security bugs will continue to be introduced. Discovering them as fast as possible is the only correct course of action... you KNOW the black-hats will be.
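The environment-variable risk described above has a standard mitigation: a privileged helper is launched with an explicit allowlisted environment rather than inheriting everything from the caller. A minimal sketch (the allowlist and variable names are illustrative, not a recommendation):

```python
import os

# Variables the privileged child actually needs; everything else
# (LD_PRELOAD, PATH tricks, and so on) is dropped rather than inherited.
ALLOWED = {"HOME", "LANG", "TZ"}

def sanitized_env(env=None):
    """Return a copy of the environment containing only allowlisted keys."""
    source = os.environ if env is None else env
    return {k: v for k, v in source.items() if k in ALLOWED}

# An attacker-controlled LD_PRELOAD never reaches the child process:
tainted = {"HOME": "/home/user", "LD_PRELOAD": "/tmp/evil.so"}
clean = sanitized_env(tainted)
assert clean == {"HOME": "/home/user"}
# subprocess.run(["/usr/local/bin/helper"], env=clean)  # child sees only 'clean'
```

The point matches the comment: the flexibility (configurable behavior via environment) is preserved for legitimate callers, while the side effect a malicious user would abuse is cut off at the trust boundary.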

Quick answer (0)

Anonymous Coward | more than 9 years ago | (#9399000)

Is finding security holes a good idea? Only if you want to fix them.

Uphill battles (0, Offtopic)

pudge (3605) | more than 9 years ago | (#9399004)

Should we jail murderers, since it doesn't seem to prevent murders, or curb the murder rate? Whatever.

Anyway, he is looking at the problem on too wide a scale. Slash (the code running this site) is much less vulnerable to various exploits than many of the alternatives that have cropped up, and yes, it has been a huge benefit to the people who run and use this site, undoubtedly.

uh yea. right.. (-1, Redundant)

Anonymous Coward | more than 9 years ago | (#9399005)

Finding vulnerabilities and fixing them properly *is* a Good Thing, especially when done by white hats. If not, then yes, for a brief moment a limited population has the power to cause havoc, and that havoc is certainly exponentially bigger than their total number of members. Hence it's just plain bad not to be looking for vulnerabilities; that's like saying the military shouldn't be looking for vulnerabilities in strategies or equipment. If you don't, you're screwed.

New Study (1, Funny)

Anonymous Coward | more than 9 years ago | (#9399007)

In other news, a new study shows mowing the lawn doesn't stop the grass from growing. Scientists are perplexed at this unusual discovery.

It helps (3, Insightful)

insanely_mad (636449) | more than 9 years ago | (#9399008)

One of the reasons I now use Firefox as my primary browser is because so many exploits were found in IE. So even if Microsoft doesn't respond when exploits are found, these exploits do cause some people to look for more secure alternatives.

Re:It helps (1)

Elecore (784561) | more than 9 years ago | (#9399117)

But people dedicate a lot of time to finding these exploits. It's similar to the Windows/Linux argument. Windows has a lot more security holes, but it also has A LOT more users to find these holes. If 90% of internet users suddenly started using Firefox, I bet holes would be found in it as well.

Yes. (5, Insightful)

nacturation (646836) | more than 9 years ago | (#9399013)

To answer the question, yes. Finding security holes is a good idea.

To the unasked question, "Is finding individual security holes the best possible use of a security researcher's time?", the answer is No. The best use of security research is to classify different types of security holes and use that information to create a development framework where those security holes are extremely difficult to recreate. For example, you're not going to find buffer overruns in Java code, since the memory is dynamically handled for you. Eventually, having security levels, encrypted buffers, etc. will all be part of a standard developer's library.

Trust but Verify... (0)

Dark Coder (66759) | more than 9 years ago | (#9399015)

Seems like the future is no longer "Security Through Obscurity" but more like "Trust but Verify."

I trust you ;-)

we need no bugs from the start (1)

Big Torque (196609) | more than 9 years ago | (#9399019)

What is needed is for security to be part of the design from the beginning. We can remove many or all of the overflow problems just by changing the language used to write the software. No password should be saved or sent unencrypted (with good, hard-to-break encryption, for example). This needs to be designed into the program or OS, not added later. By dealing with these kinds of problems after the fact, we are chasing the bugs and pushing the bad design around the code, instead of just having a good design from the start. Secure by design, not as an afterthought.
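The "no password saved unencrypted" rule usually means storing a salted, slow one-way hash rather than the password itself. A minimal sketch using Python's standard library (the iteration count and parameters are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, rounds: int = 100_000):
    """Derive a salted PBKDF2 hash; store (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    rounds: int = 100_000) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)
```

Because only (salt, digest) is stored, a leaked database does not directly reveal passwords, and the per-password salt plus slow derivation blunts precomputed-table attacks. This is the "designed in from the start" property the comment argues for.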

knowledge is power, (0)

Anonymous Coward | more than 9 years ago | (#9399020)

but ignorance is bliss.

take your pick

Control for the experiment (2, Interesting)

thechuckbenz (526254) | more than 9 years ago | (#9399022)

A proper experiment would be an application where the developers made no attempt to find security problems. Any volunteers? Anyone want to install such an application? (Never mind all the joking about MSoft having already volunteered, and how widely it's installed.)

I would mark this one as a troll... (5, Insightful)

Rahga (13479) | more than 9 years ago | (#9399043)

Evidence alone wouldn't show us that searching for security holes improves security... Rather, such a judgement requires reasoning and evaluation of the evidence. Common sense stuff, here.

Does smashing cars head-on into brick walls improve car safety? No, of course not. Evaluating the results of the crash, and using those findings to build better cars, is what improves car safety, and the situation is entirely analogous in the security world. The assumption is that there is always a weakest link in security, that link is the most likely one to be exploited, and the trick is finding that link and strengthening it against attacks in the future, hopefully to the point where other links become the weaker ones.

Good point (1)

UID1000000 (768677) | more than 9 years ago | (#9399049)

Another good point, IMHO, is that patching holes allows hackers to reverse engineer the patch quickly. It's easier to reverse engineer than to develop.

Software companies should work to patch less. In particular Microsoft should save patches for Service Packs. This would keep worms and viruses out longer.

That's my IMHO. What do you think?

testing and finding vs cost (1)

Da_Slayer (37022) | more than 9 years ago | (#9399051)

The ideas of White Hat and Black Hat finding versus the impact and cost are valid points.

I look at it this way. It may cost to fix a security problem, whether in time, money, or both. However, fixing a problem that is found is far better than just letting it dangle. So what if only 1 in 5.67 million can find the flaw? It was found once and it will be found again. Err on the side of caution in this case. I prefer to have all my bases covered with current known problems versus losing data/time/money on something that I could have fixed.

In terms of more security bugs being found and the problems being fixed: the internet and this technology are still growing. Fixing and securing software/hardware is one of the growing pains for our industry. It is better to fix what is broken than to stick your head in the sand and hope no problems arise.

Besides where is the fun in not having to change things up once in a while? =P

improving security... (2, Insightful)

m.h.2 (617891) | more than 9 years ago | (#9399053)

"but there's no real evidence that it actually improves security"

OK, didn't RTFA, but is there 'real evidence' to the contrary?
You can't fix what you don't know is broken. Is ignorance a better security solution?

This is like saying... (2, Informative)

Vexler (127353) | more than 9 years ago | (#9399057)

...that hunting down thugs and thieves and terrorists is not necessarily helping the nation's security, so let's not do it. Asinine suggestion.

Re:This is like saying... (4, Insightful)

AviLazar (741826) | more than 9 years ago | (#9399143)

I'm thinking it's more along the lines of: if we do not help find security holes, then we are giving less ammunition to hackers. The only problem with the hypothesis is that it assumes hackers only gain this "ammunition" through legitimate coders who are trying to find vulnerabilities. In fact, as all of us know, hackers do find security holes on their own, without help from other people.

It is very important (1)

MrRuslan (767128) | more than 9 years ago | (#9399061)

to find problems in any line of products. Quality and reliability are important. Would you buy an unreliable car that would kill everyone in an accident? You may never be in an accident, but would you risk it when it comes to a car? How about a lock for your front door?

If we don't look for security holes... (2, Insightful)

AviLazar (741826) | more than 9 years ago | (#9399068)

then someone who wants to do the real hacking will find them. If a malicious hacker finds the security hole, then he/she might utilize it, and they won't be nice enough to give us a patch to protect against it. So since the holes are there, let's find them and patch them BEFORE some malicious programmer does. Finding security holes is a good choice, making patches for security holes is a better choice, and actually UTILIZING these patches is the BEST choice... unless you want to be on Citibank identity theft commercials :)

people are the problem (1)

unknown_goth (773919) | more than 9 years ago | (#9399094)

Software is like art, and the only flaws in it are the ones we place there. Software will never be 100% secure; face it, that's how the world works. Even if the whole open-source community worked on a single piece of software (wouldn't that be interesting), there would be a few who would work against it. Here's a solution to the problem: have people start making software that no one wants to find holes in. Just release it and everyone gets along.

Its a good idea (2)

Linus Sixpack (709619) | more than 9 years ago | (#9399095)

The people trying to abuse security likely have the same sort of skills as the flaw hunters, though hopefully less skill. It's not just that some bugs are found, but that perhaps the most obvious ones are found.

Embarrassment encourages vigilance: software firms are always looking to reduce costs (who isn't?), and outside bug hunters encourage them to test more completely.

I think really bad software abuse usually has a motive connected with bad treatment, or a reputation for bad treatment. Even if there is only a small lag between the discovery and fixing of a hole, disclosure doesn't let the problem lie around where people who develop a grudge can use it.

Finally (and most importantly), fairness dictates that if I'm using a product you know a problem with, you should tell me about it. Consumers deserve the chance to disable systems, switch products, etc., if they feel vulnerable.

Especially if software is closed source, how do you know this bug isn't the tip of the iceberg? Companies have convinced consumers they don't get to look inside software, but they can't stop others from hearing about its flaws.


Finding the holes is only half the battle (4, Interesting)

Ed Avis (5917) | more than 9 years ago | (#9399097)

If you find a security hole, then the mistake has already been made. Fix the hole, but also make sure the same bug doesn't exist in any other program. Finding the same exploits again and again (buffer overruns, format string vulnerabilities, tempfile symlink vulnerabilities) reflects very badly on free software and on programmers' ability to learn from bugs found in other software. (Not that proprietary software is necessarily any better; I am just not discussing it here.)

The OpenBSD people have built up a good track record on security by finding holes and fixing them everywhere possible. I am sure they would disagree with your assertion that finding holes does not help to improve security. Finding the bugs is an important first step towards not putting them back in next time you write code.
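Of the recurring bug classes listed above, the tempfile symlink race has a well-known standard fix: create the scratch file atomically with an unpredictable name, instead of opening a predictable /tmp path that an attacker could pre-create as a symlink. A sketch (the filename prefix is invented for illustration):

```python
import os
import tempfile

# Vulnerable pattern: open("/tmp/myapp.log", "w"). An attacker who
# pre-creates /tmp/myapp.log as a symlink makes the program overwrite
# whatever the link points at, with the program's privileges.

# Safer pattern: mkstemp() creates an unpredictable name and opens it
# with exclusive-create semantics, so a pre-placed file or symlink
# causes a failure instead of being silently followed.
fd, path = tempfile.mkstemp(prefix="myapp-")
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)

assert os.path.basename(path).startswith("myapp-")
os.unlink(path)  # clean up the scratch file when done
```

This is exactly the "learn the class, fix it everywhere" step the comment calls for: once the pattern is known, every program that writes to /tmp can be audited against it mechanically.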

OpenBSD (0)

Anonymous Coward | more than 9 years ago | (#9399099)

The OpenBSD folks have known that for ages.

Finding holes is good (1)

bruns (75399) | more than 9 years ago | (#9399100)

The idea is to find the security holes before the bad guys do, so you can fix the problem before it's in the wild exploiting people without your knowledge.

Every program has bugs. There is no way around it. What makes the difference though is how you respond to bugs when they are found.

You have a choice. Either be like Microsoft: try to deny that the bugs exist, downplay them, and try to stifle the person who found them. Or be like a real programmer: fix the bug, get people to apply the fix, and go on with life.

"Finding flaws" is not the problem (2, Interesting)

KarmaOverDogma (681451) | more than 9 years ago | (#9399101)

Working to discover what security flaws exist in any given program, series of programs, operating system, hardware, etc. is not the real issue in my opinion: it is the idea of working to design a system that is as stable, flexible/adaptable, transparent, and clear as possible, while at the same time providing a foundation that allows room for future growth. To really execute all of these concepts well can be a truly daunting task, IMO, given the often limited salaries/wages, time, and other constraints (e.g. management) that programmers in particular have to face. This is just one of the reasons programs/kernels/systems, etc. go through so many revisions.

I know the article doesn't imply this at all, but the solution to security and stability problems does not lie in simply sticking our collective heads in the sand. We have to answer the who/what/when/where/why elements of design. Building a better mousetrap involves many elements, as I alluded to above.

fundamental misconception perhaps (2, Interesting)

beejhuff (186291) | more than 9 years ago | (#9399105)

Is it just me, or does it seem a stretch that within the first couple of paragraphs the assumption is made that there is somehow a direct relation between the number of intrusions and the cost of those intrusions:

"If a vulnerability is found by good guys and a fix is made available, then the number of intrusions--and hence the cost of intrusions--resulting from that vulnerability is less than if it were discovered by bad guys"

While I'm not certain that there is NO relationship between the two, I'm certainly NOT comfortable positing such a blanket assessment.

Perhaps there is a relationship between the net economic cost and the number of intrusions, but it seems equally likely that through full disclosure the marginal cost of each intrusion could be reduced; a possibility seemingly left lightly treated, at best, in this essay.

Think about this (0)

Anonymous Coward | more than 9 years ago | (#9399122)

Nowadays so many people are still ignorant about software security... If we were not searching for (and finding) security holes, the situation would be even worse! The bad guys would be even more dangerous with their very well organized information channels... Think about this.

We need whitehats because there are blackhats (1)

bollow (a) NoLockIn (785367) | more than 9 years ago | (#9399132)

In an ideal world, where no-one would think about doing anything nasty, there'd be no need for anyone to study security, look for problems, write proof-of-concept exploits, etc.

However, the real world is not like that. There are nasty people (individuals as well as organized criminal groups). We can't stop them from studying security and, as long as there are serious security problems, these people will find some of them and use them to commit whatever evil deeds they want (like turning the PCs of innocent people into spam-spewing zombies, credit card fraud, etc.).

The only way to effectively counteract this is to bring problems into the light. Without security research by "whitehats" (people who look for vulnerabilities but don't use them without prior authorization from whoever is in charge of the vulnerable computer system), only the "bad guys" (blackhats) would have any in-depth knowledge of security issues. There'd be no hope of making systems secure if only the bad guys have the knowledge that matters!

Missing a big part of the conclusion (4, Insightful)

jwthompson2 (749521) | more than 9 years ago | (#9399134)

Many posters have already jumped to bad conclusions, having not latched on to one of the report author's best points: if patches are not applied, then the time and money spent on discovery are worthless. The only way to make discovery worthwhile is for the patches to actually be applied; otherwise discovery does not resolve the vulnerabilities.

Automatic/forced patching is the only way to make discovery worthwhile, because otherwise the number of vulnerable systems is unpredictable over time and constitutes a large risk. Security issues must be resolved as quickly as possible in order to mitigate risks, and unless patch application is automated and enforced, discovery becomes meaningless.

Gimme a dollar (1, Insightful)

Anonymous Coward | more than 9 years ago | (#9399137)

Security holes are bound to be discovered eventually, either by unscrupulous users or professionals with honest intentions.

By hunting for flaws in software and making them public, these flaws can be fixed... Not making a vulnerability public doesn't help anything. It just gives Joe hacker his own personal backdoor that he's free to use indefinitely.


My Take... (3, Interesting)

abscondment (672321) | more than 9 years ago | (#9399140)

The value of finding security holes is totally dependent upon the company whose product has the hole.

I work at Barnes & Noble as a bookseller; within a few months of working there, I found huge security holes in their in-store search system and network. I reported these to my manager; as soon as she grasped the scope of the hole, she called tech support in New York. I spent 30 minutes explaining all the different problems to them, and that was that.

I didn't think they'd do anything about it, and that's the problem--since it costs time and money, most companies can't or won't fix holes. To my surprise, however, my store began beta testing the next version of the in-store software. What was even more surprising was that every hole I had reported was gone (so I went and found more, of course).

There's never a silver bullet; a new vulnerability will always surface. It's really hard to stay ahead of the curve, but it's something that must be attempted.

Not Asking the Right Question (1)

cynic10508 (785816) | more than 9 years ago | (#9399149)

I don't think you can argue effectively that it's not economically viable to search for vulnerabilities. Each exploit prevented saves money on rolling out patches, legal liability (earned or otherwise), and reputation. Just because you can't measure these savings without the exploits actually happening doesn't change the fact that you will save money in the long run by preventing vulnerabilities from being released. It seems that by "economical" what is really meant is finding the most vulnerabilities for the least investment. So the proper question becomes: what is the most economical method for finding vulnerabilities?

change of perspective (1)

prgrmr (568806) | more than 9 years ago | (#9399156)

Security isn't an add-on, it's an integral part of every component of every system--whether functional or flawed--and needs to be a design consideration ranking right up there with the user interface.

One of the obvious benefits of posting security holes is that it gives developers the insight and the opportunity not to duplicate that same security flaw in another system. How consistently, or not, we are learning these lessons is a different issue.

You can't test quality into a system (1)

rlglende (70123) | more than 9 years ago | (#9399157)

because it costs too much.

You must design in quality from the beginning, and develop systems with a process that guarantees the quality is not lost.

This is known to be true for everything from trivial widgets through the most complex systems that humans are capable of designing.

I suppose it is nice to have it confirmed for security flaws, but it isn't surprising.


Security guy? (4, Funny)

ajs (35943) | more than 9 years ago | (#9399167)

I'm confused about this guy. He claims to be a security consultant, but to quote his blog [rtfm.com] ,
"I replied to the mail and didn't check the recipients lines and my mailer helpfully sent a copy of my credit card # to everyone who had gotten the original message. Outstanding."

Really. I didn't make that up, check the link! Who is this guy, and why is he giving me software security advice?!

If the "good guys" don't find them ... (2, Insightful)

Mr.Surly (253217) | more than 9 years ago | (#9399169)

... then someone else will. It's hard to say if finding and fixing is helping, because no one knows how bad it would be if we didn't do it.

Then again, MS doesn't seem to be trying to find vulnerabilities in their own code; often they're found by others. Sometimes it's the bad guys.

Point being, it's hard to say what effect something is having when you can't contrast it against "not doing it."

Sure it might help (1)

Aliencow (653119) | more than 9 years ago | (#9399179)

It might help against worms or script kiddies, assuming I'm keeping up with patches.

Worms are annoying; they might force you to restore backups and rebuild machines, but a real person targeting you or your enterprise in particular can be much worse.

Finding Security Holes... (0)

Anonymous Coward | more than 9 years ago | (#9399199)

Obviously it doesn't help with preventing existing flaws, but if a software company or developer is any good, they'll use the information to build better software in the future, see where most of the holes originate, and maybe spend some extra time testing. It will at least fix large holes that can be fixed, and may alert users to stay away from products that consistently have security issues.

Security is almost impossible to build into an application after the fact, especially if security wasn't even considered during the development of a product. You can't learn from past mistakes, though, if they are not documented and studied.

Woah... pretty pictures, but bad research (4, Insightful)

GreyyGuy (91753) | more than 9 years ago | (#9399204)

Just read the article, and I have to point out a number of flaws in the methodology. First, it assumes that if the vulnerability is only known to a few, then the number of intrusions will be low. Given the number of zombie computers out there, I do not think that is a safe assumption. Look at how the last few big viruses went around. I know those were exploiting known and patched vulnerabilities, but there is nothing to say that the same thing couldn't be done with a zero-day exploit.

Second, it doesn't address the severity of the vulnerability. If it is an exploit that lets someone take over a machine or format a drive, the cost of even those first, possibly limited, intrusions will be astronomical.

It's important... (1)

1000101b (513049) | more than 9 years ago | (#9399217)

Is it important to find security holes? Do you care if a hacker exploits a vulnerability to make their vote count more than yours? Do you like your kids to play in the same sandbox that cats defecate in?

Unless the software is to be used only by you or is in a protected 'sandbox', it needs to be as secure as possible.

Idiotic question (0)

Anonymous Coward | more than 9 years ago | (#9399234)

What kind of fool asks such a question?

Until people learn how to write completely secure software, we will have to fix the holes.
Not that that's very likely to happen anytime soon.

Sounds like one of those nutty ideas coming from psychs - did I hit you or did you run into my closed fist?

True, it's relative, but you're going to have to be REALLY stupid not to get it. Same thing with this question. Whoever even let it get posted is equally dumb. What a waste of time!

Is finding Security holes a good idea? (1)

festers (106163) | more than 9 years ago | (#9399248)

No, it's a pointless waste of time. We'll never catch them all, so why bother trying? I think Homer sums it up best: "You tried your best and you failed miserably. The lesson is: never try."

development process (2)

happyfrogcow (708359) | more than 9 years ago | (#9399250)

Morally, finding security holes has to be done. It doesn't matter what the perceived quality improvement is.

But instead of trying to plug the holes, it's better to understand why the holes pop up and what we can do to alter the behavior that leads to holes.

[insert plug for your favorite high level language here]

But even better development tools only get you so far. The burden has to be laid squarely on the shoulders of the project leader and their managerial counterparts. You cannot continue to do the business side "favors" by including some technically unnecessary component after the specs and requirements are done, and expect it to get integrated seamlessly and without affecting everything else. Once you say "yes" to something, it will be harder to say "no" the next time. Maybe you need to understand the business problem you are trying to solve better before you finish the design. Maybe the business folks need to better understand the development process, so they know they can't add features late in the game.

This is just my "2 years in the system" view of things, after time and again getting an email saying "so-and-so wants such-and-such done to this thing" after the thing's design has already been settled upon. When I ask why, it always comes down to someone not being able (yay, office politics) to say no to someone for some reason or another.

Want fewer security holes? Start with better communication between different groups. End with a written-in-stone spec. Leave out all feature creep until the next design phase.

good luck with that! ha!

The assumptions... (2, Interesting)

sterno (16320) | more than 9 years ago | (#9399252)

The problem I can see in this paper is that it makes certain assumptions about the behaviour of black hat hackers which aren't necessarily true. The majority of vulnerabilities discovered by black hat hackers are eventually leaked from the hacker community to a white hat, who will seek a solution to the problem. But there's no reason to conclude that this is true of all vulnerabilities.

I forget the terminology for it, but there's the concept of worms that are launched on a large number of vulnerable machines simultaneously. I'm not aware of an attack like this in the wild but it's theoretically possible and would be terribly destructive. If a black hat hacker plays his cards right, he can probably sneak his exploit onto a few thousand computers without anybody noticing. Then he can launch a massive attack before anybody even knows the vulnerability exists.

Having said that, I think that, in the real world, the amount of effort put into finding vulnerabilities by white hats has a minimal cost. There are essentially three areas where security vulnerabilities are discovered by the friendlies:

1) QA of the product developers
2) Hobbyist white hats
3) Network security auditing

The cost of #1 is an assumed cost of any product and is part of the basics of releasing it to the public. You check for typos in the documentation and you check for security bugs.

The cost of #2 is zero because it's people doing these things on their own time for their own amusement.

The cost of #3 is substantial but it's critically important to some businesses to have zero security vulnerabilities. A security breach not only has an immediate cost in time to fix the problem, but it also has a long term cost by damaging the integrity of the company. If your bank got hacked and you lost all your savings, even if it was insured, would you keep your money in that bank?

From the same folks that brought you . . . (0)

Anonymous Coward | more than 9 years ago | (#9399253)

"Confronting terrorist-supporting countries only serves as a more valuable recruiting tool for organizations such as al Qaeda."

The premise is flawed, so the logic is irrelevant (2, Insightful)

Zelig (73519) | more than 9 years ago | (#9399256)

"If finding security defects is a useful security activity, then it should have some measurable effect on the software security defect rate."

This assertion, and the vapor about "depleting the store of vulnerabilities," pretends that there is no new code being written. Packages under development should display some unknown rate of new-vulnerability introduction.

In the long term, one might hope that the vulnerability finding would feed back into software engineering, and eventually decrease the rate of introduction, but we're clearly not there today, and I'm not holding my breath for tomorrow.

So we've got 18 pages of math measuring an irrelevancy.

Yes, it's a good idea (4, Interesting)

Sloppy (14984) | more than 9 years ago | (#9399261)

Of course it helps! But perhaps not the way you might expect it to.

Someone finds a buffer overflow problem. Someone finds another one. Someone finds another one. Someone finds another one.

Someone realizes: "What if I centralized all my buffer routines and just got one little section of code working perfectly?" Then you get projects like qmail or vsftpd, which simply are more secure. Then people start using these programs, and their systems really are less vulnerable.
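The idea the parent describes can be sketched in C. This is a hypothetical minimal example (the `buf_t` and `buf_append` names are made up for illustration, not qmail's actual stralloc API): every append goes through one small, auditable routine that checks bounds, instead of scattering raw `strcpy`/`strcat` calls across the code base.

```c
#include <stddef.h>
#include <string.h>

/* One centralized, bounds-checked buffer type. If this routine is
 * correct, no caller can overflow the buffer through it. */
typedef struct {
    char   data[256];   /* fixed capacity for this sketch */
    size_t len;         /* bytes currently used */
} buf_t;

/* Append n bytes from src; returns 0 on success, -1 if the data
 * would not fit. The check is written to avoid integer overflow. */
int buf_append(buf_t *b, const char *src, size_t n) {
    if (n > sizeof(b->data) - b->len)
        return -1;      /* refuse instead of overflowing */
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}
```

Callers then handle the -1 case explicitly; the overflow check lives in exactly one place that can be reviewed exhaustively.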

This paper keeps using the phrase "given piece of software." It's talking about (lack of) improvement at the micro scale, but ignores improvements in the big picture that can happen due to people getting fed up or scared.

If vulnerabilities were not discovered, would anyone bother to develop secure software?

I think this paper has, as an underlying assumption, the popular view that it just isn't possible to write secure software, and that every large software system is just a big leaky sieve requiring perpetual patching. I don't accept that. I understand why the belief is widely held: most popular software was not originally written with security issues in mind. But this is something that will be overcome with time, as legacies die off.

think of a cracking dam (3, Interesting)

BigBir3d (454486) | more than 9 years ago | (#9399268)

You can plug all the individual holes you want; it is still a badly designed dam.

If it is designed differently, the number of cracks is smaller...

I wish reporters understood that. Flame MS for not bringing Longhorn out sooner. XP is not good enough. Everyone knows this, but nobody in the popular press is saying it in the right way.

*sigh* /me goes back to condemning the world to doom
