
Should Vendors Close All Security Holes?

ScuttleMonkey posted more than 7 years ago | from the limited-resources dept.

Security 242

johnmeister writes to tell us that InfoWorld's Roger Grimes is finding it hard to completely discount a reader's argument for patching low- and medium-risk security bugs only once they are publicly disclosed. "The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument. 'Our company spends significantly to root out security issues,' says the reader. 'We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem.'"


242 comments


i didn't rtfa (4, Insightful)

flynt (248848) | more than 7 years ago | (#19119015)

I did not RTFA, but I did read the summary. I did not hear his argument, I heard his conclusion repeated with more words.

Re:i didn't rtfa (2, Funny)

tritonman (998572) | more than 7 years ago | (#19119115)

no, they need to keep holes open for the government to spy on us.

Re:i didn't rtfa (1)

grencez (1101653) | more than 7 years ago | (#19119611)

Eh... for the most part it's not security holes that give "Big Brother" access to information. He gets it while it's traveling through the net. On topic: Probably a decent idea. Might slow down the search for other, unknown bugs.

Re:i didn't rtfa (2, Interesting)

WrongSizeGlass (838941) | more than 7 years ago | (#19119163)

I guess this guy only locks each door or window in his house and car after someone has discovered that it's unlocked? I sure hope his kids live with their mom.

Re:i didn't rtfa (2, Insightful)

Dan Ost (415913) | more than 7 years ago | (#19119797)

I guess this guy only locks each door or window in his house and car after someone has discovered that it's unlocked? I sure hope his kids live with their mom.
The problem is that if they release a patch, they draw attention to the code that had the flaw, resulting in more hacker scrutiny than if they had quietly sat on the patch until the next release.

If they could release security patches invisibly, they probably would. Unfortunately, there's no way to do that.

The summary missed those parts. (4, Insightful)

khasim (1285) | more than 7 years ago | (#19119213)

Basically ...

#1. If we spend time fixing those bugs, we won't have as much time to fix the important bugs.

Translation: we put in so many bugs that we don't have time to fix them all.

#2. We give priority to any bugs that someone's leaked to the press.

Translation: we only fix it if you force us to.

#3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

I had to post that verbatim. They're releasing new bugs in their patches.

#4. "Fourth, every disclosed bug increases the pace of battle against the hackers."

Yeah, that one too. The more bugs they fix, the faster the .... what the fuck?

#5. If we don't announce it, they won't know about it.

Great. So your customers are at risk, but don't know it.

Re:The summary missed those parts. (2)

Evil Adrian (253301) | more than 7 years ago | (#19119541)

No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks. Why do you have a problem with assigning priorities to issues that need fixing?

Are you incapable of thinking reasonably, or do you just like pointing fingers?

Re:The summary missed those parts. (5, Insightful)

Lord_Slepnir (585350) | more than 7 years ago | (#19119693)

I just don't think that the GP has ever worked on a large piece of software, or has worked in a business environment. Linux has some of the best minds in the world working on it, and it still has holes. Vista could have used a few more months being polished, but I can only imagine the threats of "Release now or else" coming from the headquarters.

You missed the point about "patches"? (1)

khasim (1285) | more than 7 years ago | (#19119725)

No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks.

No one is arguing that.

The discussion is about whether the attempt should be made to address ALL of those ... or not.

Why do you have a problem with assigning priorities to issues that need fixing?

Where did I say that?

They SHOULD be prioritized. No sense in trying to patch a local user, non-exploitable crash bug when you have a remote root vulnerability (with exploit).

But the system is only as secure as all of its components. While you may not believe that your vulnerability is important enough to patch, if enough people take that same approach, the vulnerabilities can be linked and your box will be cracked just as if you had left a remote root vulnerability unpatched.

That approach means that YOU are relying upon EVERYONE ELSE to cover your code by ensuring that THEIR code does not have any exploits, while you don't bother to patch yours.

Re:The summary missed those parts. (2, Informative)

644bd346996 (1012333) | more than 7 years ago | (#19119759)

The whole point of the article is that the company in question refrains from releasing a patch, even when they have a fix ready. This is not prioritization.

Re:The summary missed those parts. (3, Insightful)

Chris Burke (6130) | more than 7 years ago | (#19119851)

No matter how good the QA testing is on a piece of software before it's released, it invariably has bugs and security risks.

Trivial and meaningless statement. There is good code and bad code. Good code is code with fewer bugs. Bad code is code with many bugs. A good developer is one who designs the code to avoid bugs, and who, more importantly, fixes the bugs they find. A bad developer uses the above truism as an excuse to avoid fixing their shitty code.

Why do you have a problem with assigning priorities to issues that need fixing?

When one of those priorities is "don't fix until our customers find out, and try to keep them from finding out" then I have a problem with it.

The only thing that should distinguish a high priority bug from a low priority bug is: Do we fix it, then release the patch as an urgent hotfix? Or do we fix it, then release the patch as part of a periodic security update so that we have more time to test and so sysadmins aren't overwhelmed having to apply and test patches all the time? There is no priority that should read "Do not fix, unless we get bad P.R. for it."

The only developer who would do such a thing is a bad developer who is okay with leaving their customers exposed. Of course the reason they got into that situation, of having so many security issues that they can't afford to fix them all, is due to them being bad developers.

Are you incapable of thinking reasonably, or do you just like pointing fingers?

You need to drag your brain out of its pie-in-the-sky abstract concepts like "do you have a problem with priorities" and start actually thinking about the situation before you start saying things like this.

Re:The summary missed those parts. (5, Informative)

Lord_Slepnir (585350) | more than 7 years ago | (#19119569)

#3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

I had to post that verbatim. They're releasing new bugs in their patches.

Partially true. By doing a bindiff between the old binaries and new binaries, they can see things like "Interesting, they're now using strncmp instead of strcmp. Let's see what happens when I pass in a non-null terminated buffer..." or "they're now checking to make sure that parameter isn't null" or whatever.

The defects were there before, but the patches give hackers a pointer that basically says "Look here, this was a security hole." Since end-users are really bad about patching their systems in a sane time frame, this gives hackers a window of opportunity to use the exploit before everyone patches up.
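
For illustration, here is a hypothetical C sketch (not from TFA; the function names, the token check, and the 8-byte length are all invented) of the kind of one-line strcmp-to-strncmp change that a bindiff makes obvious, and that points an attacker straight at the buffer that used to be handled unsafely:

/* Hypothetical example: the sort of fix a binary diff exposes. */
#include <stdio.h>
#include <string.h>

#define TOKEN_LEN 8

/* Pre-patch: strcmp keeps reading until it hits a NUL byte, so an
 * unterminated 'input' buffer is read past its end. */
static int check_token_old(const char *input, const char *secret)
{
    return strcmp(input, secret) == 0;
}

/* Post-patch: the comparison is bounded to TOKEN_LEN bytes. Diffing the
 * two binaries shows strcmp swapped for strncmp, which tells anyone
 * looking exactly which buffer used to be unbounded. */
static int check_token_new(const char *input, const char *secret)
{
    return strncmp(input, secret, TOKEN_LEN) == 0;
}

int main(void)
{
    char unterminated[TOKEN_LEN] = {'s','e','s','a','m','e','0','1'}; /* no NUL */
    const char *secret = "sesame01";

    (void)check_token_old; /* would over-read if called with 'unterminated' */
    printf("bounded check matches: %d\n",
           check_token_new(unterminated, secret));
    return 0;
}

Nothing here is specific to the company in the article; it's just the pattern the parent describes: the patch itself documents the hole for anyone who diffs it.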

Re:The summary missed those parts. (2, Insightful)

CastrTroy (595695) | more than 7 years ago | (#19119957)

However, if I were a systems admin, I'd much rather have the option to keep my systems secure by having the updates available than have the company sit around on fixes because they think the hackers don't already know what the bugs are. The most valuable tools a hacker has are those that nobody knows about. If people are aware of a bug, then it is more likely that the hole won't be exploitable. If nobody knows about the bug, you can catch a lot of people off guard and break into a lot more systems.

Re:The summary missed those parts. (3, Insightful)

markov_chain (202465) | more than 7 years ago | (#19119617)

#3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

I had to post that verbatim. They're releasing new bugs in their patches.


No, they are fixing old bugs. Old but unknown bugs, which now become known to hackers, who can go and abuse the vulnerabilities wherever they didn't get patched yet. It's pretty old news, really.
 

Re:The summary missed those parts. (3, Informative)

Jimmy King (828214) | more than 7 years ago | (#19119661)

#3. "Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software."

I had to post that verbatim. They're releasing new bugs in their patches.

That's not how I read the response, not that how I read it is better.

What I got from reading the entire paragraph about that was that they patch the exact issue found, but do a terrible job of making sure that the same or similar bugs are not in other related parts of the code. Hackers then see the bug fix report and go looking for similar exploits abusing the same bug in other parts of the program. These new exploits would not be found if they hadn't fixed and published the first one.

This is not any better than causing new security issues with their security patches, but let's at least bash them for the right thing.

Yeah, I can see that. (1)

khasim (1285) | more than 7 years ago | (#19119757)

I'll go with your reading. Thanks!

Re:The summary missed those parts. (1)

644bd346996 (1012333) | more than 7 years ago | (#19119819)

This is what I thought as well. After all, this is exactly what happened with the .ANI bug. It seems pretty obvious that the company in question does a really bad job of auditing code in response to finding a new class of bugs.

Re:The summary missed those parts. (3, Interesting)

SatanicPuppy (611928) | more than 7 years ago | (#19119787)

The only argument that makes any sense to me is, "Every time we force our customers to patch systems, we run the risk of creating incompatibilities and getting slews of angry phone calls, and that'll screw up our week" and they didn't even include that one.

Ideally the stuff should be reasonably secure out of the gate; sure, they're talking about all the reasons they have for not patching after the fact, and all this stuff is true... Patching is a huge pain in the ass for everyone involved. But dammit, the amount of patching that has to get done is inexcusable!

The thing that burns me is, you know that the developers don't incorporate those "tentative" fixes into the next product release either, not until the bugs make it public. You know that there is middle management who is perfectly aware of significant poor design decisions that could be solved by a well-planned rewrite, who instead tell their people to patch and baby weak code, because the cost of doing it right would impact their deliverables.

Re:The summary missed those parts. (1)

cheater512 (783349) | more than 7 years ago | (#19120053)

He doesn't work for Microsoft by any chance, does he?

Re:The summary missed those parts. (1)

edizzles (1029108) | more than 7 years ago | (#19120271)

You missed number 6: if our program is perfect, we won't make any new money from upgrades and Vista... I mean, our new products.

The Biggest U.S. Security Hole: +1, Informative (0)

Anonymous Coward | more than 7 years ago | (#19119683)


Is President-Vice Richard B. Cheney's Spider-Hole [whitehouse.org] .

I hope this helps the war criminal trial in The Hague.

Yours PatRIOTically,
Kilgore Trout

argument (summary) (0)

Anonymous Coward | more than 7 years ago | (#19119721)

1. Our programmers are trained to program perfectly
2. We spend our time fixing critical bugs, fixing medium and low security bugs slows down fixing the high priority bugs
3. Next priority (after critical bugs) is fixing publicly disclosed bugs
4. Hackers find bugs by examining patches, so when we patch bugs that are not publicly disclosed, hackers find out about them; then they become publicly disclosed bugs and our bug count goes up.
5. Our fixing vulnerabilities makes hackers smarter/better.
6. Most hackers only exploit publicly disclosed vulnerabilities. By not fixing bugs our customers are protected against most hackers.

Should Vendors Close All Security Holes? (2, Insightful)

WrongSizeGlass (838941) | more than 7 years ago | (#19119017)

Yes.

Re:Should Vendors Close All Security Holes? (2)

jaavaaguru (261551) | more than 7 years ago | (#19119111)

Agreed. All security holes should be fixed. I realise that with the testing effort involved in large projects it may not be feasible to get the fixed product out instantly, and it may require waiting until the next planned release - if the problem is a small one that is unknown to the public.

If it's a problem that people know about and could be serious, then I think it should definitely be fixed ASAP.

Re:Should Vendors Close All Security Holes? (5, Insightful)

eln (21727) | more than 7 years ago | (#19119131)

Also, vendors should include a free pony with every software license they sell.

Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes. Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk. In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.

If, for example, the underlying design of your product allows for a minor, difficult to exploit security hole, it is probably not worth it to spend the time and money to redesign the product. More likely, your choices would be either a.) live with the (small) vulnerability, or b.) scrap the product entirely.

The decision to close a security hole should be dependent on the potential impact of the hole, the urgency of the issue (are there already exploits in the wild, for example), and how many resources (time and money) it will take to fix it.

Bullshit. (1, Interesting)

khasim (1285) | more than 7 years ago | (#19119547)

Closing all vulnerabilities is not practical.

Then running that software is not "practical". Any vulnerability is a vulnerability.

In any sufficiently complex piece of software, there will be bugs and security holes.

And "sufficiently complex" is defined as having "bugs and security holes". No. That just means that there is no such thing as "security". And we've been over that before.

Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk.

Right. And enough of those "not particularly high risk" vulnerabilities can be linked together to crack your machine as surely as 1 remote root exploit could be.

In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.

"Best" in this case is being defined as "best for the company selling the software" and NOT "best for the people using that software".

What would be best for the users is the knowledge that there are fundamental security issues and that they might want to use a competitor's product.

If, for example, the underlying design of your product allows for a minor, difficult to exploit security hole, it is probably not worth it to spend the time and money to redesign the product.

Again, "not worth it" from the point of view of the vendor. Let's be clear on that.

The decision to close a security hole should be dependent on the potential impact of the hole, the urgency of the issue (are there already exploits in the wild, for example), and how many resources (time and money) it will take to fix it.

How would you KNOW if there were already exploits in the wild? Unless someone was advertising them.

That approach means that an exploit can sit for years at the "unimportant" level ... until one day it hits the "EMERGENCY!!! FIX IT NOW!!!" level.

Yeah, that's the kind of support I want from my vendors.

Re:Should Vendors Close All Security Holes? (1)

644bd346996 (1012333) | more than 7 years ago | (#19119953)

The headline is a bit misleading. The article is not about what you seem to think it is about. The company in question, as a standard procedure, does not release patches for many bugs that they have already created fixes for.

Once you have developed a fix, it is completely unethical to wait indefinitely to release the fix. The longest acceptable wait is until the end of the code audit to look for similar holes in other parts of the code base. This should only take a few months.

Hear Hear (1)

Khammurabi (962376) | more than 7 years ago | (#19120013)

Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes. Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk. In these cases, especially if the fix would involve a major redesign or other highly disruptive solution, it may be best to just leave them alone.
Having worked on commercial software for a few years now, I'd have to agree with the parent. All complex programs come with bugs, period. I'd have to wager any application that is completely free of bugs is either a non-commercial product, or is in a stagnant market. The bugs that are fixed for patches are usually the ones deemed during the triage process as being the most critical and/or most time-efficient to fix.

As a developer, while I'd like to fix all the bugs in the system, the truth is there will always be a few that require architecture changes or impact large sections of code, and as such tend to persist in the application. Since these bugs tend to require a large time investment for almost no perceptible gain for the end-user, they are often ignored in favor of devoting time to new features.

Should all bugs be fixed? Yes. Should all bugs take precedence over new features? No. This is not to say developers do not try to fix long-standing issues, just that the churn rate for bugs tends to be rather constant. (Once an older bug is fixed, a new one takes its place.)

Re:Should Vendors Close All Security Holes? (1)

vertinox (846076) | more than 7 years ago | (#19120105)

Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes.

I hate to say this, but if your software is so complex that it is impossible to fix all the security holes... Then maybe you shouldn't make it so complex.

I mean even something as complex as OS X has security holes, but not so many that it requires the developers to throw their hands in the air and say "Oh we give up!" at some point.

Seriously, if your product is so complex and possibly bloated to the point where it would be impossible to fix something without breaking another part, then you should really consider starting over from scratch. Maybe fire and hire a few new programmers, and stop listening to customers for every inane feature possible, because a security breach will cost your customers more money than lack of functionality ever will, and it may possibly cause you to lose them as future customers.

But I believe we might also have a difference of opinion about what a security hole is...

Even a minor hole should be tracked and at least reported. Heck, even Apple patched that wifi hole that required 3rd-party hardware to exploit. That might not even affect 0.001% of their consumer base, but they fixed it.

Re:Should Vendors Close All Security Holes? (0)

Anonymous Coward | more than 7 years ago | (#19120327)

You are talking about two different things at the same time.

"Closing all vulnerabilities is not practical. In any sufficiently complex piece of software, there will be bugs and security holes."

Of course, because you cannot know when you are done.

"Obviously, you need to close the nasty ones, but many of these exploits are not particularly high risk."

This is about KNOWN vulnerabilities. You can fix all KNOWN vulnerabilities, if you want to.
What you "need to close", and "want", depends on what your incentives are - and cost is only one of them.
(Try to explain how OpenBSD is more secure than Microsoft using only "cost" or "resources", for example)

Re:Should Vendors Close All Security Holes? (2, Insightful)

RahoulB (178873) | more than 7 years ago | (#19119849)

Should Vendors Close All Security Holes?

NO

ACTIVEX IS A FEATURE!

AMERICAN programmers write shitty code (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19120071)

hi, im from delhi, u should listen when i tell you how it is.

1. Don't assume we write crappy programs. "Crappiness" of a program has nothing to do with skin colour. I have worked with american programmers who write pretty shitty code.

2. Don't assume we just write programs. India is a country of great thinkers. Our prestigious MBA program from the Indian Institute Of Management has the toughest entrance examination in the world. Just look up the number of Indians employed in senior positions of the top American companies. You'll be surprised.

3. Don't assume you have greater brains. We are the country that invented Mathematics (am sure a good portion of that is used while programming).

4. Don't assume you are better at communication. I know of a lot of so-called English speaking people who don't know the difference between "than" and "then", "there" and "their". And the accent..ugh...we have the most neutral of all accents. Try communicating with a Dutch, German, French or a Chinese person... You would wish they were Indian.

I could go on and on. Bottom line is... we are not looking to compete with the American programmer. There is no competition. We certainly do not want anyone to lose their jobs. We just want the INR to be at par with, if not better than, the USD. We want a better quality of living, whether it means we work for you, or you work for us. Don't u want a better quality of life too? Isn't that why you go and bombard every country you find oil in? Be thankful for what you have and wait for the time when you will be fighting with each other to come and work in India.

Long answer yes with a but (1, Insightful)

Anonymous Coward | more than 7 years ago | (#19119019)

Short answer no with a maybe.

Bill? (0, Troll)

Dorsai65 (804760) | more than 7 years ago | (#19119053)

Is that you, Mr. Gates?

YOU'LL GET OVER IT N/T (-1, Troll)

Anonymous Coward | more than 7 years ago | (#19119057)

N/T MEANS NO TEXT, JERKY MCJERKENHEIMER


Two words: Exploit Chaining (5, Interesting)

Gary W. Longsine (124661) | more than 7 years ago | (#19119077)

Exploit Chaining [blogspot.com] means that low-risk holes can become high-risk holes when combined. Patch them all. Patch them quickly.
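
To make that concrete, here's a made-up C sketch (not from the linked post) of two bugs that would each be triaged as low or medium risk on their own, but chain into a reliable compromise: an information leak that defeats address randomization plus an overflow that needs exactly that information:

/* Invented example of exploit chaining: each bug alone looks minor. */
#include <stdio.h>
#include <string.h>

/* Bug 1: format-string info leak. "Low risk": it can't write memory,
 * it only prints stack words -- but those words include pointers and
 * return addresses, which is what defeats ASLR. */
static void log_message(const char *user_supplied)
{
    printf(user_supplied);          /* should be printf("%s", user_supplied) */
    putchar('\n');
}

/* Bug 2: classic stack overflow. "Medium risk": with ASLR the attacker
 * doesn't know where to jump... unless bug 1 already leaked the layout. */
static void handle_request(const char *payload)
{
    char buf[64];
    strcpy(buf, payload);           /* should be bounded by sizeof(buf) */
    printf("handled: %s\n", buf);
}

int main(void)
{
    /* Benign inputs only; the point is the pattern, not a working exploit. */
    log_message("%p %p %p");         /* leaks addresses to the "attacker" */
    handle_request("short request"); /* fine until a payload exceeds 64 bytes */
    return 0;
}

Patch either one and the chain breaks, which is the parent's point: severity ratings that look at each hole in isolation understate the risk.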

ATTN: SWITCHEURS! (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#19119581)

If you don't know what Cmd-Shift-1 and Cmd-Shift-2 are for, GTFO.
If you think Firefox is a decent Mac application, GTFO.
If you're still looking for the "maximize" button, GTFO.
If the name "Clarus" means nothing to you, GTFO.

Bandwagon jumpers are not welcome among real [imageshack.us] Mac [imageshack.us] users [imageshack.us] . Keep your filthy, beige [imageshack.us] PC fingers to yourself.

Re:Two words: Exploit Chaining (0)

Anonymous Coward | more than 7 years ago | (#19119749)

...But on the flip side, not every bug can be exploited, even though it may seem severe. Not every conceivable buffer overflow is usable in practical terms. I guess the point is, you have to prioritize.

And seriously, if you want software with guaranteed security or reliability, you probably don't want consumer-grade stuff to begin with. (Just call your local nuclear power plant and ask them who wrote their process management software.)

Re:Two words: Exploit Chaining Be... (1)

davidsyes (765062) | more than 7 years ago | (#19120023)

affredd... be veddy effredd.

In code space, noone can hear you scdream...

This is a GOOD case of "chain" smoking...

Maybe they need to debug their approach? Is anyone else bugged by this?

Waste of time and money (2, Insightful)

jshriverWVU (810740) | more than 7 years ago | (#19119087)

"We still research the bug and come up with tentative solutions, but we don't patch the problem." I can understand the point if it's to save time and money for other things, but if they are going to find a solution to the problem anyway, and the time/money is already spent, then that work is completely wasted if it isn't utilized. Plus you're risking the data by not closing a known hole or bug. Doesn't make sense.

Re:Waste of time and money (3, Informative)

Richard McBeef (1092673) | more than 7 years ago | (#19119139)

I can understand the point if it's to save time and money for other things, but if they are going to find a solution to the problem and time/money is already spent, then that is completely wasted if it isn't utilized.

Patch and next version are different things. They fix the hole but don't release a patch. The fix is released in the next version.

Words Important (1)

Ahnteis (746045) | more than 7 years ago | (#19120209)

>>We still research the bug and come up with ****tentative**** solutions, but we don't patch the problem.

They come up with a tentative solution, but they don't spend the time to do full testing on it unless it's a critical security hole. Why? As mentioned, they prefer to:
1) Use limited resources to focus on finding critical problems
2) Not introduce new code to a known system unless necessary
3) Not put their real-world, paying-customers-who-don't/can't-patch in danger.

Again, the letter makes clear that they do patch the most critical things, and anything that is public (and thus likely to be exploited).

If the hole becomes known, they can issue a fix more quickly because they already have a starting point.

Dentistry analogy (0)

Anonymous Coward | more than 7 years ago | (#19119105)

You don't have to floss all your teeth, just the ones you want to keep.

Their arguments: 1-5 (5, Informative)

Palmyst (1065142) | more than 7 years ago | (#19119149)

As is too common, the /. summary doesn't have the relevant portions of the article under discussion, so let me try to summarize the main points of their argument.

1. It is better to focus resources on high risk security bugs.
2. We get rated better (internally and externally) for fixing publicly known problems.
3. Hackers usually find additional bugs by examining patches to existing bugs, so a patch could expose more bugs than fixes are available for.
4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
5. Not all customers immediately patch. So by announcing a patch for a bug previously unknown to the public, we actually exponentially increase the chances of that bug being exploited by hackers.

Re:Their arguments: 1-5 (1)

runswithd6s (65165) | more than 7 years ago | (#19119327)

4. If we disclose a bug and fix it, it just escalates the "arms race" with the hackers. Better to keep it hidden.
Exactly what Mr. Blackhat is thinking, "Keep it hidden." Chances are if the software developers know about an exploitable bug, so do the crackers. It may be that the "script-kiddies" are still in the dark, but I don't consider "script-kiddies" as being in the "arms race." Note, however, that the article didn't say the company wouldn't fix low/medium security bugs, rather they wouldn't publicize the fix when it was finally rolled out in the product.

Re:Their arguments: 1-5 (1)

larsroe (966853) | more than 7 years ago | (#19119503)

Good summary. Are hackers reading the description of what the patch fixes, or are they seeing how the binaries changed? Is there a way of patching that makes it difficult to detect what really changed? Or maybe changing some unused bytes as a red herring?

6 of one, half a dozen of the other (3, Insightful)

KitsuneSoftware (999119) | more than 7 years ago | (#19119151)

It could work as well as the normal method, but if it catches on, it will mostly be used as an excuse to not do anything until publicly shamed. Call me cynical.

Bugs should be fixed (5, Interesting)

Anarchysoft (1100393) | more than 7 years ago | (#19119167)

"Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem."
I don't believe this is a prudent approach. A bug often causes (or masks) more problems than the one issue that prompted the fix. In other words, fixing a bug that causes a known issue can also fix several unknown issues. Without a significant reason not to (such as a product that has been completely replaced, in a company with very limited resources), it is irresponsible to not fix bugs. The debatable point is how long small bugs should be allowed to collect before issuing a point release.

Fixing bugs can also expose other bugs (1)

EmbeddedJanitor (597831) | more than 7 years ago | (#19120161)

fixing a bug causing a known issue can also fix several unknown issues

Just as often the reverse applies. A bug often shadows other bugs. Take away the main bug and there's just another right behind it which might even be worse. This is why you don't just "shoot from the hip" when fixing bugs.

Re:Fixing bugs can also expose other bugs (1)

Anarchysoft (1100393) | more than 7 years ago | (#19120387)

fixing a bug causing a known issue can also fix several unknown issues
Just as often the reverse applies. A bug often shadows other bugs. Take away the main bug and there's just another right behind it which might even be worse. This is why you don't just "shoot from the hip" when fixing bugs.

Are you suggesting it's better to not fix the bug and thus reveal the problems it masks? When fixing bugs causes many bugs it usually indicates one of two common things:

  1. The bug fixer is being careless and there isn't sufficient regression testing happening, or
  2. the initial code was fragile to begin with.

In many ways, problem 1 is much more serious than problem 2 as the code is probably getting worse rather than better. It would seem to follow then that the fact that a person 'fixing' something can screw it up more is not a strong rationale for avoiding fixing bugs as long as the engineers' and SQA skill level is sufficient to handle the risk associated with the project. Of course, if people would just engineer things right the first time... ;)

Procrastination? (1)

Jarjarthejedi (996957) | more than 7 years ago | (#19119175)

Seems like a pretty dumb move to me. You have the choice of A: patching immediately, costing you a few hours of time from a couple of your employees, or B: hoping that it won't be a big risk, effectively betting a few hours of time against the possibility of a huge security breach and the corresponding bad press that comes with it.

Seems like a small patch wouldn't be that much trouble and would avoid much larger problems...

Re:Procrastination? (0)

Anonymous Coward | more than 7 years ago | (#19119319)

A: Patching immediately, costing you a few hours of time from a couple of your employees

Look, sizzlechest, we're not talking about the security hole in your project out of "My First PHP Website For Dummies".

Re:Procrastination? (4, Insightful)

The_Wilschon (782534) | more than 7 years ago | (#19119539)

Low-risk does not mean easy to fix. Sometimes, a bug might be a very low-risk bug, but demand immense amounts of time to find and fix. For instance, sometimes I might be writing a program, and at some point, it begins crashing unpredictably, but very rarely. I know that there is a bug, but I have no idea what the trigger is, I have no idea which part of the code contains the bug, and I have no idea how to fix it. Since the MTBF is (say) 3 months, and (say) the code is not long-running (like a daemon or a kernel), it is probably not worth finding and fixing the bug.

Now, that's bugs, which is a wider category than security holes. So, suppose that instead of crashing, it very rarely and briefly enters a state in which a particular sequence of bytes sent to it via the net can cause it to execute arbitrary code. Furthermore, suppose the program should never be running as root, so the arbitrary code is nearly useless. This is a low risk security hole, and probably not worth patching.

Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probability of ever seeing this exploited is very very low. Should it then be patched?

Re:Procrastination? (1)

Chris Burke (6130) | more than 7 years ago | (#19120369)

Now, that's bugs, which is a wider category than security holes. So, suppose that instead of crashing, it very rarely and briefly enters a state in which a particular sequence of bytes sent to it via the net can cause it to execute arbitrary code. Furthermore, suppose the program should never be running as root, so the arbitrary code is nearly useless. This is a low risk security hole, and probably not worth patching.

Could take hundreds of man-hours to find the cause, and perhaps even longer to fix. Probability of ever seeing this exploited is very very low. Should it then be patched?


Absolutely, though I don't think that exploit should have ever been rated 'low' priority in the first place. Execution of arbitrary code not run as root is still very bad, not just because being able to do anything the service can do is bad, but also because any local-privilege-escalation bug then combines with this bug to form a complete remote-exploit-box-pwnage bug. Since your software company probably doesn't deal with any of the multitudinous pieces of software that could result in a local escalation exploit, you may not be warned in advance when such an exploit becomes known. So you've sat on this "low" priority bug for years with no intention of fixing it, and suddenly all your customers are vulnerable, and you're still at square zero with regards to fixing your now high-priority and still high-effort-to-fix bug.

There is ultimately no question. Yes, the bug should be fixed. There is nothing wrong with assigning priorities, and working on the "high", then "medium", then finally "low" bugs. Unless your "low" bug should really be a "high" bug. But there should absolutely never be a policy of "we will not fix this bug until our customers hear of it".

Re:Procrastination? (1)

LWATCDR (28044) | more than 7 years ago | (#19119723)

Except for:
C: the fix may cause a bug or other issues. Something may stop working.

It also depends on the security problem. If it is a local exploit then it may not be worth fixing right then.

I think everyone is confused here. These are not exploits that they have closed and just haven't decided to send out the patch for. These are exploits that they haven't created the patch for. A security team has limited resources. They may have X exploits, so it is only logical to fix the most critical first.
 

Re:Procrastination? (2, Insightful)

TheNicestGuy (1035854) | more than 7 years ago | (#19120225)

You have the choice of, A: Patching immediately, costing you a few hours of time from a couple of your employees or B: Hoping that it won't be a big risk effectively betting a few hours of time against the possibility of a huge security breach and the corresponding bad press that comes with that.

Not that simple. Developing a patch does not fix a security hole. Releasing a patch does not fix a security hole. Applying a patch fixes a security hole, if all goes well. When you combine the fact that the number of holes in existence is multiplied by the number of installations, with the fact that the development team very rarely has any power over when patches are actually applied, security through obscurity doesn't look so cut-and-dried naïve versus publicizing your holes by releasing patches.

Notice that the person who posted the argument in the article never said they leave holes unpatched to save "a few hours of time". He didn't say they left holes unpatched at all. He said that they prioritized based on severity and publicity—who wouldn't? And he said that patches for unknown holes were developed, tested, and not released until they were needed. This gives them the advantage of not alerting the malicious user community that there are holes they can exploit if they act more quickly than the users who have to apply the patches. But it also means that when the time comes for them to release the patch, they know it's been patiently tested, not kneejerked out the door. I'd rather have no patch for an unknown hole that's not likely to be exploited than a patch that's going to make my software buggy.

The only downside to this is that you have to trust the developers to correctly assess the risk of leaving unknown holes unpatched. If they think a hole is unlikely to be randomly spotted and will only do minimal damage if it is, but script kiddies spot it in a week and manage to get remote root access from it, yes they screwed up by not patching it right away. Mistakes like that can be made the same way the mistakes that created the hole in the first place were made. But you have to make such judgments just to prioritize the holes, regardless of what you intend to do with them. And there certainly isn't anybody more qualified to make those judgments than the developers who discovered the hole.

Yup (1, Insightful)

derEikopf (624124) | more than 7 years ago | (#19119189)

Yup, it's not about quality software, it's about money. Hardly anyone makes software anymore because they want to or because they like making quality software...they'll just do the bare minimum they can to maximize profit.

Re:Yup - not really (1)

qaz2 (36148) | more than 7 years ago | (#19120147)

This is not entirely true. It is not always feasible to change your software just like that. Processing patches can be a lot of work for the customer, as their IT department might wish to test the patched version on test systems. Furthermore a patch may impact other parts of an application, especially if a bug resides deep in the framework and the application is complex.

In such a case it really is not bad to consider fixing a low-risk bug in the next "normal" release, once the usual full QA pass has verified that nothing else is, or appears to be, broken.

Leaving a high-risk, or highly irritating, bug alone will cost you customers. But dumping daily patches on your customers' ICT department will also not endear you to them; and changing functionality or accidentally introducing new bugs will not endear you to the end users.

It's always a difficult decision to make, and you never make the right one. :)

By the way, I like making software, and I like to make quality software. It's all about the money, but consistently delivering bad or buggy software will cost you your customers. It does not (always) pay to be cheap, and I believe most companies know that.

Author is Right (4, Interesting)

mpapet (761907) | more than 7 years ago | (#19119217)

Pre-emptive disclosure works against the typical closed source company.

Option 1:
Exploit is published, patch is delivered really quickly. Sysadmin thinks, "Those guys at company X are on top of it..." PHB will say the same.

Option 2:
Unilaterally announce fixes, make patches available. Sysadmin doesn't bother to read the whole announcement and whines because it makes work she doesn't understand or think is urgent. PHB thinks "Gee company X's software isn't very good, look at all the patches..."

The market for secure software is small; it's even smaller if you add standards compliance. Microsoft is a shining example of how large the market is for insecure, non-standard software.

Depends on what you call a security hole.... (2, Insightful)

zappepcs (820751) | more than 7 years ago | (#19119219)

Examples:
Not likely to be fixed completely - In some ways, Windows is a security hole
Could be fixed if escalates - password strength and use
Should be fixed - Lack of any authorization requirements etc.

If you remember the Pinto car-b-ques, there is a money factor to think about. Since most standard computing systems are not life-critical, some bugs can be left for later. Some bugs you might know about are not even in your own code, such as those shipped with the networking stack of the RTOS that you use for an embedded product. An insecure FTP client on an embedded machine that has no access to other machines or sensitive material is not terribly bad.

On the other hand, if the machine can be compromised and allow the attacker access to other machines... that needs to be fixed.

Re:Depends on what you call a security hole.... (1)

blindd0t (855876) | more than 7 years ago | (#19119485)

Why was this modded as troll - just because of the statement, "In some ways, Windows is a security hole"?

There is some truth in this, IMHO. An example of how one could consider this true is that Microsoft no longer offers patches for older versions of Windows. I understand it is at the user's discretion to keep up to date with costly Windows upgrades and to keep up with patches, but I still maintain that there is some truth to this statement, even if it is only a little bit of truth. Furthermore, any OS (especially an older version) has vulnerabilities, in which case one could clearly argue that any OS and any piece of software is a security hole. Also, a good point was made about how important it is to patch based on the possibility of escalation. Though a small device being compromised might itself be a small risk, it could escalate to being a high risk if it is possible to compromise another, more critical machine from the one that seemed to be a small risk. The key is to either not take any chances where possible, or be as diligent as reasonably possible to see through such risks beyond only the obvious.

A car analogy... (3, Insightful)

mi (197448) | more than 7 years ago | (#19119223)

Wasn't it GM that lost millions of dollars a few years ago in a lawsuit brought by people (and their kin) whose car was rear-ended on a toll plaza and exploded in flames?

GM's arguments, that making the car's fuel-tank more protected was too expensive for the modicum of additional safety that would've provided, were — for better or worth — ignored by the jury...

In other words, you may not deem a security hole to be large compared with the expense of pushing out another patch, but if somebody gets hurt, and their lawyer subpoenas your internal e-mails on the subject, you may well be out of business.

Re:A car analogy... (0)

Anonymous Coward | more than 7 years ago | (#19119525)

No, that was a Ford. I remember reading about that in my Ethics class.

Re:A car analogy... (4, Interesting)

LWATCDR (28044) | more than 7 years ago | (#19119563)

It was Ford and it was the Pinto. The problem is:
1. The Pinto, even before the "fix", didn't have the highest death rate in its class. Other small cars had the same deaths per mile or worse.
2. The NHTSA had the dollars-per-death figure in the national standards for safety, and Ford referenced it in their internal documentation, which the lawyer used in the case.
3. Had Ford not identified the risk that a bolt posed to the fuel tank and documented it, they probably wouldn't have lost so big in court.

Just thought I would try and kill a myth with some truth. Probably will not work but it is worth a shot.

Re:A car analogy... (4, Funny)

6Yankee (597075) | more than 7 years ago | (#19119803)

for better or worth

Let me guess - you're a LISP programmer?

Re:A car analogy... GM would probably fix a (1)

davidsyes (765062) | more than 7 years ago | (#19120107)

4-inch crack in a chastity belt by applying multiple layers of Armor All, or maybe Poly-glycote...., butt, on the other hands....

(CAPTCHA: DISCREET)

What do they have to gain? (1)

644bd346996 (1012333) | more than 7 years ago | (#19119251)

Why would a corporation choose to not release a patch for a known security vulnerability, even if it is minor? Wouldn't it be better PR to always release the patch before the exploit comes out? This sounds totally unethical to me. They are trying to take an ostrich approach to security: the bug doesn't exist unless the customer can see it.

Besides, aren't there liability issues with knowingly shipping a product with undisclosed defects? What if they underestimate the severity of a vulnerability? How can they be so confident that they have judged the severity correctly, when they are the ones who created the bug in the first place? This sounds a lot like brinkmanship to me, and I wouldn't want to be associated with that kind of company.

Re:What do they have to gain? (2, Insightful)

Dan Ost (415913) | more than 7 years ago | (#19120055)

Besides, aren't there liability issues with knowingly shipping a product with undisclosed defects?

They fix the problem as soon as they discover it. The next release of the product does not have the problem. If the problem becomes public before the next release, then they immediately issue the patch for it and hope that people patch.

As long as they release often enough that the fixes are largely in place before the problems are found, I have no issue with this. It actually seems responsible, since it poses less risk to the customers that are slow in patching their systems than the alternative.

Re:What do they have to gain? (1)

644bd346996 (1012333) | more than 7 years ago | (#19120291)

IANAL, but doesn't this strategy leave them open to lawsuits from the customers who are affected by a zero-day exploit? What if one of these "minor" bugs actually enables a DoS? It would seem that the affected customers could sue for damages even if the company had underestimated the severity of the bug.

Re:What do they have to gain? (0)

Anonymous Coward | more than 7 years ago | (#19120393)

Why would a corporation choose to not release a patch for a known security vulnerability, even if it is minor?

Because it costs a lot of money: backporting an existing, possibly quite complex fix with multiple dependencies on new features, code review of the backport, creating a patch (possibly several patches per release if this is cross-platform software), and running all these patches through full automatic and manual regression test suites. Money you can spend only once. Money which the paying customers expect you to spend on the patches they requested, rather than on the patches you would want to create.

Thomas

(2) possible outcomes (2, Insightful)

DebianDog (472284) | more than 7 years ago | (#19119255)

The one you hope for: someone finds and publicly announces a problem. Your team looks "quick to act" and deploys a solution.

That other one: Someone exploits the bug to a degree you and your team never considered and your user community is devastated.

...and... (1)

commodoresloat (172735) | more than 7 years ago | (#19120429)

That other one: Someone exploits the bug to a degree you and your team never considered and your user community is devastated.
And they initiate legal proceedings based on the policy of not fixing known security bugs.

yes (2, Insightful)

brunascle (994197) | more than 7 years ago | (#19119261)

All known security bugs should be fixed, but low-risk non-public ones can be low priority. We can't expect any vendor to send out a patch each and every time they find a security bug, but once they find one, the next version they release damn well better have it patched.

Reactivo! (1)

evil_Tak (964978) | more than 7 years ago | (#19119265)

The arguments only make sense if one's development methodology is purely reactive. The guy makes it sound like his entire software team is scampering around frantically putting out fires, and doesn't have any time to spare on any bug that hasn't reached three-alarm status yet.

I'd rather squash my bugs when they're unknown or barely known than race to hack together a fix when the whole world is screaming. But apparently that's just me.

Proposal (2, Interesting)

PaladinAlpha (645879) | more than 7 years ago | (#19119301)

I think developers and companies should think long and hard about how such policies would be received if the end-user were presented with them in plainspeak.

"Welcome, JoeUser. This is WidgetMax v2.0.3. As a reminder, this product may contain security holes or exploits that we already know about and don't want to spend the money to fix because we internally classify them as low-to-medium risk."

I'm not saying it's necessarily wrong -- budgets are finite -- but keeping policies internal because of how they would be viewed publicly is deceiving your customer, full stop. Also, these guys are setting themselves up as final arbiter of what's risky in code exploits. It makes one uncomfortable to think about.

Liability insurance? oh wait.. it's software... (0)

Anonymous Coward | more than 7 years ago | (#19119303)

Think of this in some other industry.

If, say, a major car company found a systemic flaw in their product, they would be compelled to fix it and rework any product that had already shipped.

This is called a recall.

Something about Fight Club.... and some a * b Cost of a recall math.

Not fix bugs? Not a good idea. (1)

Todd Knarr (15451) | more than 7 years ago | (#19119375)

A security hole is a bug, plain and simple. There's no excuse for deliberately not fixing a bug. Now, you can make an argument that if the bug's minor and not causing customer problems you should hold the fix for the next regularly-scheduled release, but that's about it. The argument that unannounced holes don't seem to be being exploited is particularly disingenuous. People aren't looking for exploits of holes they don't know about. It's not surprising, then, that few people are reporting problems they aren't looking for. What's more likely is that the small subset of crackers who can find unannounced holes are quietly using them for their own gain, keeping a low profile specifically to avoid having customers raising a stink and forcing the vendor to close the hole, and only after it finally goes public do they release their secret to the wider script-kiddie population since it's no longer of use to them.

Why is software different than tangible things? (2, Interesting)

Dracos (107777) | more than 7 years ago | (#19119437)

If an automaker or toy manufacturer didn't issue a recall on a minor safety issue immediately, they'd get tons of bad press. But a software company can sit on just about any security bug indefinitely (I'm looking at you, Microsoft) and few people care.

I suspect 2 factors are at work here:

  1. The general public doesn't care about software security because it doesn't affect their daily lives
  2. There's no "think of the children!" emotional aspect to software

#2 probably won't ever happen industry wide, and until the public understands how much impact software security can have, they won't care.

100% Correct -- for many reasons (5, Interesting)

holophrastic (221104) | more than 7 years ago | (#19119453)

We do the same thing. Every company has limited resources, and every decision is a business priority decision. So the decision is always between new features and old bugs.

Outside of terribly serious security holes, security holes are only security holes when they become a problem. Until then, they are merely potential problems. Solving potential problems is rarely a good idea.

We're not talking about tiny functions that don't consider zero values. We're talking about complex systems where every piece of programming has the potential to add problems not only to the system logic, but also to add more security holes.

That's right, I said it -- security patches can cause security holes.

It is our standard practice not to touch things that are working. Not every application is a military application.

I'll say it again. Not every application is a military application.

Your front door has a key-lock. That's hardly secure -- they are easily picked. But it's secure enough for most neighbourhoods.

So the question with your software is: when does this security hole matter, and how does it matter. If it's only going to matter when a malicious person attacks, then the question comes down to how many attackers you have. And if those attackers are professional, you might as well make their life easier, because they'll get in eventually in one way or another -- I'd rather know how they got in and be able to recover.

How it matters. If it reveals sensitive information, it matters. If it causes your application to occasionally crash, when an operator is near-by, then it doesn't matter.

There are more important things to work on -- and many of these minor security holes actually disappear with feature upgrades, as code is replaced.

Re:100% Correct -- for many reasons What...? (1)

davidsyes (765062) | more than 7 years ago | (#19120195)

If it aint FOKE don't BRIX it? EESHSH

I suppose that is why some software companies use the S/W equiv of Poly-Razz-Ma-Tazz... overwhelm the user with a plethora of "features" so that metrics look better when ONE otherwise major bug would stand out in a small-feature program/app. But, then many customers (and devs/publishers) cannot see the forest for the trees.

Re:100% Correct -- for many reasons (1)

UncleTogie (1004853) | more than 7 years ago | (#19120303)

If it causes your application to occasionally crash, when an operator is near-by, then it doesn't matter.

Says you. Try deploying a restaurant point-of-sale system that only crashes "occasionally". You'll have managers just LOVING your software when it goes down during rush hour. It matters...

If you HAVE a solution, you should fix it. (2, Insightful)

gurps_npc (621217) | more than 7 years ago | (#19119521)

That is, if you have a patch, then you should fix it.

I could see holding it until your next regular patch release, so as to avoid bringing it to the attention of the hackers.

But the rest of his arguments are pretty crappy.

Uh.. A/V bad? (1)

ushering05401 (1086795) | more than 7 years ago | (#19119605)

From TFA:

"It's possible that if anti-virus software had never been created, we wouldn't be dealing with the level of worm and bot sophistication that we face today."

And if we didn't use antibiotics we wouldn't be seeing the current evolutionary pace of biological malware.

TFA presents some points for discussion, but this doesn't strike me as one of them.

Did I really just type 'biological malware?'

Regards.

Just because Microsoft acts that way... (1)

644bd346996 (1012333) | more than 7 years ago | (#19119627)

...doesn't mean it is the key to success.

I really want to know what company the "reader" works for, so I can add them to my shit list. I don't want to support such abhorrent security practices.

And remember: Friends don't let friends buy Microsoft.

Risk of litigation (2, Insightful)

Dancindan84 (1056246) | more than 7 years ago | (#19119641)

IANAL, but if a company suffers a significant financial loss due to a bug that the vendor knew about but did not patch, does that not open the vendor up to big-time lawsuits?

fraud? (1)

DM9290 (797337) | more than 7 years ago | (#19119689)

selling a product which you know does not meet specifications in order to derive a benefit is a crime called "fraud".

Trying to artificially inflate your bugfix ratings or trying to save money is a benefit.

I don't see how this company expects to evade legal CRIMINAL responsibility when someone is harmed because of a pre-existing security problem they knew about but did not disclose at the time of sale.

there is no taxonomy of problems (1)

gelfling (6534) | more than 7 years ago | (#19119805)

All problems are theoretical until they happen to you. Before they happen to you they fall into two categories:

It can never happen to you: Probability = 0.0
It might happen to you: Probability 0.0 < x < 100.0

The problem is that we don't know this when we hear of a problem. All we hear about is the theoretical problem, and the probability of a theoretical problem being theoretically true = 100.0%. If we had a way to neatly classify vulnerabilities by both the probability of occurrence and the probability of something of a given 'oh-shit'-edness happening, then we could easily ignore problems that are more expensive to fix than the risks they present. Since we can't really do risk analysis we're stuck with trying to do everything, I guess, or not. I'm not sure. Maybe it doesn't matter much.
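
For what it's worth, the "classify by probability and impact" wish is just expected-value arithmetic. Here is a minimal sketch in C; every probability, loss figure, and fix cost is invented purely for illustration and comes from nowhere real.

    /* Back-of-the-envelope risk triage: all numbers are made up.
     * Expected annual loss = (probability of exploit per year) * (cost if exploited).
     * Defer the fix only when the expected loss is below the cost of patching. */
    #include <stdio.h>

    struct bug {
        const char *name;
        double p_exploit_per_year;  /* guessed probability of occurrence */
        double loss_if_exploited;   /* guessed "oh-shit"-edness, in dollars */
        double cost_to_fix;         /* guessed engineering cost of the patch */
    };

    int main(void)
    {
        struct bug bugs[] = {
            { "low-risk info leak",      0.02,  50000.0,  8000.0 },
            { "medium-risk auth bypass", 0.10, 250000.0, 20000.0 },
        };

        for (size_t i = 0; i < sizeof bugs / sizeof bugs[0]; i++) {
            double expected_loss = bugs[i].p_exploit_per_year * bugs[i].loss_if_exploited;
            printf("%-25s expected loss %8.0f vs fix cost %8.0f -> %s\n",
                   bugs[i].name, expected_loss, bugs[i].cost_to_fix,
                   expected_loss > bugs[i].cost_to_fix ? "fix now" : "defer");
        }
        return 0;
    }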

Overworking coders and dumb PHBs does not help (1)

Joe The Dragon (967727) | more than 7 years ago | (#19119815)

When you work people 80+ hour weeks you get a lot more bugs in the code.
When you rush things out to meet a clueless PHB's deadline, things get passed over.
When you cut funding, sometimes you don't have the hardware to do full testing, and you end up with test code on the server or a desktop turned into a test server.
When you waste the coders' time on crap like TPS reports and useless meetings to talk about why things are running late, that does not help.

Hit em in the pocket (1)

packetmon (977047) | more than 7 years ago | (#19119823)

The problem with vendors closing security holes is that they often can't keep up with the volume of them.

In the case of MS, how often do they close one hole only to open up another? I don't want to throw OSes around, but look at team OpenBSD: regardless of the smug attitudes, you have to give Theo and his group credit. They don't release for the sake of keeping up with the Joneses. They're methodical, accurately screening and scrutinizing what their OS does, what it's supposed to do, and how it does it.

The issue with vendors is that they're in a race to meet stockholder/shareholder demands and are often releasing whatever they can to meet sales forecasts. This leads to shoddy designs and implementations. If you ask me, some of these vendors should be getting fined by governments based on the severity of their holes. I'm sure with that kind of pressure, they'd think twice before hurrying something cruddy and insecure out the door.

If I were a vendor and all of a sudden I was getting fined for not taking security seriously, I would put on a big old about-face and audit the hell out of my code. But what incentives, qualms, repercussions face vendors now? None. People will bitch about MS' security yet go out and buy Vista. How about people start taking things very seriously and start suing some of these vendors -- "my personal information was compromised because of shoddy program X" class-action lawsuits. Hit 'em where it hurts; eventually they will either listen or go broke.

I once worked for a place like that (4, Interesting)

pestilence669 (823950) | more than 7 years ago | (#19119835)

The place I worked for was a security company. They had no automated unit testing and never analyzed for intrusions. You'd be shocked to find out how many holes exist on devices people depend on to keep them safe. The employees took it upon themselves (subverted authority) to patch our product. Security problems, even on security hardware, were not "priority" issues.

We too "trained" our coders in the art of secure programming. The problem, of course, is that we were also training them in basic things like what a C pointer is and how to not leak memory. Advanced security topics were over their head. This is the case in 9 out of 10 places I've worked. The weak links, once identified, can't be fired. No... these places move them to critical projects to "get their feet wet."

At the security giant, training only covered the absolute basics: shell escaping and preventing buffer overflows with range checking. The real problem is that only half of our bugs were caused by those problems. The overwhelming majority were caused by poor design often enabled by poor encapsulation (or complete lack of it).
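
The "range checking" covered in that sort of training is nothing exotic; a minimal sketch in C (the buffer size and helper name are invented here for illustration, not taken from the poster's product) looks like this:

    /* Illustrative only: copy an untrusted string into a fixed buffer
     * without overflowing it. */
    #include <stdio.h>
    #include <string.h>

    #define NAME_BUF_SIZE 32

    /* Returns 0 on success, -1 if the input would not fit. */
    static int copy_name(char dst[NAME_BUF_SIZE], const char *untrusted)
    {
        size_t len = strlen(untrusted);
        if (len >= NAME_BUF_SIZE)          /* range check before touching the buffer */
            return -1;
        memcpy(dst, untrusted, len + 1);   /* +1 copies the terminating '\0' */
        return 0;
    }

    int main(void)
    {
        char name[NAME_BUF_SIZE];
        if (copy_name(name, "an ordinary short input") == 0)
            printf("accepted: %s\n", name);
        if (copy_name(name, "an attacker-supplied string long enough to smash the buffer") != 0)
            printf("rejected oversized input\n");
        return 0;
    }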

There were so many use cases for our product that hadn't been modeled. Strange combinations of end-user interaction had the potential to execute code on our appliance. Surprisingly, our QA team's manual regression testing (clicking around our U.I.) never caught these issues, but did catch many typos.

I don't believe security problems are inevitable. I've been writing code for years and mine never has these problems (arrogant, but mostly true). I can say, with certainty, that any given minor-version release has had 1,000s of high-quality tests performed and verified. I use the computer, not people... so there's hardly any "cost" to do so repeatedly.

I put my code through its paces. I'm cautious whenever I access RAM directly. My permissions engines are always centralized and the most thoroughly tested. I use malformed data to ensure that my code can always gracefully handle garbage. I model my use cases. I profile my memory usage. I write automated test suites to get as close to 100% code coverage as possible. I don't stop there. I test each decision branch for every bit of functionality that modifies state.
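
As a minimal sketch of that malformed-data testing (parse_record() here is a made-up stand-in, not anything from the poster's product), one loop of random garbage plus an assertion already catches a lot:

    /* Feed arbitrary bytes to the routine under test and insist it either
     * accepts or rejects them cleanly -- never crashes. Illustrative only. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy parser under test: expects "KEY=VALUE", rejects everything else. */
    static int parse_record(const uint8_t *buf, size_t len)
    {
        if (len == 0 || len > 256 || memchr(buf, '=', len) == NULL)
            return -1;                          /* reject, but never crash */
        return 0;
    }

    int main(void)
    {
        srand(1);                               /* fixed seed: repeatable run */
        for (int i = 0; i < 100000; i++) {
            uint8_t buf[300];
            size_t len = (size_t)(rand() % (int)sizeof buf);
            for (size_t j = 0; j < len; j++)
                buf[j] = (uint8_t)(rand() % 256);   /* arbitrary bytes, not text */
            int rc = parse_record(buf, len);
            assert(rc == 0 || rc == -1);        /* graceful accept or reject */
        }
        printf("100000 malformed inputs handled without a crash\n");
        return 0;
    }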

Aside from my debug-time assertions, I use exception handling quite liberally. It helps keep my code from doing exceptional things. Buffer overflows are never a problem, because I assume that all incoming data from ANY source should be treated as if it were pure Ebola virus. I assume nothing, even if the protocol or file header claims a certain type.
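
A minimal sketch of that "don't trust the header" habit, using an invented wire format (the one-byte type / two-byte length layout and the MAX_PAYLOAD limit are assumptions made up for this example):

    /* Validate a length field from an untrusted header before using it.
     * Format assumed here: [1-byte type][2-byte big-endian length][payload]. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_PAYLOAD 1024

    /* Returns the message type on success, -1 if the header is malformed. */
    static int read_message(const uint8_t *pkt, size_t pkt_len,
                            uint8_t payload[MAX_PAYLOAD], size_t *payload_len)
    {
        if (pkt_len < 3)
            return -1;                          /* header incomplete */
        uint16_t claimed = (uint16_t)((pkt[1] << 8) | pkt[2]);
        if (claimed > MAX_PAYLOAD || claimed > pkt_len - 3)
            return -1;                          /* header lies about its size */
        memcpy(payload, pkt + 3, claimed);
        *payload_len = claimed;
        return pkt[0];                          /* message type */
    }

    int main(void)
    {
        /* A packet whose header claims 65535 payload bytes it doesn't carry. */
        uint8_t evil[] = { 0x01, 0xFF, 0xFF, 'x' };
        uint8_t payload[MAX_PAYLOAD];
        size_t n;
        printf("evil packet -> %d\n", read_message(evil, sizeof evil, payload, &n));
        return 0;
    }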

Security problems exist because bad coders exist. If you code and you know your art form well, you don't build code that works in unintended ways. Proper planning, good design, code reviews, and disciplined testing is all you need.

That's retarded (1)

JustNiz (692889) | more than 7 years ago | (#19119863)

That approach allows hackers to exploit known-about but unreported (and therefore unfixed) loopholes potentially for ages.

Please give me the name of this guy's company so I can avoid all their products.

Then there's the ethics and responsibility args (2, Informative)

postbigbang (761081) | more than 7 years ago | (#19119923)

They say: if you know about it, you're obliged to fix it. And then you kick your QA department's butts around the corridor several times. If your customers are your software testers, then your business model is likely corrupt. And while there are a number of coders who will complain that it was the libs, or the other guy's fault, ultimately a responsible organization takes ownership of its faults, just like humans should.

Pitiful (1)

Stumbles (602007) | more than 7 years ago | (#19119939)

Not patching security flaws, no matter their "level of criticality", is like standing in the middle of some lightly used railroad tracks. Sure, you might stand there for a long time before a train comes along, but when it does, you're toast.

Maybe if they spent more time ... (1)

voislav98 (1004117) | more than 7 years ago | (#19120061)

... fixing their bugs instead of studying whether to fix them, they would have a better product. It reminds me of a joke: http://www.infojokes.com/index.php/archives/10184 [infojokes.com]

Litmus Test (2, Insightful)

Fnord666 (889225) | more than 7 years ago | (#19120089)

Are you willing to indemnify your users for any and all losses suffered by them due to a flaw/bug which you knew about but chose not to patch? If not, then patch it!

Fixing holes (2, Insightful)

kupekhaize (220804) | more than 7 years ago | (#19120143)

The problem is that if you've discovered a security hole, chances are someone else has as well. Just because a problem hasn't been reported to your company doesn't mean that it is unknown.

History shows that there are lots of black hats who will sit on security breaches/exploits/bugs/etc. and exploit them for their own ends rather than reporting them to the company. Breaches in security should be patched as soon as they are discovered. If one person found the bug/hole/exploit/whatever, that means another person can find it as well. And nothing says either person has to report it to the vendor once it's found.

Re:Fixing holes (1)

fozzmeister (160968) | more than 7 years ago | (#19120311)

It's a whole different thing to find a hole with the source code than without it.

But I'm not sure about the practice: what happens when a disgruntled employee leaves? Did he have access to that information?

Yes (2, Interesting)

madsheep (984404) | more than 7 years ago | (#19120283)

Yes, yes they should patch them all. Personally, it'd eat away at me knowing I could spend a few minutes, hours, or days to fix a vulnerability in my software. I don't think I could take pride in what I do if I just left crap like this around because I don't have to fix it and don't think it's important unless someone finds it publicly. I'm glad they fix the HIGHs (however they rate this... who knows?) and the publicly disclosed ones. But why not fix the small ones as you find them? It's a little bit of embarrassment every time an issue is found; this is one less piece of embarrassment. However, maybe it's the quasi-perfectionist in me, but I couldn't imagine not fixing this stuff.

There always is a question (1)

KZigurs (638781) | more than 7 years ago | (#19120383)

What exactly is a security hole and what exactly is a feature?

The simple truth is that for any software with a sufficient number of users, there will be users who manage to find even the strangest "features" and often exploit them to their advantage (as in, it actually benefits them).
As long as you are facing the unknown (users who might actually think that any particular issue is a feature and use it) versus the known ("yes, there seems to be a problem with IP parsing when using nonstandard syntax, but nobody has complained"), those two have to be weighed quite carefully.

the OpenBSD approach (1)

DaMattster (977781) | more than 7 years ago | (#19120397)

They should absolutely patch bugs when discovered, regardless of classified severity. They should take the OpenBSD approach and regularly and aggressively audit their code. I think customers that have paid good money for a product deserve one as bug-free as possible. OpenBSD is an OS that you get for free that has far fewer flaws and is proactive about letting its users know when a bug does crop up. If a free/open source project with an even tighter budget can do that, there is absolutely no reason a commercial vendor cannot do the same.