
Responsible Disclosure — 16 Opinions

kdawson posted more than 7 years ago | from the one-man's-meat dept.

An anonymous reader writes, "Disclosure. Just a word, but in the security field it is the root of progress, sharing knowledge and getting bugs fixed. SecurityFocus published an interesting collection of quotes about the best disclosure processes. The article features 11 big vendors, 2 buyers of vulnerabilities, and 3 independent researchers. What emerges is a subtle picture of the way vendors and researchers differ over how much elapsed time constitutes 'responsible.' Whereas vendors ask for unlimited patience, independent researchers look for a real commitment to develop a patch in a short time. Nice read." Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

87 comments

No quote from Cisco (1)

MECC (8478) | more than 7 years ago | (#16103808)

What - no quote from Cisco?

Re:No quote from Cisco (0)

Anonymous Coward | more than 7 years ago | (#16103845)

None from Cisco or Cognos; I guess that means they are never planning to fix the issues.

The Invisible Hand of 'Responsible Disclosure' (0)

Anonymous Coward | more than 7 years ago | (#16107915)

I'd read this article when it first came out and responded with a blog posting [spidynamics.com], discussing the top 10 failures of responsible disclosure.

Why shouldn't I kill myself? (0)

Anonymous Coward | more than 7 years ago | (#16103818)

I see no good reason not to.

Re:Why shouldn't I kill myself? (0)

Anonymous Coward | more than 7 years ago | (#16103909)

Reasons not to kill yourself [stardrift.net]. There, feel better?

Re:Why shouldn't I kill myself? (0)

Anonymous Coward | more than 7 years ago | (#16104033)

That cutesy list makes me want to kill myself. ;)

But seriously, those points are unconvincing or just plain inaccurate.

I don't see why I will get better. I don't see why I'm "critical."

The best disclosure is full (0)

Anonymous Coward | more than 7 years ago | (#16103848)

SecurityFocus published an interesting collection of quotes about the best disclosure processes.

Full Disclosure is the only way to go. It's worked brilliantly for Jason Fortuny [livejournal.com] , my new Hero!

No entry? (1)

Corbets (169101) | more than 7 years ago | (#16103869)

Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

Well, after tens of thousands of Slashdot nerds read this, I'm sure that'll change in a few minutes. :)

Re:No entry? (1)

jc42 (318812) | more than 7 years ago | (#16105323)

Look at the page's history. It took about 7 minutes for someone to create it. Of course, it's mostly a redirect to the Full_disclosure page. That may change if a bit of discussion shows that it's worthwhile to separate the topics.

Wikipedia (3, Funny)

adavies42 (746183) | more than 7 years ago | (#16103885)

Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."

It does now.

Re:Wikipedia (5, Interesting)

Anonymous Coward | more than 7 years ago | (#16104653)

Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
Seems like the one should just be a redirect to the other. Or, better yet, maybe "responsible disclosure" should redirect to "anonymous disclosure".

I attempted to get [multi-billion dollar company] to fix a gaping security hole that was well-known to persons using [their product] which has support & licensing fees upwards of $300,000 per year. Their response was to tell my management that I was a loose cannon and should be fired (luckily my management told them to get stuffed, but the hole still wasn't fixed).

So I sent [multi-billion dollar company] an email from my infant daughter's email account (yes, I create accounts for my kids when they are born, shut up) informing them that the details of their security problem would be published on the bugtraq mailing list in two weeks, and attached a copy of what would be posted.

In less than 48 hours, I was contacted at the "postmaster" address for the email domain by [multi-billion dollar company] who informed me that we (the domain's registered in the name of a friend of mine, so there's no visible connection to me) were harboring an evil criminal hacker at [email address of my daughter] and that I needed to give them personal information about that user. I replied "oh, gee, thanks, that account belongs to a two-year old child, somebody must have hacked it, we are shutting that account off now, have a nice day".

Three days later all customers of [multi-billion dollar company] got an urgent update that corrected the security flaw in [their product]. I never did post to bugtraq, because the point of the exercise was to get [multi-billion dollar company] to do what was best for both them and their customers, and that goal was achieved. I couldn't have made the threat, though, without the existence of anonymous full disclosure listservs.

Re:Wikipedia (1)

adavies42 (746183) | more than 7 years ago | (#16104839)

Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
Seems like the one should just be a redirect to the other.
Why yes, that's exactly what I did when I created the "responsible disclosure" article.

Information was okay... (2, Interesting)

HatchedEggs (1002127) | more than 7 years ago | (#16103890)

It really wasn't that in-depth, but the statements by each of those companies were interesting. The one that impressed me the most was SAP, where the vendor and the researcher agree on the action to be taken at the time of disclosure.

I do think the ethical approach is certainly to approach the vendor first. Inform them that they have a given time to patch it, then hold them to it and release the information at the end of that time.

If it involves Microsoft.... (2, Funny)

Tanuki64 (989726) | more than 7 years ago | (#16103907)

...the decision is easy. Publish the bugs after five days. This should be enough. They proved they can deliver patches after three days.

Re:If it involves Microsoft.... (0)

Anonymous Coward | more than 7 years ago | (#16104010)

...the decision is easy. Publish the bugs after five days. This should be enough. They proved they can deliver patches after three days.

Five days for the vendor to sit on their hands, while the bad guys "pwn" your system. Five days where the user can't do anything, not disable the service, not even unplug the network cable, because they don't even know about the exploit.

Irresponsible IMHO. Get the information in the hands of the users, so that they have a chance to protect their systems, instead of giving the vendor 5 days to look good. Security above image, please.

Of course the vendors have the opposite view. They want to protect their image, and don't care how many users get attacked while they are writing the patch.

Re:If it involves Microsoft.... (2, Insightful)

MankyD (567984) | more than 7 years ago | (#16104147)

Get the information in the hands of the users, so that they have a chance to protect their systems, instead of giving the vendor 5 days to look good.
Get the information in the hands of the users? What on earth are my parents going to do with information about a buffer overflow exploit?

Maybe you mean "Get the information in the hands of people who can fix the problem." That (with regards to the grand parent post) would be Microsoft (or whatever vendor we're talking about.) My parents are never going to find some 3rd party site to download a patch from in less than 5 days.

And what 3rd party are you going to trust?

Re:If it involves Microsoft.... (1)

honkycat (249849) | more than 7 years ago | (#16104998)

Get the information in the hands of the users? What on earth are my parents going to do with information about a buffer overflow exploit?
Stop using the software until it's fixed? Ask you, their technically inclined child, what to do to avoid being hacked? Any number of things to ensure they don't become victims while idly waiting for the vendor to solve the problem.

But really, your parents aren't going to be helped much because, as you say, they aren't going to patch very often anyway. However, there are plenty of users who ARE technically sophisticated and would be able to take advanced steps to mitigate risk due to a security hole in one piece of software.

5 days with MS?! (1)

MachineShedFred (621896) | more than 7 years ago | (#16104284)

Are you sure you'd want to install anything that comes out of Microsoft with all of 5 days of coding, testing, QA, regression testing, validation, etc.?

I know I wouldn't. Give 'em 30 days at least.

Re:5 days with MS?! (2, Funny)

Opportunist (166417) | more than 7 years ago | (#16104425)

Well, I certainly wouldn't want to install anything that comes out of MS after only 5 days. History tells us that the consumer-related bugfixes take at least 30 days; only industry-benefitting fixes come out overnight.

Re:5 days with MS?! (2, Interesting)

Kirth (183) | more than 7 years ago | (#16104647)

Give 'em 30 days at least.

Why? MS has proven it can fix a hole which allows reading of its DRM'd content in 3 days.
http://www.schneier.com/blog/archives/2006/09/microsoft_and_f.html [schneier.com]

Re:5 days with MS?! (0)

Anonymous Coward | more than 7 years ago | (#16105237)

If you read the /. article, it was 9 days.

Definitely worth the read... (2, Interesting)

tygerstripes (832644) | more than 7 years ago | (#16104032)

I'm not particularly interested in exploits and such per se, but I found the article fascinating anyway. Sure, some of what they said was interesting - especially the researchers - but the most interesting thing was the tone of the vendors' statements.

Seriously, have a look. If you're at all used to reading between the lines, their statements regarding security, disclosure, etc. give you a far greater insight into their real attitudes than any marketing, reviews or horror stories ever could.

Re:Definitely worth the read... (1)

nolife (233813) | more than 7 years ago | (#16104163)

Well, you are really reading a response from one person. In theory that person was hired by the company, represents it, and is delivering the company line. In reality, that one person is still an individual. The people behind that person may not share the same attitude. I never assume one person can represent an entire group, even if they are paid to do so.

Why there is no entry for 'responsible disclosure' (3, Insightful)

John Fulmer (5840) | more than 7 years ago | (#16104035)

Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."


It may be because 'full disclosure' has meaning in the security community, while 'responsible disclosure' does not.

All 'responsible disclosure' is is a set of general ethics and courtesies that security researchers extend to programmers/companies/entities in order to allow an orderly repair of a vulnerability. It is a function of 'full disclosure', not something in and of itself.

Slightly related: I've read things that liken 'full disclosure' to yelling "Fire!" in a crowded theater. I tend to think of it as yelling "Fire!" in a theater made of flash paper doused in gasoline, while one of the jugglers is preparing to light his flaming torches.

In other words, yelling 'FIRE!' is permissible if there is actually a high likelihood of fire...

Re:Why there is no entry for 'responsible disclosu (1, Flamebait)

bunratty (545641) | more than 7 years ago | (#16104124)

If there is really a fire, or a likelihood of a fire, you should inform the management so they can make an announcement that doesn't set off panic, which could lead to people being trampled to death.

In the case of security announcements, publicly disclosing a vulnerability before the vendor has been given time to get a patch out actually can cause a fire, because disclosing the vulnerability also allows anyone to create an exploit for the vulnerability.

In essence, full disclosure isn't as bad as shouting "Fire!" in a crowded theater. It's like being in a theater made of flash paper doused in gasoline, and then giving an arsonist a match.

Re:Why there is no entry for 'responsible disclosu (3, Insightful)

QuantumG (50515) | more than 7 years ago | (#16104281)

Sorry, no, that's bullshit. If you wanna make stupid analogies, at least get them right. Calling "Fire!" in a crowded theatre is absolutely, perfectly OK, if there is a fire. However, if you know there is a fire and know that people will, sooner or later, get burnt, then strolling to the front office, asking to talk to the manager, telling him there is a fire, and having him say "Yeah, we'll get to that in about 120 days, on average" is not ethical. It's not responsible. It's participating in a conspiracy that belittles the people in the theatre and hampers their ability to make a valid risk assessment.

And everyone will be dead (1)

Tony (765) | more than 7 years ago | (#16104294)

If there is really a fire, or a likelihood of a fire, you should inform the management so they can make an announcement that doesn't set off panic, which could lead to people being trampled to death.

Do that, and everyone in the theater is likely to die.

By the time you get to management and inform them (and, in many cases, convince them there really is a fire), and they can get to their PA system, the fire will probably have spread throughout the entire theater. In case of a fire, time is extremely important.

Same goes for exploit disclosure. If an exploit is found, it might be okay to keep it quiet for a little while, as there is a high probability of a fire rather than an actual fire. But the longer you wait, the more likely somebody else a little less honorable will also find the exploit.

If my operating system is vulnerable, I have a right to know it. I have paid for the OS in good faith (at least, if I use MS-Windows or Sun Solaris or Mac OSX). Like a recall on bad tires, I have the right to know my purchase is defective, and the manufacturer has a responsibility to let me know, and offer to fix it.

They *do not* have the right to allow me to remain ignorant of the flaws in their system. Period.

Re:And everyone will be dead (1)

bunratty (545641) | more than 7 years ago | (#16104407)

Same goes for exploit disclosure. If an exploit is found, it might be okay to keep it quiet for a little while, as there is a high probability of a fire rather than an actual fire. But the longer you wait, the more likely somebody else a little less honorable will also find the exploit.
I completely agree. Immediate disclosure does nothing but help the bad guys. Staying quiet about it too long helps the bad guys, too. The only question is, what is the proper amount of time to wait after a vulnerability is discovered before you fully disclose information about it? Days? Weeks?

You have to use your judgement. (1)

Medievalist (16032) | more than 7 years ago | (#16106964)

Immediate disclosure does nothing but help the bad guys. Staying quiet about it too long helps the bad guys, too. The only question is, what is the proper amount of time to wait[?]
It varies depending on the complexity of the problem that requires fixing, and the resources available for the fix. Here's a couple of concrete examples:

You found a bug in Sendmail. You contact Eric Allman, you ask him how long it will take him to get his downstreams to push fixes through the distribution channels, and you agree to withhold the sploit until he says everybody's ready; he gives you credit for finding the problem and publicly thanks you for your co-operation and for helping make the internet a better place.

You found a bug in a CA product. You use a library computer and an anonymous remailer to hit every CA address that seems remotely applicable, telling them the details of the sploit, and that you will release in two weeks. Two weeks later, you fully disclose (again, as anonymously as possible) so that CA's customers will know they need to patch (after all, CA is unlikely to aggressively publicize their mistake).

Why the difference? Sendmail Inc. and Eric Allman have proved that they won't victimize or prosecute people who help them secure their code, and that they will do everything possible to minimize damage while providing full credit to security researchers. Many open-source projects can be counted on to act this way (I'm sure others will post examples) despite operating on tiny or non-existent budgets. In contrast, most gigantic zaibatsus with enormous programming resources simply will not fix bugs if given any other choice - and discrediting or victimizing you might seem like a viable choice to some corporate pinhead who really doesn't care about anything but profits for the quarter.

You might vary that "two weeks" figure based on how big a vendor's programming staff is, or how complex the problem is, or how serious the vendor is in regards to security, but in general you need to set a hard deadline and stick to it. It's the responsible thing to do.

Responsible Disclosure == hiding vulnerabilities (4, Interesting)

QuantumG (50515) | more than 7 years ago | (#16104056)

The responsible vendor takes time to vet the problem within their own lab. They have to develop a patch, [they] Quality Control it and then publish the patch. Microsoft and Oracle average about 120 days to do this.

So, in order to be "responsible" you have to keep the vulnerability secret for 120 days. Four months. You're kidding, right? Say I'm an independent researcher. I find this vulnerability using no special skills and publicly available tools. Clearly a highly skilled blackhat could just as likely have found the same vulnerability as me. Let's suppose that I've found this vulnerability in the first 2 days of a new release of the product under inspection. The blackhat could well have discovered it in the same number of days, but let's say it takes him a month longer than me, just to be generous. I'm supposed to sit on this vulnerability and let the blackhat break into systems using it for how long? 3 months? This is responsible? Wouldn't it be more responsible if I were to go public immediately? Obviously publishing tools which script kiddies can use to attack people is not a good idea; that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security. Doesn't my failure to do this just make me complicit in a conspiracy to hide the fact that people may be breaking into systems using this vulnerability?

What if I'm an IDS manufacturer? I start getting alarms that shell code has been detected in a protocol stream that has never before seen shell code in it. Analysing the incident, I discover that there is a vulnerability in a particular daemon which these attackers are using to gain unauthorised access. Who should I inform? The vendor of that daemon? My customers? Or the general public? This is no longer a theoretical "the bad guys might know too" situation; this is a widespread pattern of attack that I have detected, indicating that real harm is being done. If I fail to inform the public immediately, am I not complicit in them breaking into more computers? Doesn't sound very responsible to me.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

bunratty (545641) | more than 7 years ago | (#16104276)

On the other hand, if a security update is only days away, full disclosure of the vulnerability won't get a fix in the next update. It takes time to write and test that a fix doesn't introduce new bugs. Additionally, what if the black hats haven't found the vulnerability yet? By announcing the vulnerability immediately after discovery, you haven't helped get the fix out sooner, and worse, you've made it more likely for an exploit to be developed.

Announcing a vulnerability immediately doesn't seem responsible, because it could likely do only harm. Why not give the vendor until the next security update, or until the security update after that if the next update is scheduled within two weeks, to write, test, and deploy the fix?

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16104333)

Two weeks? OK, fine, take your time and test it, 'cause maybe no-one is being broken into right now. Four months? No-one should be exposed for that long. It's guaranteed to be found and exploited. And what if we know people are exploiting this vulnerability right now? Does that change our response? It should. There should be a hot fix, or at least an advisory to disable the service (or the relevant portions), put out immediately. And how about releasing a signature to all those people running intrusion detection systems or protocol-level firewalls? Shouldn't that be done immediately in any case?

Re:Responsible Disclosure == hiding vulnerabilitie (1)

bunratty (545641) | more than 7 years ago | (#16104566)

I agree four months seems like an excessively long time to stay quiet about a vulnerability, especially a serious one. Serious vulnerabilities should be fixed in the next security patch, unless the next one is too close for adequate testing.

In the case of a vulnerability that is being exploited, I agree that immediate action is necessary. However, with full disclosure there's always the possibility that some, or even most, black hats don't know about the vulnerability. In that case, again, you've just made exploits more likely.

Doesn't it seem more responsible to disclose only what's necessary, such as that a hot fix is available for a security problem, or that disabling a certain service on a certain OS will prevent your system being exploited? That's responsible disclosure, which seems more responsible than full disclosure to me.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16104601)

Yes, absolutely. But that's not the definition of "responsible disclosure" that the vendor would advocate. Because even the hint of a vulnerability without a patch available is bad for business.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

bunratty (545641) | more than 7 years ago | (#16104663)

It's not what some vendors would advocate, but it's what is currently listed in Wikipedia as the description for responsible disclosure [wikipedia.org]:
Some believe that in the absence of any public exploits for the problem, full and public disclosure should be preceded by disclosure of the vulnerability to the vendors or authors of the system. This private advance disclosure allows the vendor time to produce a fix or workaround. This philosophy is sometimes called "responsible disclosure".

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16104777)

I read that paragraph as saying that no public disclosure of the issue before disclosing to the vendor is acceptable, no matter how vague.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

monkeydo (173558) | more than 7 years ago | (#16105228)

You ASSume that the problem is not being solved in a reasonable timeframe, and that public disclosure will accelerate the fix. What if that isn't the case? There are many exploits that are known to the real blackhats, which they keep to themselves and don't share. If a security researcher discovers one of these (he may even know that it is in the wild), should he immediately tell the world? Then a vulnerability that was once exploited only by a small group is now exploitable by every 1337 cr4ck3r and script kiddie in the world.
Even if the vendor takes 4 months to patch the vulnerability when you think it should take 4 weeks, do you make the public more or less safe by your disclosure?

Re:Responsible Disclosure == hiding vulnerabilitie (2, Informative)

Todd Knarr (15451) | more than 7 years ago | (#16105639)

From a study reported on in the WSJ back in January [washingtonpost.com] , and elaborated on later [washingtonpost.com] , Microsoft's time to patch vulnerabilities they classify as "critical" has risen 25% since 2003, to 134 days. Except, however, in the case of full-disclosure vulnerabilities, where details and almost always proof-of-concept code were released to the general public. For those vulnerabilities, the time to fix fell from 71 days in 2003 to 46 days in 2005. Based on the data, full disclosure does in fact accelerate the fix and the problems aren't being addressed in a reasonable timeframe without it (4 months for a self-classified critical vulnerability isn't particularly timely).

Re:Responsible Disclosure == hiding vulnerabilitie (1)

monkeydo (173558) | more than 7 years ago | (#16120211)

Without knowing more about the quality of those fixes, your statistics are worse than meaningless; they are potentially misleading.

It's not four months .. it's four MORE Months... (0)

Anonymous Coward | more than 7 years ago | (#16106388)

Unless the vulnerability you have discovered was JUST introduced in a patch or update, the vulnerability has been sitting there undiscovered for some unknown period of time (depending on the system, it could have been there for years).

Re:Responsible Disclosure == hiding vulnerabilitie (1)

GGardner (97375) | more than 7 years ago | (#16104311)

Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security.

But if the bad guys haven't found the problem at this point, they surely will after this kind of announcement. Moreover, changing running production software can be very difficult. In this age of huge software systems composed of many pieces, you may not even know which components are running under the hood. For example, the recent OpenSSL security advisory -- can you name all the executables in your network that run OpenSSL? Can you upgrade them all? Have you?
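
For what it's worth, on a Linux box you can get a rough first cut from /proc. This is a sketch, not an audit: it only sees live processes, misses statically linked binaries, and assumes the library file is named libssl-something:

<ecode>
import os

# Walk /proc and list the processes that have an OpenSSL shared object mapped.
def processes_using_openssl():
    hits = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/maps" % pid) as maps:
                if any("libssl" in line for line in maps):
                    hits[int(pid)] = os.readlink("/proc/%s/exe" % pid)
        except (IOError, OSError):
            continue  # process exited, or we lack the privileges to look
    return hits

for pid, exe in sorted(processes_using_openssl().items()):
    print(pid, exe)
</ecode>

And even that doesn't answer "have you upgraded them all?": a long-running process keeps the old, vulnerable library mapped until it's restarted, no matter what the package manager says.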

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16104373)

Are you trying to suggest that we shouldn't provide the right information to people who can effectively manage their risk, because some people are incapable of doing so? If an independent security analyst can find a vulnerability with no special tools or knowledge, then it is equally likely that a blackhat has found it. In fact, it's a lot more likely, as we suspect there are a hell of a lot more "bad guys" than there are "good guys".

Re:Responsible Disclosure == hiding vulnerabilitie (1)

Aladrin (926209) | more than 7 years ago | (#16104360)

There's the opposite side of that story, you know.

You happen upon an easy vulnerability. A blackhat finds it in a month. You stay quiet for 4 months. Patch comes after a full year from when you find it. A single blackhat has used it for a year.

You happen upon an easy vulnerability. You announce it to the public. Every half-assed blackhat in the world finds it and uses it for a full year before the patch comes out.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16104468)

If you announce it to the public and it still takes a year to patch then those people who are using the product in question should not be using that product. From a security point of view they are a lost cause. That said, it really depends how you announce it. If you say "I have found a serious vulnerability in product X, I recommend people don't use it until a patch for this issue has been released" then you have in no way reduced the time it will take a blackhat to find the vulnerability and produce an exploit for it.. and even when he does, he is unlikely to hand out the exploit as it will reduce the amount of time he has to utilize the exploit (as widespread use will put pressure on the vendor to release a patch faster, as customers flee from their product). Obviously if you say "I have found a serious vulnerability in product X, at this file:line or at this address in the binary" that will shorten the time that it takes a blackhat to find the vulnerability and it will also encourage more blackhats to look, and as such weaken their resolve not to distribute the exploit, so don't do that. Similarly, don't give out exploit tools yourself, even if they just "demonstrate" that there is a vulnerability and need to be modified to be used as hacking tools.

But doing nothing is worse and contacting only the vendor, who will take 120 days to release a patch, just as bad.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

Aladrin (926209) | more than 7 years ago | (#16105055)

If you just say 'I found a vulnerability in Product X', nobody will take you seriously. A few will ask 'where?', which you can't answer under your own example. Even those few will then ignore you without any proof of the problem.

If you say 'I found a vulnerability in Product X when you do Y', even without any details, the blackhats already know where to look and the kind of things to look for. For example, if you tell me that a program has a vulnerability related to images, I'm immediately going to think about the different types of buffer overflows in images. There are quite a few, but you have narrowed the search from the entire app to just the portions that deal directly with images.
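
To make that concrete, the usual shape of an image-parsing bug is a length or size field that the parser trusts. A toy sketch, with a made-up header format rather than any real codec:

<ecode>
import struct

def parse_image(data):
    # Toy header: 4-byte little-endian pixel-data length, then the pixels.
    (claimed,) = struct.unpack_from("<I", data, 0)
    # The bug class: trusting 'claimed'. In C, copying that many bytes into
    # a fixed-size buffer is the textbook overflow. The fix is one check:
    if claimed > len(data) - 4:
        raise ValueError("header claims more data than was supplied")
    return data[4:4 + claimed]
</ecode>

Tell an attacker "it's in the image path" and these length fields are exactly what he'll start fuzzing.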

This is what happened with the Apple wireless stuff recently. We were told 'it has to do with wireless, and you have to have it set to accept any connection available.' Oh man, that's almost POINTING at the problem. Unsurprisingly, black hats figured it out in time for their conference a couple weeks later and did a presentation on it.

And yes, it goes without saying that if you don't tell the vendor, you've done wrong.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16105131)

Meh. If people ignore your vague warning and then get burned, they'll listen to you next time. If they don't get burned then no problem.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

Aladrin (926209) | more than 7 years ago | (#16105261)

If my name was RMS or Linus, I could expect that, yes. But in the world of IT, I'm nobody. Anyone who heeds just anyone who screams is crazy. You have to have a way to filter out the liars, cheats, scammers, etc. And lack of proof is pretty compelling.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16105433)

Well, obviously you have to build a reputation for yourself, but if you never proclaim anything then no-one will ever believe any of your warnings.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

Qzukk (229616) | more than 7 years ago | (#16105059)

Perhaps, then, the most responsible thing to do is to provide a proof of concept to the software company to prove that your bug is serious, and then post publicly: "I have found a bug in foobar. By restricting network access to the whammy service to trusted systems, you can mitigate the risk of attack. A proof of concept has been provided to the company but will not be made available to the public."

Re:Responsible Disclosure == hiding vulnerabilitie (1)

QuantumG (50515) | more than 7 years ago | (#16105180)

Sounds responsible to me.

Re:Responsible Disclosure == hiding vulnerabilitie (0)

Anonymous Coward | more than 7 years ago | (#16104421)

The bad guys might know too????????? So????? Maybe the bad guys know stuff I don't....
Unless there is a secondary reason to disclose, 120 days for a patch to a complicated product is not that bad...... (Sure, OSS will do it much quicker; that's why there is no proprietary software left these days.....)

Obviously your experience in the field is limited to "I'm a geek so I know these things!!!!!!!!!!!!!!!!!"......

Get some real life experience where you have to make real life decisions that have real life consequences (yeah yeah, you'll claim years of experience at a company; sure, I don't buy it)...

Then go back and post again.....

Talk to you in 15 years dude...

Re:Responsible Disclosure == hiding vulnerabilitie (1)

cthulhuology (746986) | more than 7 years ago | (#16104678)

Amen, amen. On top of that, just think of all the poor free software developers who can't sit on the information for 120 days and have to fix it ASAP, because the only way to disclose the bug is to post to a ::gasp:: public mailing list! Yes, it sucks that people don't patch their systems; yes, it is terrible that people can use publicly disclosed information to attack systems; but this increased level of secrecy only gives you the illusion of security. I'm probably not the only person on /. who has had servers hacked with an exploit during the grace period of some low-level exploit, and as long as people believe this nonsense about grace periods, more people will be in the same boat with me. At least with full disclosure there is a clear and immediate danger, where every individual has an equal opportunity to mitigate the problem: shutting off unnecessary services, migrating to alternatives, etc. And of course that is why vendors hate the idea: you might switch vendors!

Re:Responsible Disclosure == hiding vulnerabilitie (0)

Anonymous Coward | more than 7 years ago | (#16105037)

Yeah, but I don't care about your server. If I needed to care about it then you'd be able to afford the information and people needed to keep it secure.

On the other hand, the servers I do care about belong to companies that can afford to pay for that information. And they get a jump start on the Black Hats.

No! (0)

Anonymous Coward | more than 7 years ago | (#16104752)

Clearly, a highly skilled blackhat could have found the same vulnerability as you. But the moment you go public, a highly skilled blackhat will have 'found' the same vulnerability as you.

Sure, it could take a while for the vulnerability to be fixed. But shouting publicly about it isn't necessarily going to get it fixed any quicker. That is the nature of software development - some fixes are non-trivial and need extensive engineering and testing. And if you honestly believe that companies can stop using vulnerable software then you know jack about the way the World works. Name me a database without any vulnerabilities, and I'll gladly laugh at you. Want to see how well the World gets on without databases?

I think that you misunderstood, though. The vulnerability isn't disclosed loudly and publicly, but is disclosed to customers willing to pay for it. You might want to note that FSISAC (www.fsisac.com) uses critical alerts from iDefense to inform its customers (the banks that look after your hard-earned cash) about critical vulnerabilities. So your bank has a chance to mitigate critical vulnerabilities before the black hats are told of their existence and start trying to hack your account details. I wouldn't like to speculate who else may be an iDefense customer, but I bet they've got budgets.

You might also want to note the disclosure policy at the bottom of http://www.idefense.com/legal.php [idefense.com]. If the vendor is non-responsive then iDefense will go public with an advisory.

As to your IDS conundrum...
Inform the vendor of the daemon and hope that they fix it. Inform iDefense, get paid, and know that every attempt will be made to get the vendor to fix it whilst organisations that most need to know about it are informed. Go public and tell the black hats too.

Your choice.

One more data point of note - Even after a vendor releases a fix, systems don't patch themselves. Black hats have been exploiting this fact of life. The recent Microsoft MS06-040 vulnerability saw more trojans released immediately after the fix was made available than before. That tells me that black hats really do respond to public disclosure.

Responsible Disclosure: I work for VeriSign. My opinions are my own and not those of VeriSign.

Re:No! (1)

QuantumG (50515) | more than 7 years ago | (#16105683)

shouting publicly about it isn't necessarily going to get it fixed any quicker.

History shows you're wrong.

Re:No! (0)

Anonymous Coward | more than 7 years ago | (#16106383)

No. I was right.

I only have to find one instance of a vulnerability that somebody publicly disclosed that didn't get fixed for my statement to be true. On the other hand, you have to prove that every vulnerability that ever got publicly disclosed got fixed faster than it would have if it hadn't been disclosed, for my statement to be untrue.

History shows that Logic wins.

Re:No! (1)

QuantumG (50515) | more than 7 years ago | (#16106487)

History shows that Logic wins.

What history is that? Mathematics history? I suppose you think the better product gets the most market share too.

The fact of the matter is that corporations will fix zero security bugs if they can get away with it. If customers are stupid enough to keep coming back, they'll do nothing to improve their product.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#16105056)

Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security.

I don't think a hard and fast 120-day rule makes sense; I think a researcher should look at the characteristics of the vulnerability before deciding what is responsible, as well as the motivational effect on the software vendor. Is the vulnerability likely to have been discovered by blackhats? Is it something that can be mitigated with firewalls, ACLs, or by disabling non-essential services if people know about it? Are there realistic alternatives people can employ to avoid it (download Firefox)? Will announcing the vulnerability make it obvious to blackhats, or is it hard to find even if you are looking? Will announcing it motivate the vendor to patch more quickly?

There is a lot to consider for any individual case.

What if I'm an IDS manufacturer? I start getting alarms that shell code has been detected in a protocol stream that has never before seen shell code in it. Analysing the incident I discover that there is a vulnerability in a particular daemon which these attackers are using to gain unauthorised access. Who should I inform? The vendor of that daemon? My customers? Or the general public?

This is different from discovering a vulnerability. You've discovered an in-the-wild exploit. It tips the balance strongly towards full disclosure. If there is only one vendor for that type of product, no workaround, and the product is critical to those using it, it might be reasonable to just inform the vendor, if you think they will fix it as fast as possible. Otherwise, full disclosure will almost always be the best option.

I think what you allude to is simply that there is no one rule for what you should always do. There are a lot of factors, and each researcher has to evaluate them and make a call based upon them. So long as they are trying to be responsible and do the right thing, instead of grandstanding or trying to milk it for cash, I think we will all be pretty forgiving if they turn out to be in error.

Re:Responsible Disclosure == hiding vulnerabilitie (1)

Eythian (552130) | more than 7 years ago | (#16145024)

The compromise that makes the most sense to me is RFPolicy [wiretrip.net] . Put simply, this provides a 5-day contact period, and requires the vendor to keep the reporter notified of the status of the fix. Time to actual disclosure is then based on how cooperative the vendor is being. This (in theory) ensures a fix in a reasonable time frame, from the point of view of the reporter, while suggesting that the disclosure of the vulnerability should be held back as appropriate in order to do a proper fix, and giving good timelines should the vendor not be responsive or cooperative.
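
The timeline, roughly, as a rule of thumb: the 5-day acknowledgement window is from the policy itself, while the 30-day fallback deadline for a stonewalling vendor is just a placeholder, since RFPolicy leaves that to the reporter's judgement:

<ecode>
from datetime import date, timedelta

# Toy encoding of the RFPolicy flow described above. Returns the date on
# which the reporter discloses, or None when disclosure is negotiated
# around the actual fix.
def disclosure_date(reported, acked_within_5_days, vendor_cooperating):
    if not acked_within_5_days:
        return reported + timedelta(days=5)   # no contact at all: go public
    if not vendor_cooperating:
        return reported + timedelta(days=30)  # stonewalled: hard deadline
    return None  # cooperative vendor: hold until the patch ships

print(disclosure_date(date(2006, 9, 15), False, False))  # 2006-09-20
</ecode>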

If I were Microsoft (5, Interesting)

Lord Ender (156273) | more than 7 years ago | (#16104062)

If I were deciding policy for MS or any other big vendor, I would publish a "hush money" policy on security vulnerabilities.

Basically, it would go like this:

"If you discover a vlunerability and report it only to us, when we eventually release the patch, we will give you credit for discovering it (what researchers really want), and we will give you $10,000. If you report it to anyone else before we release the patch, you will get no money and no credit."

Re:If I were Microsoft (1, Interesting)

QuantumG (50515) | more than 7 years ago | (#16104180)

I happen to know some people who have exactly that relationship with Microsoft... except for the whole credit part... they sell the rights to that along with their soul. Of course, seeing as I'm not about to give up the names of these people, you'll just have to take my word for it (or call me a liar, whichever you prefer). Microsoft doesn't make this policy public because it is out-and-out unethical. People have a right to know the risk of running Microsoft software, but so many security flaws are fixed in service packs without even a mention of their existence... therefore customers have only a vague idea of how important it is to upgrade.

Re:If I were Microsoft (3, Funny)

SensitiveMale (155605) | more than 7 years ago | (#16104201)

"If you discover a vlunerability and report it only to us, when we eventually release the patch, we will give you credit for discovering it (what researchers really want), and we will give you $10,000.

$10,000 per bug would bankrupt microsoft.

Re:If I were Microsoft (1)

Faylone (880739) | more than 7 years ago | (#16104268)

With all the vulnerabilities they have, couldn't somebody find enough to leave them bankrupt?

Re:If I were Microsoft (1)

lazarusdishwasher (968525) | more than 7 years ago | (#16104550)

Instead of a flat $10,000, why not $1,000 a day with the first five days free? That way, if the problem is fixed in 5 days you don't get any money, but you still get the credit you wanted. If Microsoft decides it takes 120 days, then they pay $115,000.

If $1,000 a day seems a little high, I would agree to a multi-tiered pay scale based on severity, with the clause that if I later find a way to use the same vulnerability to do worse than what I came up with, it would retroactively move me up the scale.
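
The payout rule, sketched out (the severity tiers are reduced to a bare multiplier here, since the scale itself is unspecified):

<ecode>
# Per-day bounty with the first few days free, as proposed above.
def bounty(days_to_patch, rate=1000, free_days=5, severity_multiplier=1):
    return max(0, days_to_patch - free_days) * rate * severity_multiplier

assert bounty(5) == 0          # fixed in five days: credit only
assert bounty(120) == 115000   # the 120-day case above
</ecode>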

Why would you trust Microsoft? (1)

Secrity (742221) | more than 7 years ago | (#16104618)

If you found a security vulnerability in Windows, why would you trust Microsoft?

In your example, Microsoft has a $10,000 incentive to NEVER release a patch or give you credit for discovering it.

Will MS claim that the vulnerability was discovered in-house DAYS before you told them of it? What happens if you tell MS about the vulnerability and another researcher publishes the vulnerability while you have been patiently waiting several months for a patch and your check? If you tell MS about the vulnerability and another researcher had already told MS of the vulnerability, does MS tell you that it has already been reported, or do you wait for your check after the patch has been released? If MS tells you that it had already been reported, how do you know that is true? What if MS tells you that they don't consider it to be a vulnerability and then silently fixes it?

Re:Why would you trust Microsoft? (1)

Lord Ender (156273) | more than 7 years ago | (#16105016)

Did you see the word "first" in there? The suggestion is that anyone who independently discovers and reports the vulnerability before the patch is released gets paid. That gives MS motivation to patch more quickly.

And if they decide to never patch, there is nothing to stop the researcher from publishing it 0-day, anyway.

But I didn't say this was what is best for everyone. I said this would be a good one for MS, because they would get all the time they need to fix the problems, and encourage people to come to them first.

Re:Why would you trust Microsoft? (1)

Secrity (742221) | more than 7 years ago | (#16105643)

To me, "we will give you credit for discovering it" implied "first". Only the FIRST person to discover something can say that they discovered it. There isn't a whole lot of use to be simply one of the people to discover a vulnerability. I agree that this would be the sort of program that MS would love to have, especially because they hold all of the cards, and they are the dealer -- until one of the players gets tired of playing Microsoft's game.

Researcher A discovers the vulnerability first and reports it to MS. Researcher B does the responsible thing after he also discovers the vulnerability, he reports it to MS. Researcher B waits however long he decides is fair to wait after notifying MS and then he publicly releases the vulnerability. Researcher B gets the credit for discovering the vulnerability; researcher A MIGHT get $10,000, someday. NOBODY is going to care IF MS claims that researcher A reported it to them first. To a great many researchers, discovering a vulnerability is worth A LOT more to them than whatever bounty MS would pay.

Re:Why would you trust Microsoft? (1)

Lord Ender (156273) | more than 7 years ago | (#16105832)

That's true. But since a lot of these things are discovered by researchers in countries with failed economies (like the former USSR), $10k US would be worth keeping quiet about.

And I am sure avoiding a 0-day exploit is worth more than $10k to MS.

already being done (1)

0olong (876791) | more than 7 years ago | (#16104844)

Check this out: zerodayinitiative [zerodayinitiative.com]

It's actually better than the parent's proposal, because you're not directly dependent on the company whose software you've exploited.

Re:already being done (1)

0olong (876791) | more than 7 years ago | (#16104944)

pre-emptive strike: read TFA, moron.

What if the vendor doesn't act responsibly? (1)

RAMMS+EIN (578166) | more than 7 years ago | (#16104135)

I think everybody agrees that the first thing to do is to notify the vendor. Next, work on a fix has to be started. The question is, what comes after that?

Should users be notified ASAP, so that they are aware of the issue? There is something to be said for this. After all, if I found a vulnerability, somebody else may have found it, too. The sooner users know of the risk, the sooner they can take steps to reduce it. On the other hand, once you notify users, you can be sure the black hats know of the vulnerability and will try to exploit it. Had you not notified the users, they might have been safer.

Also, what if the vendor doesn't act responsibly? If they aren't working on a fix, or not doing it quickly enough, do you publicly disclose the vulnerability to coerce them into making haste? It may be a good idea, because the black hats may already be developing or even using exploits. On the other hand, you can be sure they will be once you disclose. Without knowing if the black hats already know of the vulnerability, or how the vendor will respond to the disclosure, it's hard to say.

Re:What if the vendor doesn't act responsibly? (2, Insightful)

maxwell demon (590494) | more than 7 years ago | (#16104587)

Well, there may be a middle ground between full disclosure and no disclosure. In certain situations you might be able to just disclose the danger and how to avoid it, without actually disclosing enough details for black hats to exploit it (although it of course gives them a hint where to search).

For example, "If you don't absolutely need it, switch off functionality X in product Y. I've found a serious vulnerability in Y which is only effective if the option for X is set. An attacker might take control over your computer."

This would explain what the users need to know (activating X in Y currently is dangerous), without providing information which wouldn't help them (because they can't fix X anyway), but would help the black hats.

Re:What if the vendor doesn't act responsibly? (1)

Cid Highwind (9258) | more than 7 years ago | (#16107007)

This is exactly what the people who discovered the alleged Apple airport exploit did ("We can pwn this Macbook with one click. Stop using WiFi on Apple machines"), and look what it got them. Without a real, independently verified exploit to show the world, they've been accused of rigging the demo and engaging in OS fanboyism.

If they had printed the exploit code on a T-shirt and handed it out at BlackHat, either the driver would be fixed or their specious accusations would be debunked by now. Instead, it's a month later and I've got an iMac (and probably other machines too, they say it's not specific to OSX) that may or may not be wide open to attack from any clown within WiFi range, and there's no pressure on Apple or Intel or anyone else to fix a flaw that may not be a real-world threat or may be the worst thing to happen to computer security since Redhat 5.0. Yay.

Re:What comes after that ? (1)

udippel (562132) | more than 7 years ago | (#16104708)

The question is, what comes after that?

Exactly. Though there is no clear-cut answer here.
It depends largely on the character of the exploit, I'd suggest. If you can stop it at the perimeter firewall, at a non-standard port, tell the sysadmins to close that bloody port for security reasons.
If it can DoS your DNS by sending a specially crafted request, I'd suggest leaving it unpublished for a reasonable time, so as not to invite the kids to find out what it is and DoS half of the DNSes for contempt.
Somehow it seems to boil down to the question of whether the user can actually do something reasonable about it (other than closing down the boxen), or not.
If there were a vulnerability in Apache itself (not in some module), I'd personally prefer no immediate disclosure. Realistically, almost nobody would be able to simply 'httpd stop' for several days until the patch is available, tested, and proven.

We Need More Information (1)

RAMMS+EIN (578166) | more than 7 years ago | (#16104207)

I think we (meaning current and future users) need more information. More information on what sorts of vulnerabilities are most commonly exploited, and the consequences of these exploits. Information about what sort of vulnerabilities are most commonly found. Information about what we, independently from the vendors, can do to protect ourselves. And, importantly, about the security track record of vendors: which vendors get the most and severest vulnerabilities reported against them, and how do they deal with such reports?

The last part is especially important, because it allows users to make choices based on what sort of security can be expected. This will make security something to compete on, which will lead to more secure choices being available.

Unfortunately, this sort of information will always paint a distorted picture: what if, for example, there are many more people looking for issues in MSIE than in Firefox? Firefox might seem more secure, while actually MSIE is the more secure of the two! Or what if Microsoft hires some brilliant minds to find holes in Ubuntu, whereas the people examining Windows have a hard time because (1) they don't have source code, (2) they don't have as much expertise, and (3) they have other things to do?

need more logic .. (1)

rs232 (849320) | more than 7 years ago | (#16104401)

"what if, for example, there are many more people looking for issues in MSIE than in Firefox?"

"Firefox might seem more secure, while actually MSIE is the more secure of the two!"

"Or what if Microsoft hires some brilliant minds to find holes in Ubuntu"

"whereas the people examining Windows have a hard time because (1) they don't have source code"

If I can rephrase that: MSIE is more secure because people don't have access to the source, and fewer people are looking for 'issues' in Firefox. Even if the above were true, it defies logic that a browser is more or less secure because of the number of people examining it. And how do you explain the high number of bugs found in MSIE if people don't have access to the source code?

My opinion is that Firefox is more secure because, unlike MSIE, it isn't welded to the OS. Firefox running as a standard user under Linux is even more secure. A bug in Firefox usually leads to the user's home dir being compromised. A bug in MSIE leads to the whole computer being compromised.

was Re:We Need More Information

Re:need more logic .. (1)

Tim C (15259) | more than 7 years ago | (#16104784)

Even if the above were true it defies logic that a browser is more/less secure because of the number of people examining it.

"Many eyes make shallow bugs" is the relevant quote - the idea is that with more people looking at something, you have a greater chance of spotting flaws.

A bug in MSIE leads to the whole computer being compromised.

Only if you run as admin, which admittedly is depressingly common in the Windows world.

Truth in advertising (0)

Anonymous Coward | more than 7 years ago | (#16104310)

Wow, what could be a more compelling headline than one containing the words "16 Opinions"? Now that I think about it, the full excitement that awaits the reader isn't fully expressed unless it's "16 Opinions!!!".

Snoooooooore.

What about the customers? (0)

Anonymous Coward | more than 7 years ago | (#16104329)

They've got quotes from the others, but what about the customers *of* the vendors?

Noticed Microsoft's response (1)

guruevi (827432) | more than 7 years ago | (#16104410)

...that is, if you read TFA, of course. The average response (except from IBM and Microsoft) is: oh, tell us, here is the e-mail address, we will do this, this, and that, make a patch, and then disclose the information. IBM is like: oh, tell us, we'll put it in the database and try to find a resolution. Microsoft says: tell us, euhm... yeah, that's it, don't go tell anyone else, just us.

I mean, at least describe what an average process looks like, possible timeframes, etc.

Full disclosure == Responsible disclosure (1)

Opportunist (166417) | more than 7 years ago | (#16104480)

Blunt and honest. "Responsible disclosure" sounds like a new buzzword to suggest that full disclosure is irresponsible. It isn't. The only reasonable and sensible way to report a bug is by providing all the information necessary to:

  • see it
  • identify it
  • reproduce it
  • fix it

Yes, I may not be able to fix a bug in Windows. But with full disclosure I can reproduce it and find a stopgap that provides a temporary fix 'til MS comes out of their rears.

If you only provide limited information, I cannot test my workaround against a valid test set, thus leaving me vulnerable.

Responsible Disclosure == Legal Liability?? (1)

cthulhuology (746986) | more than 7 years ago | (#16104765)

Wouldn't knowing about a security exploit, and failing to mitigate any damage that may result by failing to disclose this information to your customers, make a company liable for any damages done to the customers' systems during the "grace period"?

I mean, if I buy something, say a car, and the manufacturer knows about a defect, I can sue them for any damages that occur as a result of their design flaw. Companies perform recalls because the cost of such suits exceeds the cost of replacing the goods in question. Additionally, if there is a flaw which they know about that results in death or injury, they may also be found criminally negligent.

Can you sue Microsoft or Oracle for damages if someone exploits a bug they knew about but didn't tell you about? And what protection would a EULA be for a company if they are found criminally negligent? You don't waive your right not to be killed or injured or have your house burnt down by clicking a EULA.

full disclosure (0)

Anonymous Coward | more than 7 years ago | (#16105123)

i'm for full disclosure. the reason is simple: for at least 50% of software flaws there's a workaround. if admins get full disclosure, they can shut down the faulty piece of code, or filter it, or work around it, whatnot. but just reporting it back to the vendor and then sitting back, knowing that there are >250,000 computers "out there" with a glaring hole... tz-tz-tz. full disclosure also "forces" vendors to check their code better before releasing it to the public. vendors should be held responsible for flaws. also $$$-wise.

"hey tom! did you see that red explorer that just overtook us?"
"yep, definitely speeding."
"i know for a fact that that model got bad brakes."
"..."

tag this "oldnews" (1)

Eil (82413) | more than 7 years ago | (#16105426)

Good morning, Mr. Deadhorse. I'm here to beat you.

As an expert in computer security (1)

Catamaran (106796) | more than 7 years ago | (#16106061)

and after careful consideration of the issues, I have concluded that the appropriate amount of time to wait before going public is 10.853 days.

The Invisible Hand of 'Responsible Disclosure' (0)

Anonymous Coward | more than 7 years ago | (#16110101)

I read the article last week with great interest, having previously worked for iDefense.

I've posted my thoughts on the top ten failures of 'responsible disclosure' to my blog [spidynamics.com].

Michael Sutton
Security Evangelist
SPI Dynamics