Responsible Disclosure — 16 Opinions 87
An anonymous reader writes, "Disclosure. Just a word, but in the security field it is the root of progress: sharing knowledge and getting bugs fixed. SecurityFocus published an interesting collection of quotes about the best disclosure processes. The article features 11 big vendors, 2 buyers of vulnerabilities, and 3 independent researchers. What emerges is a subtle picture of how vendors and researchers differ over how much elapsed time counts as 'responsible.' Whereas vendors ask for unlimited patience, independent researchers look for a real commitment to develop a patch in a short time. Nice read." Wikipedia has an entry for "full disclosure" but none for "responsible disclosure."
No quote from Cisco (Score:2)
No entry? (Score:2)
Well, after tens of thousands of Slashdot nerds read this, I'm sure that'll change in a few minutes.
Re: (Score:2)
Wikipedia (Score:4, Funny)
It does now.
Re:Wikipedia (Score:5, Interesting)
I attempted to get [multi-billion dollar company] to fix a gaping security hole that was well-known to persons using [their product] which has support & licensing fees upwards of $300,000 per year. Their response was to tell my management that I was a loose cannon and should be fired (luckily my management told them to get stuffed, but the hole still wasn't fixed).
So I sent [multi-billion dollar company] an email from my infant daughter's email account (yes, I create accounts for my kids when they are born, shut up) informing them that the details of their security problem would be published on the bugtraq mailing list in two weeks, and attached a copy of what would be posted.
In less than 48 hours, I was contacted at the "postmaster" address for the email domain by [multi-billion dollar company] who informed me that we (the domain's registered in the name of a friend of mine, so there's no visible connection to me) were harboring an evil criminal hacker at [email address of my daughter] and that I needed to give them personal information about that user. I replied "oh, gee, thanks, that account belongs to a two-year old child, somebody must have hacked it, we are shutting that account off now, have a nice day".
Three days later all customers of [multi-billion dollar company] got an urgent update that corrected the security flaw in [their product]. I never did post to bugtraq, because the point of the exercise was to get [multi-billion dollar company] to do what was best for both them and their customers, and that goal was achieved. I couldn't have made the threat, though, without the existence of anonymous full disclosure listservs.
Re: (Score:2)
Information was okay... (Score:2, Interesting)
I do think that the ethical approach is certainly to approach the vendor first. Inform them that they have a given time to patch it, then hold them to it and release the information when that time is up.
If it involves Microsoft.... (Score:2, Funny)
Re: (Score:3, Insightful)
Get the information in the hands of the users? What on earth are my parents going to do with information about a buffer overflow exploit?
Maybe you mean "Get the information in the hands of people who can fix the problem." That (with regards to the grandparent post) would be Microsoft (or whatever vendor we're talking about.) My parents are never going to fix it themselves.
Re: (Score:2)
Stop using the software until it's fixed? Ask you, their technically inclined grandchild, what to do to avoid being hacked? Any number of things to ensure they don't become victims while idly waiting for the vendor to solve the problem.
But really, your grandparents aren't going to be helped much because, as you say, they aren't going to patch very often anyway. However,
5 days with MS?! (Score:2)
I know I wouldn't. Give em 30 days at least.
Re: (Score:3, Funny)
Re: (Score:3, Interesting)
Why? MS has proven it can fix a hole which allows reading of its DRM'd content in 3 days.
http://www.schneier.com/blog/archives/2006/09/mic
Definitely worth the read... (Score:3, Interesting)
Seriously, have a look. If you're at all used to reading between the lines, their statements regarding security, disclosure etc give you a far greater insight into their real attitudes than any marketing, reviews or horror stories ever could.
Re: (Score:1)
Why there is no entry for 'responsible disclosure' (Score:4, Insightful)
It may be because 'full disclosure' has meaning in the security community, while 'responsible disclosure' does not.
'Responsible disclosure' is just a set of general ethics and courtesies that security researchers extend to programmers/companies/entities in order to allow an orderly repair of a vulnerability. It is a function of 'full disclosure', not something in and of itself.
Slightly related: I've read things that liken 'full disclosure' to yelling "Fire!" in a crowded theater. I tend to think of it as yelling "Fire!" in a theater made of flash paper doused in gasoline, while one of the jugglers is preparing to light his flaming torches.
In other words, yelling 'FIRE!' is permissible if there is actually a high likelihood of fire...
Re:Why there is no entry for 'responsible disclosu (Score:2, Flamebait)
If there is really a fire, or a likelihood of a fire, you should inform the management so they can make an announcement that doesn't set off panic, which could lead to people being trampled to death.
In the case of security announcements, publicly disclosing a vulnerability before the vendor has been given time to get a patch out actually can cause a fire, because disclosing the vulnerability also allows anyone to create an exploit for the vulnerability.
In essence, full disclosure isn't as bad as shouting "Fire!" in a crowded theater.
Re:Why there is no entry for 'responsible disclosu (Score:4, Insightful)
And everyone will be dead (Score:2)
Do that, and everyone in the theater is likely to die.
By the time you get to management and inform them (and, in many cases, convince them there really is a fire), and they can get to their PA system, the fire will probably have spread throughout the entire theater. In case of a fire, time is extremely important
Re: (Score:2)
I completely agree. Immediate disclosure does nothing but help the bad guys. Staying quiet about it too long helps the bad guys, too. The only question is, what is the proper amount of time to wait after a vulnerability is discovered?
You have to use your judgement. (Score:1)
It varies depending on the complexity of the problem that requires fixing, and the resources available for the fix. Here's a couple of concrete examples:
You found a bug in Sendmail. You contact Eric Allman, you ask him how long it will take him to get his downstreams to push fixes through the distribution channels, and you agree on a disclosure date.
Responsible Disclosure == hiding vulnerabilities (Score:5, Interesting)
So, in order to be "responsible" you have to keep the vulnerability secret for 120 days. Four months. You're kidding, right? Say I'm an independent researcher. I find this vulnerability using no special skills and publicly available tools. Clearly a highly skilled blackhat could just as easily have found the same vulnerability. Let's suppose I found this vulnerability in the first 2 days of a new release of the product under inspection. The blackhat could well have discovered it in the same number of days, but let's say it takes him a month longer than me, just to be generous. I'm supposed to sit on this vulnerability and let the blackhat break into systems using it for how long? 3 months? This is responsible? Wouldn't it be more responsible for me to go public immediately? Obviously publishing tools which script kiddies can use to attack people is not a good idea; that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something you should be using if you care about security. Doesn't my failure to do this just make me complicit in a conspiracy to hide the fact that people may be breaking into systems using this vulnerability?
What if I'm an IDS manufacturer? I start getting alarms that shell code has been detected in a protocol stream that has never before carried shell code. Analysing the incident, I discover that there is a vulnerability in a particular daemon which these attackers are using to gain unauthorised access. Who should I inform? The vendor of that daemon? My customers? Or the general public? This is no longer a theoretical "the bad guys might know too" situation; this is a widespread pattern of attack that I have detected, indicating that real harm is being done. If I fail to inform the public immediately, am I not complicit in helping the attackers break into more computers? Doesn't sound very responsible to me.
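The timing argument in the post above is easy to make concrete with simple window arithmetic. A minimal sketch, where the dates and the one-month head start are illustrative assumptions taken from the comment's own hypothetical, not data:

```python
from datetime import date, timedelta

# Illustrative assumptions from the scenario above: the researcher finds
# the bug 2 days after release, a black hat finds it independently a
# month later, and "responsible disclosure" imposes a 120-day embargo
# starting from the researcher's discovery.
release = date(2006, 1, 1)  # hypothetical release date
researcher_finds = release + timedelta(days=2)
blackhat_finds = researcher_finds + timedelta(days=30)
embargo_ends = researcher_finds + timedelta(days=120)

# Window during which the black hat can exploit unwarned users while
# the researcher stays silent under the embargo.
silent_window = (embargo_ends - blackhat_finds).days
print(silent_window)  # 90 days of exposure before anyone is warned
```

Under these assumptions the embargo leaves a 90-day stretch in which users are attackable but uninformed, which is the commenter's whole objection.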
Re:Responsible Disclosure == hiding vulnerabilitie (Score:2)
On the other hand, if a security update is only days away, full disclosure of the vulnerability won't get a fix in the next update. It takes time to write and test that a fix doesn't introduce new bugs. Additionally, what if the black hats haven't found the vulnerability yet? By announcing the vulnerability immediately after discovery, you haven't helped get the fix out sooner, and worse, you've made it more likely for an exploit to be developed.
Announcing a vulnerability immediately doesn't seem responsible to me.
Re: (Score:2)
Re: (Score:2)
I agree four months seems like an excessively long time to stay quiet about a vulnerability, especially a serious one. Serious vulnerabilities should be fixed in the next security patch, unless the next one is too close for adequate testing.
In the case of a vulnerability that is being exploited, I agree that immediate action is necessary. However, with full disclosure there's always the possibility that some, or even most, black hats don't know about the vulnerability. In that case, again, you've just made things worse.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
From a study reported on in the WSJ back in January [washingtonpost.com], and elaborated on later [washingtonpost.com], Microsoft's time to patch vulnerabilities it classifies as "critical" has risen 25% since 2003, to 134 days. The exception is full-disclosure vulnerabilities, where details, and almost always proof-of-concept code, were released to the general public. For those vulnerabilities, the time to fix fell from 71 days in 2003 to 46 days in 2005. Based on the data, full disclosure does in fact accelerate the fix.
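The figures in the comment above can be sanity-checked with a couple of lines of arithmetic; the only inputs are the numbers it quotes (134 days, a 25% rise, and the 71-to-46-day drop):

```python
# Numbers quoted in the comment above (from the WSJ/washingtonpost.com study).
critical_2005 = 134          # days to patch a "critical" bug in 2005
rise = 0.25                  # reported 25% rise since 2003
implied_2003 = critical_2005 / (1 + rise)

full_disclosure_2003 = 71    # days to patch when details were public
full_disclosure_2005 = 46
drop = (full_disclosure_2003 - full_disclosure_2005) / full_disclosure_2003

print(round(implied_2003))   # 107 -- implied 2003 baseline, in days
print(round(drop * 100))     # 35  -- percent faster when details were public
```

So the quoted numbers imply a baseline of roughly 107 days in 2003, and a roughly 35% speed-up for publicly disclosed bugs, consistent with the comment's conclusion.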
Re: (Score:2)
Re:Responsible Disclosure == hiding vulnerabilitie (Score:2)
But if the bad guys haven't found the problem at this point, they surely will after this kind of announcement. Moreover, changing running production software can be very difficult. In thi
Re: (Score:2)
Re:Responsible Disclosure == hiding vulnerabilitie (Score:2)
You happen upon an easy vulnerability. A blackhat finds it in a month. You stay quiet for 4 months. Patch comes after a full year from when you find it. A single blackhat has used it for a year.
You happen upon an easy vulnerability. You announce it to the public. Every half-assed blackhat in the world finds it and uses it for a full year before the patch comes out.
Re: (Score:2)
Re: (Score:2)
If you say 'I found a vulnerability in Product X when you do Y', even without any details, the blackhats already know where to look and the kind of things to look for. For example, if you tell me that a program has a vulnerability related to images, I'm immediately going to think of its image-parsing code.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Responsible Disclosure == hiding vulnerabilitie (Score:1)
Re: (Score:2)
History shows you're wrong.
Re: (Score:2)
What history is that? Mathematics history? I suppose you think the better product gets the most market share too.
The fact of the matter is that corporations will fix zero security bugs if they can get away with it. If customers are stupid enough to keep coming back, they'll do nothing to improve their product.
Re: (Score:2)
Obviously publishing tools which script kiddies can use to attack people is not a good idea, that's not what we're talking about. Surely I should at least tell people that I have found a vulnerability and that the software in question is not, in my opinion, something that you should be using if you care about security.
I don't think a hard and fast 120-day rule makes sense, but I think a researcher should look at the characteristics of the vulnerability before deciding what is responsible, as well as the vendor's track record.
Re:Responsible Disclosure == hiding vulnerabilitie (Score:2)
If I were Microsoft (Score:5, Interesting)
Basically, it would go like this:
"If you discover a vlunerability and report it only to us, when we eventually release the patch, we will give you credit for discovering it (what researchers really want), and we will give you $10,000. If you report it to anyone else before we release the patch, you will get no money and no credit."
Re: (Score:2, Interesting)
Re:If I were Microsoft (Score:4, Funny)
$10,000 per bug would bankrupt Microsoft.
Re: (Score:1)
Re: (Score:1)
If $1000 a day seems a little high I would agree to a multi-tiered pay scale based on severity, with the clause that if I later find a way to use the same vulnerability to do worse than what I came up with it would retroactively move me up the scale.
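A tiered pay scale with the retroactive-upgrade clause described above can be sketched as a simple severity lookup. The tier names and dollar amounts here are invented for illustration; no real bounty program is being described:

```python
from typing import Optional

# Hypothetical payout tiers; the amounts are made up for illustration.
BOUNTY = {"low": 1_000, "medium": 5_000, "high": 10_000, "critical": 25_000}

def payout(initial: str, upgraded: Optional[str] = None) -> int:
    """Return the bounty owed for a report.

    If the researcher later demonstrates a worse impact, the retroactive
    clause applies: they are paid at the higher of the two tiers.
    """
    tiers = [initial] if upgraded is None else [initial, upgraded]
    return max(BOUNTY[t] for t in tiers)

print(payout("medium"))               # 5000
print(payout("medium", "critical"))   # 25000 after the retroactive upgrade
```

Taking the max of the two tiers, rather than paying twice, is one plausible reading of "retroactively move me up the scale."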
Why would you trust Microsoft? (Score:2)
In your example, Microsoft has a $10,000 incentive to NEVER release a patch or give you credit for discovering it.
Will MS claim that the vulnerability was discovered in-house DAYS before you told them of it? What happens if you tell MS about the vulnerability and another researcher publishes the vulnerability while you have been patiently waiting several months for a patch and your check? If you tell MS about the vulnerability
Re: (Score:2)
And if they decide to never patch, there is nothing to stop the researcher from publishing it 0-day, anyway.
But I didn't say this was what is best for everyone. I said this would be a good one for MS, because they would get all the time they need to fix the problems, and encourage people to come to them first.
Re: (Score:2)
Researcher A discovers the vulnerability first and repor
Re: (Score:2)
And I am sure avoiding a 0-day exploit is worth more than $10k to MS.
already being done (Score:1)
It's actually better than the parent's proposal, because you're not directly dependent on the company whose software you've found the exploit in.
Re: (Score:1)
What if the vendor doesn't act responsibly? (Score:2)
Should users be notified ASAP, so that they are aware of the issue? There is something to be said for this. After all, if I found a vulnerability, somebody else may have found it, too. The sooner users know of the risk, the sooner they can take steps to reduce it. On the other hand, once you notify users, you can be sure the black hats know of the vulnerability too.
Re: (Score:3, Insightful)
For example, "If you don't absolutely need it, switch off functionality X in product Y. I've found a serious vulnerabily in Y which is only effective if the option for X is set. An attacker might take control over y
Re: (Score:2)
If they had printed the exploit code on a T-shirt and handed it out at BlackHat, either the driver would be fixed or their specious accusations would be debunked by now. Instead, it's still unresolved.
Re:What comes after that ? (Score:1)
Exactly. Though there is no clear-cut answer here.
It depends largely on the character of the exploit, I'd suggest. If you can stop it at the perimeter firewall, on a non-standard port, tell the sysadmins to close that bloody port for security reasons.
If it can DoS your DNS by sending a specially crafted request, I'd suggest leaving it unpublished for a reasonable time, so as not to invite the kids to figure out what it is and DoS half the DNS servers for fun.
We Need More Information (Score:2)
need more logic .. (Score:2)
"Firefox might seem more secure, while actually MSIE is the more secure of the two!"
"Or what if Microsoft hires some brilliant minds to find holes in Ubuntu"
"whereas the people examining Windows have a hard time because (1) they don't have source code"
If I can rephrase that
Re: (Score:2)
"Many eyes make shallow bugs" is the relevant quote - the idea is that with more people looking at something, you have a greater chance of spotting flaws.
A bug in MSIE leads to the whole computer being compromised.
Only if you run as admin, which admittedly is depressingly common in the Windows world.
Noticed Microsoft's response (Score:2)
I mean, at least describe what an average process looks like and possible timeframes etc.
Full disclosure == Responsible disclosure (Score:2)
see it
identify it
reproduce it
fix it
Yes, I may not be able to fix a bug in Windows. But with full disclosure I can reproduce it and find a stopgap that provides a temp fix 'til MS comes out of their rears.
If you only provide limited information, I cannot test my workaround against the actual exploit.
Responsible Disclosure == Legal Liability?? (Score:1)
I mean if I buy something, say a car, and the manufacturer knows about a defect, I can sue them for any damages that occur as a result of their design flaw. Companies perform recalls because the cost of such suits exceeds the cost of replacing the goods in question.
tag this "oldnews" (Score:2)
As an expert in computer security (Score:2)
Even more opinions (Score:2)