
When Is It Right To Go Public With Security Flaws?

CmdrTaco posted more than 3 years ago | from the yesterday dept.


nk497 writes "When it comes to security flaws, who should be warned first: users or software vendors? The debate has flared up again after Google researcher Tavis Ormandy published a flaw in Windows Help and Support. As previously noted on Slashdot, Google has since promised to back researchers who give vendors at least 60 days to sort out a solution to reported flaws, while Microsoft has responded by renaming responsible disclosure 'coordinated vulnerability disclosure.' Microsoft is set to announce something related to community-based defense at Black Hat, but it's not likely to be a bug bounty, as the firm has again said it won't pay for vulnerabilities. So what other methods for managing disclosures could the security industry develop that balance vendors' need for time to develop a solution and researchers' need to work together and publish?"

126 comments

when.. (-1, Offtopic)

Vectormatic (1759674) | more than 3 years ago | (#33044560)

it gets you a first post?

Re:when.. (0)

Anonymous Coward | more than 3 years ago | (#33044628)

When it makes Microsoft look bad so we can trash them on Slashdot?

Re:when.. (2, Interesting)

Anonymous Coward | more than 3 years ago | (#33044894)

When it makes Microsoft look bad so we can trash them on Slashdot?

How about giving the vendor time to issue a patch if said vendor has earned the goodwill of the community, or at least not earned the ill will of the community? Abuse of monopoly as found in various courts of law? Immediately go public. Vendor lock-in practices? Immediately go public. Silly patent lawsuits over ideas that are not really original? Immediately go public. Public statements about how they now take security very seriously and it is a top priority for them, followed by no substantial improvement? Immediately go public. Using their power and influence to bribe standards committees? Immediately go public. Deceptive marketing practices? Immediately go public. Building strict DRM as an integral and non-removable component of the OS? Immediately go public. This list is not exhaustive and would apply to all vendors.

Found a vendor that does not engage in these practices? Work with them. Give them time to develop a patch. Help them fix the flaw, if you are so inclined and have the skill. Note that there is no such urge to make them look bad when they don't use all the plotting and planning and manipulation and control, and decide to make their money by producing good products that people want to buy. Crazy concept, I know. For those companies, it would be wrong to immediately go public in order to make them look bad. Microsoft is not one of those companies.

And if you say "but what about the users who suffer exploits" I have an easy answer. You mean the users who reward abusive companies with their money and continue to fund more of the same? You mean those users? Heaven forbid if doing business with abusive companies might not be entirely free of negative repercussions for them...

Re:when.. (0, Offtopic)

Dishevel (1105119) | more than 3 years ago | (#33046108)

Never said this before. Heard it many times but never felt the need to say it.

I wish I had Mod points for you.

Re:when.. (1)

morgan_greywolf (835522) | more than 3 years ago | (#33046552)

How about giving the vendor time to issue a patch if said vendor has earned the goodwill of the community, or at least not earned the ill will of the community? Abuse of monopoly as found in various courts of law? Immediately go public. Vendor lock-in practices? Immediately go public. Silly patent lawsuits over ideas that are not really original? Immediately go public. Public statements about how they now take security very seriously and it is a top priority for them, followed by no substantial improvement? Immediately go public. Using their power and influence to bribe standards committees? Immediately go public. Deceptive marketing practices? Immediately go public. Building strict DRM as an integral and non-removable component of the OS? Immediately go public. This list is not exhaustive and would apply to all vendors.

That list applies to Microsoft, obviously, but it also, in part, applies to Apple. Especially the parts about "silly patent lawsuits" and "building strict DRM as an integral and non-removable component of the OS."

Overall, Apple plays nice, however, so I wouldn't be as quick to punish them as you, I guess.

Re:when.. (1)

HermMunster (972336) | more than 3 years ago | (#33047854)

Microsoft should not be the ones writing the rules when it comes to any security issue. They have an extremely poor reputation as it is. What Microsoft defines should carry little weight in the community where these issues are discovered and discussed. Over the years, those who have uncovered these security issues have shown considerable restraint. They seem significantly better equipped to decide how and when issues should be disclosed. If not for them, discovered issues would likely never be disclosed to the public, and the public would not be exerting enough pressure to get them fixed (let alone prioritized).

Re:when.. (1)

RobertM1968 (951074) | more than 3 years ago | (#33047956)

Microsoft should not be the ones writing the rules when it comes to any security issue. They have an extremely poor reputation as it is. What Microsoft defines should carry little weight in the community where these issues are discovered and discussed. Over the years, those who have uncovered these security issues have shown considerable restraint. They seem significantly better equipped to decide how and when issues should be disclosed. If not for them, discovered issues would likely never be disclosed to the public, and the public would not be exerting enough pressure to get them fixed (let alone prioritized).

Agreed; it would be called a "conflict of interest," since their efforts in such regards would solely be to protect their interests, and not their locked-in users to any extent not required to maintain their monopoly.

But, that has never stopped companies from buying laws in the past... sadly. :-(

I wrote my quick thoughts up the other day .... (5, Interesting)

Kalidor (94097) | more than 3 years ago | (#33044634)

... and posted them elsewhere. So here's a quick copy-paste of my thoughts.
======================
Procedure:
Step 1) Notify the manufacturer of the flaw.

Step 2) Wait an appropriate time for a response. This depends on the product: an OS could take as much as months, depending on how deep the flaw is; web browsers, probably 2-3 weeks.
Corollary 2a) If the manufacturer responds and says it's a will-not-fix, you have some decisions to make; see 3a.

Step 3) If there is no response, announce a proof-of-concept exhibition with a very vague description, and keep your answers to people asking for details just as vague. The company has already been contacted, so they know the issue, or they can contact you from the announcement. Schedule it with enough time for the company to release a fix.
Corollary 3a) Consider how critical the flaw is. If it's marked will-not-fix and it's very detrimental, you might have to sit on it.

Step 4) Do the exhibit. With luck, the flaw has been fixed and the last slide is about how well the manufacturer did.

Step 5) ...Profit!!!! (While this is the obligatory joke step, check out eEye Digital Security to see how it's happened before.)
===============
WRT 3a: You'd be surprised how often this is done. There are two long-standing issues in a certain piece of software that, while they involve uncommon and rarely considered attack vectors, are less than trivial to exploit for full access. The manufacturer has, in fact, responded with a "works as designed, will not fix." People in the information security industry have found the flaws so detrimental that they've imposed a self-embargo on openly discussing them. Without manufacturer buy-in, a fix just couldn't come in time if that particular information were released, and the effect would be significantly widespread. The only thing releasing the information would do is cause a massive zero-day event that would only harm consumers or leave them without the services of the software for several months. With no evidence that the exploit is being used in the wild, save for a handful of anecdotal reports, the issue has become a biannual prodding of the manufacturer.

Re:I wrote my quick thoughts up the other day .... (4, Insightful)

Anonymous Coward | more than 3 years ago | (#33044880)

WRT WRT 3a: So the industry and the manufacturer are basically patting each other on the back, happy in the knowledge that if no one from the club talks about the problem, it's impossible to discover otherwise? It's going to be slightly icky to say "we told you so" when this is discovered independently and causes "a massive zero-day event that would only harm consumers or leave them without the services of the software for several months." (Note that I wrote "when this is discovered," not "if." As you may be aware, if something can be done, it's only a matter of time until somebody does it.)

Re:I wrote my quick thoughts up the other day .... (3, Interesting)

Anonymous Coward | more than 3 years ago | (#33044982)

I especially like how this ignores the human angle and assumes that all involved parties are even able to shut up for years (well, I don't know, maybe they receive... err... gratitude to shut up).

Re:I wrote my quick thoughts up the other day .... (1)

TimSSG (1068536) | more than 3 years ago | (#33045770)

I sure hope they were NOT paid; it would make them part of a conspiracy to cover up flaws. And when someone uses that flaw, it would make them, and the companies they work for, possibly liable for a large amount of damages and possible jail time.

IANAL, but like to play one on the web.

Tim S.

Re:I wrote my quick thoughts up the other day .... (1)

mea37 (1201159) | more than 3 years ago | (#33045536)

It's probably worse than that. GP didn't give us much to go on about the nature of the attack, but generally a flaw described in such severe terms either (1) offers a foot in the door for the attacker to go after other systems on the network, or (2) exposes sensitive information. By contrast with flaws that allow DoS (for example), it isn't typically obvious when a flaw of that type is exploited.

So the question isn't "how do you know someone won't discover the flaw? what will you do when you notice it being exploited?" The question is "how do you know someone hasn't discovered the flaw? would you notice if it's already been exploited?"

If the flaw is that severe and the vendor won't budge, silence is not an appropriate response. I question whether GP has the close ties to the security community that he insinuates.

Re:I wrote my quick thoughts up the other day .... (3, Insightful)

Nadaka (224565) | more than 3 years ago | (#33044932)

This is standard operating procedure and responsible disclosure as far as I can tell.

The problem is that the company is likely to file for an injunction to stop the presentation, and possibly to press blackmail charges against you.

You need to amend the above procedure with anonymous notification and demonstration in order to protect the safety of those following responsible disclosure.

Re:I wrote my quick thoughts up the other day .... (1)

couchslug (175151) | more than 3 years ago | (#33045548)

Companies are always a threat. Disclose anonymously to the company, wait, then disclose to the public without warning.

There is no reason to offer your neck to your enemies. There is no reason to want recognition from them.

Remember basic security, tell no one who you are, and don't go attention-whoring after you release.

Re:I wrote my quick thoughts up the other day .... (1)

fishbowl (7759) | more than 3 years ago | (#33046266)

>Remember basic security, tell no one who you are, and don't go attention-whoring after you release.

You've identified the real issue, but this is often ignored. The problem isn't the disclosure itself. The problem is that so many people with such disclosures to make seem to want credit/attention for their efforts, but also want to be free of the risks associated with seeking that attention. Anonymous channels exist. Release information via one of those, and then if somebody is upset about it, they can do as my Uncle Bill always said: "Complain in one hand ... and see which one fills up first."

Re:I wrote my quick thoughts up the other day .... (1)

gorzek (647352) | more than 3 years ago | (#33047240)

This is very true. Whistleblowers have some protection but it is still dangerous to be one, and especially dangerous to be a very public one.

Re:I wrote my quick thoughts up the other day .... (0)

Anonymous Coward | more than 3 years ago | (#33045070)

There are two long-standing issues in a certain piece of software that, while they involve uncommon and rarely considered attack vectors, are less than trivial to exploit for full access.

The problem with this approach is that if a hole exists, you can script/code something that can automatically take the necessary steps to exploit it. I remember years ago a problem existed with the startup of a certain service. At one point, when it bound itself to a port, it was vulnerable to a DoS attack. The window was just a second or so, and then the software would lock itself down. Unfortunately, if the machine was rebooted or otherwise knocked off the network (say, by flooding the switch with bogus ARP packets), you could hold open that port for as long as you wanted. Then you'd have another machine connect on that port and use the exploit to crash the service, get a root session with the same user ID, and then wreak merry havoc (or maybe overwrite a few files). All of this could be scripted.

Software gets more and more complex. If a user is running a particular file-sharing client, it's possible to view the names of files that are not explicitly shared by exploiting an issue with, of all things, a popular virus-scanning program. The names and sizes of these files will tell you what versions of software are installed. These can then be sent up to the Internet (luckily enough, because the file-sharing client is allowed through the firewall) and collected in a database. Some months later, a new exploit is found, and there's already a list of vulnerable machines all ready to be farmed. And because the farming is targeted to those machines, it's far less likely to be detected.

All software is crap. Windows, Mac OS X, Linux... all crap. So many holes, so many unpatched systems. It's all security theatre.

Good, but you missed a step (5, Interesting)

hAckz0r (989977) | more than 3 years ago | (#33045104)

You need to notify CERT, and then they have the ability to apply more pressure on the manufacturer, as they simultaneously publish a very vague notice to the community of a flaw being worked on. If CERT is involved, you have a much higher probability of not being ignored or told "will-not-fix," because it is already public knowledge that there is an exploit that needs fixing. It's in the record. The official "report cards" for the vendors then have the clock start ticking the minute you report the flaw, and the vendor cannot deny that they were notified and/or aware of the problem. In other words, they can't sweep it under the rug very easily, and you have done the best you can do without causing mass pandemonium.

Re:Good, but you missed a step (1)

Lehk228 (705449) | more than 3 years ago | (#33045178)

it works better if you just post the details in an image on 4chan's /b/ and let nature take its course

Re:Good, but you missed a step (1)

Kalidor (94097) | more than 3 years ago | (#33045650)

What makes you think some of the people that know aren't at CERT...

That said, you are correct; CERT should be notified at the same time as the manufacturer, IMHO.

Re:I wrote my quick thoughts up the other day .... (1)

Erikderzweite (1146485) | more than 3 years ago | (#33045274)

So, your "people in the information security" are basically helping the vendor selling faulty software while withholding crucial information from users of said software at the same time? If the issues you mention are indeed "less than trivial" you help the vendor to cheat people into thinking that they are safe with the software.

"People in the information security" have the job of making the IT environment safer. You must force the vendor to fix these holes even if it takes a vulnerability disclosure and a lot of bad publicity for the vendor. The vendor's job is to make good software. If the job is being done poorly the vendor deserves all the losses it will get. As for users of the software -- how do you know this vulnerability isn't used right now? It is not a widespread issue but maybe it is used to target particular users? Such behavior screws up informed users who care about security at the cost that clueless masses will be temporary safer for the time being.

It is a good idea, IMO, to stop being short-sighted. In the long term, such a disclosure will do more good than the current sitting on the issue.

Re:I wrote my quick thoughts up the other day .... (1)

Kalidor (94097) | more than 3 years ago | (#33045542)

I never said it was an easy position. Not one of them likes the situation they are in, but no one has been able to come up with a good solution.

To be fair, I think that, as widespread, detrimental, and unknown as these two problems are, it's very unlikely there are more than a handful of such cases in software out there today, and it's only done in the most extreme of situations. At least, that's my sincere hope.

As for it being used right now: to be honest, we don't know that it isn't. But the software is widely enough used that I'd expect there to be press coverage, despite the company getting a lot of passes in the press.

Re:I wrote my quick thoughts up the other day .... (1)

cynyr (703126) | more than 3 years ago | (#33046016)

Let's see: time to patch any major portion of GNU/Linux is probably less than two weeks, and that's from bug report to update in the distros' repos. If OSS can do it with free labor in two weeks, paid devs should be able to do it in less, say half: one week. I know my Apple keyboard wasn't fully supported when I bought it; less than a week later there was a patch applied to mainline stable kernels that corrected the issue, and that was just for some of the Fn keys not working as advertised. So a month max sounds good. I would note that if things aren't progressing within a month, I'd feel bound to release the info so that admins may take steps to mitigate the issue.

Re:I wrote my quick thoughts up the other day .... (3, Insightful)

Hatta (162192) | more than 3 years ago | (#33046410)

Huh? If there's a severe vulnerability and the manufacturer refuses to fix it, you should release it immediately. Then at least those affected can mitigate their vulnerability. Otherwise, the black hats have free rein.

When the company will not listen (5, Interesting)

RJarett (114128) | more than 3 years ago | (#33044640)

I discovered a large DoS within VMware 3.5-4.0 last March. I opened up a support case on it, to at least find a workaround. The engineer closed the ticket after an hour or two as "unsupported OS."

The DoS reboots ESX/ESXi out from under the VM when you power the VM on.

This leads to serious issues, and they closed the ticket quickly, with no further investigation. This is a perfect example of a case where releasing details and source would force the company to fix the issue.

Re:When the company will not listen (0)

Anonymous Coward | more than 3 years ago | (#33046628)

It's pretty clear that he thought you were talking about a disk operating system as opposed to a denial of service. Your bug report was probably as vague as your comment (it took me a few re-reads to figure out what you were trying to say). For instance, the word "attack" is missing after "DoS."

Re:When the company will not listen (0)

Anonymous Coward | more than 3 years ago | (#33047138)

If you've found a security issue and it was resolved incorrectly, you should push back rather than just say "company X is clueless, so I'm going to make everyone who uses their software suffer".

I assure you that any company producing system software on a large scale contains people who take such issues very seriously, and if a security issue is closed without proper investigation, it happened because the wrong person looked at the bug. Remember that just because you told a person at company X, that doesn't really mean you told company X about it. There are clueless people working even at the best companies, especially on the first line (and sometimes second or third line) of support.

Clueless support engineer != clueless company

Re:When the company will not listen (2, Insightful)

gorzek (647352) | more than 3 years ago | (#33047362)

"Unsupported OS" means "unsupported OS." The vendor disavows any responsibility for bad things that happen when using their software on your unsupported platform.

This is a common thing for software vendors to do to close out tickets quickly. If it's an unsupported scenario (hardware, software, use case, etc.) then they can close it and keep their average ticket lifetime down.

A little shady, I guess, but if they never claimed to support your platform I don't see what you could really complain about.

Re:When the company will not listen (0, Redundant)

ImprovOmega (744717) | more than 3 years ago | (#33048546)

I don't see that as being a very easy attack vector to exploit. If the attacker has gotten to the point where he can install a guest OS on your VMware server, you were already completely owned. If it's a disgruntled sysadmin, then the solution is "fire him/her and change the passwords." So... yeah, unsupported OS.

Now, if you find a way to exploit that from a *guest* OS that was supported and got owned (like from within Windows Server 2008 or something), and you can run something that blows up ESX, then that may be a legit concern. As worded, it sounds like a non-issue.

Anonymous Coward (0)

Anonymous Coward | more than 3 years ago | (#33044678)

"as the firm has again said it won't pay for vulnerabilities"

Don't you think that one way or another... all these companies pay for their vulnerabilities?

-a. coward

Re:Anonymous Coward (0, Troll)

ekgringo (693136) | more than 3 years ago | (#33044926)

Microsoft will probably offer to give them free Windows Phone 7 devices. But only if they write software for them.

WHENEVER YOU CAN GET THE MOST ATTENTION (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#33044682)

You guyz call it karma whoring. I call it attention getting. See, I'm a republican.

Re:WHENEVER YOU CAN GET THE MOST ATTENTION (1)

Attila Dimedici (1036002) | more than 3 years ago | (#33044732)

You may be a republican, but you are clearly not a conservative; conservatives call it being a spoiled brat.

Never (5, Funny)

SeriouslyNoClue (1842116) | more than 3 years ago | (#33044686)

Time after time it's been proven that the safest security is the security that is shrouded in the most mystery. Why can't anyone hack Windows 7? Because it's new and no one knows how it works. People like Ormandy are a bane to the community because they steal code from Microsoft (there is no other way they could know about these flaws) and then, once they've stolen it, they release it for virus writers to hurt the common man. They are a public enemy, and I'd suspect he has contacts inside Microsoft (if you're reading this, Steve Ballmer, I suggest you begin purging those who doubt you and those closest to you).

I cannot believe Google would show support to someone who is most obviously a criminal aiding and abetting other criminals.

Nobody wants their source code shown to malware writers, for obvious reasons, so let Microsoft have its privacy. Why do individuals get privacy rights but not Microsoft? Did you ever stop to think about that? No, you didn't, because you were too busy helping the bad guys.

You should never reveal a security flaw. It's called common sense about safety and protecting everyone around you.

Re:Never (0)

Anonymous Coward | more than 3 years ago | (#33044934)

Never has a username been so appropriate to a post.

What you are espousing is "security by obscurity", and it's been proven time and time and time again NOT TO WORK.

If Windows 7 is unhackable, which I doubt, it's because people who use it have discovered flaws, reported them, and have occasionally had to put pressure on Microsoft to actually fix them.

Re:Never (0)

bluefoxlucid (723572) | more than 3 years ago | (#33045228)

I can't tell if this is sarcasm or not. The US never revealed the security flaw for ENIGMA because they were using it against the Germans, while the Germans believed ENIGMA was secure and unhackable. We had them by the balls.

Re:Never (2, Insightful)

jgtg32a (1173373) | more than 3 years ago | (#33045658)

I thought the Brits cracked all of that?

Re:Never (0)

Anonymous Coward | more than 3 years ago | (#33046064)

Indeed it was; however, I believe the work undertaken by Alan Turing et al. was originally started by the Polish before the war.

Re:Never (2, Insightful)

John Hasler (414242) | more than 3 years ago | (#33046102)

Actually it was the Poles and the Brits who broke Enigma: the USA broke the Japanese codes. Irrelevant in any case though. The Germans had developed Enigma themselves and were using it only internally: there were no trusting "users" at risk.

Re:Never (2, Insightful)

bluefoxlucid (723572) | more than 3 years ago | (#33046422)

The vulnerabilities are the same regardless of who is at risk. The argument is that only 'good guys' are able to find vulnerabilities, and that 'bad guys' don't find or can't keep hold of such information, or just can't use it. The GP purports that keeping problems a secret will never result in secret underground cults developing a cohesive, structured approach to abusing those problems.

Tell both (1)

xxxJonBoyxxx (565205) | more than 3 years ago | (#33044698)

"Who should be warned first: users or software vendors?"

Tell both. But if you announce something, please doc how you did it and don't brush off the vendor. (Email from users and press can get pretty thick after you announce something - if you're ethical and really want to fix the problem all that noise should be lo pri...)

Re:Tell both (1)

Monkeedude1212 (1560403) | more than 3 years ago | (#33044756)

Simultaneously you mean? That leaves the vendor no time to fix the flaw.

The question basically boils down to: "I want to be an ethical person. How much time is appropriate to wait after reporting a flaw to the vendor?"

Re:Tell both (1)

John Hasler (414242) | more than 3 years ago | (#33045468)

If you want to be an "ethical person" you will want to warn the users ASAP.

Re:Tell both (1)

Monkeedude1212 (1560403) | more than 3 years ago | (#33045706)

But by warning users, you are also informing the people who would use it against them. That's the double-edgedness of the situation.

Re:Tell both (1)

cynyr (703126) | more than 3 years ago | (#33046168)

Yep, though that's under the assumption that the bad guys need the good guys to tell them about the holes. But even so, if Win7 can be killed by a packet on port X, it's simple for users to mitigate upstream by blocking port X at the firewall (you do have a separate one, right?), as sketched below.
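For instance, here's a minimal sketch of that upstream block on a Linux firewall, assuming the hypothetical killer packet arrives on TCP port 5357 (a made-up port number, purely for illustration):

  # drop the dangerous traffic before it reaches the Windows boxes behind this firewall
  iptables -A FORWARD -p tcp --dport 5357 -j DROP
  # and protect the firewall host itself, in case it is also reachable on that port
  iptables -A INPUT -p tcp --dport 5357 -j DROP

The same pair of rules with -p udp would cover a UDP-borne variant.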

If there is no way to mitigate it outside of a vendor patch, let the vendor know first, and tell them they have, say, two weeks to be making progress...

Also, this all reinforces my belief that commercial software development needs to work like engineering: you are liable for the problems your software causes, and it cannot be sold "as-is, and without warranty."

Re:Tell both (0)

Anonymous Coward | more than 3 years ago | (#33046388)

So you should tell everybody to block port X. But you should not tell everybody exactly what the killing packet looks like before the manufacturer has had time to fix the flaw.

Re:Tell both (1)

RobertM1968 (951074) | more than 3 years ago | (#33048772)

Simultaneously you mean? That leaves the vendor no time to fix the flaw.

Simultaneously you mean? That forces Microsoft to fix the flaw, instead of letting it stew for years or decades.

Fixed that for ya! :-)

Sarcasm (even if true) aside, the simple fact is that the largest problem with any of these scenarios is the ill will Microsoft has caused in the security community. Regardless of whether one argues that it was caused by the complexity of the products, or by Microsoft's unwillingness to fix issues, or by a combination of both, the simple fact is that Microsoft has, in the past, made too many statements about fixing things that were never fixed, and ignored too many things that needed to be fixed (until an exploit was made public and got a lot of exposure, at which time it suddenly became possible to fix it in a few days to a week).

Therein lies the problem. We've been promised a number of times that .NET was fixed, only to find it wasn't really (and that only a patch was written to mitigate a particular issue in a gaping hole that wasn't really closed), and that was after it took months or years. We've been promised similar things over the years about the various buffer-overflow issues in Windows and IE, with similarly abysmal results. We've watched things like the ClickOnce exploit be ignored until it made the mainstream media (and then get patched in a couple of weeks).

Perception, thus, is the key. And in that respect, the public's perception of how Microsoft deals with such things is quite horribly tarnished, regardless of whether that's because Microsoft has had no intention of fixing things in a timely manner, or because the complexity of the underlying code is the cause of the issue, or a combination thereof.

What makes things worse is that, despite Microsoft's claims of having replaced 80-plus percent of the code in Windows Vista (and thus the same amount or more in Windows 7) when compared to Windows XP, many of the exploits that have recently come out target all variants of Windows from 2000 onwards. That would indicate that the key exploitable components are still largely the same ones that have existed since the Windows 2000 days (in other words, of the 80%+ of code replaced, the key exploitable/broken sections are not in that 80%). That makes public perception even worse.

The problem is, none of us will ever know the true reasons behind this, or whether the perceptions out there are warranted. The problem is, it's been proven that once these exploits are released to the public, Microsoft somehow manages patches very quickly, while on the other hand (when exploits are not announced, or in Microsoft's responses to Google's 60-day timeframe) the company claims such speed is not possible due to OS complexity. The two situations are at odds, on opposite ends of the spectrum. That makes it seem like the reason more time is needed is solely that (a) Microsoft does not want to spend the money (i.e., developer and research time) to release such fixes in a timely manner, or (b) Microsoft simply has no interest in spending the money (i.e., time, etc.) on fixing any issue that isn't widely known and hurting its (already tarnished) reputation in this area. Again, that may not be the reality of the situation, but those are the perceptions its actions cause.

The fact that they CAN release a patch very quickly when pressured by an exploit's public release furthers this scenario and creates even more ill will, especially when the researchers who submit such information (see earlier /. stories on these subjects from the last week or two) are given no mitigation date, or not even an indication that Microsoft intends to fix things.

That begs the question of whether Microsoft deserves different treatment in an effort to force them to get the ball rolling on such things. Judging by the perceptions they have caused? Yes, most definitely. Judging by the reality of what is really going on behind the scenes, and how difficult making and testing these patches may be? Who knows?

Regardless, as Microsoft KEEPS claiming that each new OS is "the most secure" version of Windows yet, and that the previous ones all sucked in that regard, coupled with the fact that many of these exploits affect those "previous sucky" versions, I do think their desire to hide exploit information, and then fix the flaws at their leisure, is an absurd one. They have not proven in any way that they deserve such treatment. The converse is actually true: public disclosure seems to be the only method of motivating them to release patches in a timely fashion. They seem to have proven that, with such motivation, they can do what they claim is otherwise impossible and create such patches in much shorter timeframes.

Changing the name of their otherwise-unchanged disclosure guidelines in no way helps that situation either. It may actually hurt it, because to me what it says is: "You are all a bunch of idiots... if we change the name of this, while not changing the policy/guideline, we will trick you into changing your actions." From reading Slashdot and comments in a plethora of other places, I am sure I am not the only one who took their recent change that way. That means they have alienated or angered even more security researchers, and may in effect be causing an increase in the exact thing they are trying to stop: researchers setting their own guidelines for how and when they will release such information to the public, with no consideration for what Microsoft wants in the matter.

Re:Tell both (0)

Anonymous Coward | more than 3 years ago | (#33045264)

Neither. Organize an auction instead - this is the only way somebody will ever derive something good from a vuln.

Deadline isn't always feasible (2, Interesting)

Anonymous Coward | more than 3 years ago | (#33044710)

I agree with MS on this: a deadline isn't always feasible. They have to test on many different levels before they can release an update. Google just used Ormandy to get some positive PR for themselves. Frankly, from my point of view, Google screwed this one up; Ormandy or any other researcher cannot hold companies at gunpoint to release a fix ASAP. If he had given them a 60-day disclosure window and MS had still not provided any response, then releasing the bug details would make sense. The way Ormandy and Google acted on this was cheap.

Re:Deadline isn't always feasible (1)

Todd Knarr (15451) | more than 3 years ago | (#33046564)

A deadline's always feasible. It may not be possible to come up with a clean fix in a short timeframe, but you can always come up with either a workaround or something the users can do to mitigate the damage. This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked, so I'm not overly concerned about their public-relations heartburn.

Re:Deadline isn't always feasible (1)

John Hasler (414242) | more than 3 years ago | (#33047322)

A deadline's always feasible. It may not be possible to come up with a clean fix in a short timeframe, but you can always come up with either a workaround or something the users can do to mitigate the damage.

So publish the workaround along with the vulnerability.

Re:Deadline isn't always feasible (3, Interesting)

Rockoon (1252108) | more than 3 years ago | (#33047700)

This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked, so I'm not overly concerned about their public-relations heartburn.

If you are not concerned about the vendor's public relations, then why release at all? It seems to me that the justification for release is precisely that the researchers ARE concerned about the vendor's public relations... intent on harming them.

It's end users that don't follow security issues who are most at risk, and the release of exploits hurts them pretty much directly and immediately.

If it's a critical bug in software that a typical grandma (and other non-geeks) uses, I claim that it is ALWAYS irresponsible to release the details of the exploit into the wild. Every single time, no matter how much time has passed waiting for a fix. This belief is formulated on the premise that the vendor's public relations don't mean shit either way; it's the end users that mean something.

It is always right (-1, Flamebait)

Anonymous Coward | more than 3 years ago | (#33044716)

Grow some /b/alls. Computer nerds are fucking bastards who need to be taught a lesson for eating cheatos and fapping to evangleion hentai all day. By releasing flaws to their crappy software they will have to get off their fat asses or they will be sent back to their basement in shame.

Re:It is always right (1)

couchslug (175151) | more than 3 years ago | (#33045910)

"they will have to get off their fat asses or they will be sent back to their basement in shame."

What drivel! I'd never leave Mumsy's sweet, sweet basement in the first place.

It's not fair (3, Interesting)

Anonymous Coward | more than 3 years ago | (#33044762)

to threaten the guys who find vulnerabilities with jail time or fees. I uncovered a major security flaw in a piece of software (it allowed an attacker to spawn a shell as root with extreme ease) and also found a way to circumvent the DRM, and what happened? I got stiffed. Instead of FIXING the problem (which is still intact to this day), the company attempted to sue for copyright infringement, among a few other "charges." Luckily, I had a great lawyer and I had documented EVERYTHING from 0 to 60. I was lucky.

This makes me sick. One minute, corporations are talking about providing "rewards" for unearthing flaws/vulnerabilities, and the next, they are trying to sue for every penny. If it wasn't for us, their systems wouldn't last a week without some script kiddie coming along and bringing the whole thing to its knees.

Who is at fault? (1, Interesting)

digitalhermit (113459) | more than 3 years ago | (#33044776)

It's interesting that the talk centers on the responsibility of the researcher and the vendor, but little attention is paid to the responsibility of the user. Are they as liable? For example, if a manufacturer sells a door lock with flaws but the user keeps the windows (ha) open, and someone on the street shouts, "Dude, you're using a Schock Pax H23 and it can be opened with a loud scream!" who is responsible?

As primarily a Linux user, I used to think that the tools just didn't exist on Windows to see what the system is doing. On my Linux box, I can do a "netstat -tlnw", an "iptables -L", or a "fuser -n tcp xxx" and get lots of information. Using that, I can disable services, lock them off with TCP Wrappers or iptables, or even sandbox them very easily.

When it was necessary to use a Windows XP system on a relatively hostile network, I was worried. Then I started poking around. Netstat is available on Windows and does the same thing. There's a process listing. There's even a grep workalike ('find', of all things). With those tools it's possible to get a good picture of what's happening on the system; a rough side-by-side is sketched below.
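For example, a quick inventory on each platform might look something like this (a rough sketch; the Linux commands are the ones mentioned above, and the Windows lines are the XP-era equivalents run from cmd.exe):

  On Linux:
    netstat -tlnw        # numeric list of listening TCP sockets
    iptables -L -n       # current firewall rules
    fuser -n tcp 80      # which PID holds TCP port 80

  On Windows XP:
    netstat -ano                       (listeners plus the owning PID)
    tasklist /svc                      (processes and the services they host)
    netstat -ano | find "LISTENING"    (the grep workalike in action)

Neither list is exhaustive, but either one is enough to spot an unexpected listener.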

The gist of this post is that, though I enjoy the expanding market share of Linux, I worry that it brings hordes of users who do not make the effort to know their systems. Should they? I think so. It's similar to carrying a firearm: it's great to be able to do so, but you must be responsible about it when you do carry.

Re:Who is at fault? (0)

Anonymous Coward | more than 3 years ago | (#33045716)

The gist of this post is that though I enjoy the expanding marketshare of Linux, I am worried that it brings hordes of users that do not make the effort to know their systems. Should they? I think so. It's similar to carrying a firearm. It's great to be able to do so, but you must be responsible about it when you do carry.

Why is this particularly true of Linux? If running a networked OS is like carrying a firearm, then Linux comes with all sorts of trigger locks and other safety features that are missing from Windows. Of course a stupid or irresponsible user is a security hole regardless of OS. But "users that do not make the effort to know their systems" are automatically a bigger risk when the system is made by Microsoft.

Re:Who is at fault? (1)

digitalhermit (113459) | more than 3 years ago | (#33045892)

Hell, that's exactly my point. It doesn't matter a goddamn bit what OS you're on, and that was the point of the post. There's no excuse.

Re:Who is at fault? (0)

Anonymous Coward | more than 3 years ago | (#33047442)

...Linux comes with all sorts of trigger locks and other safety features that are missing from Windows.

Like?

Delayed disclosure is a courtesy (3, Insightful)

Rogerborg (306625) | more than 3 years ago | (#33044778)

Never, ever a responsibility. You didn't write the bug, you didn't miss it in testing, you didn't release it. You owe the developer nothing.

The only ethical consideration should be your sole judgement about the best method to get a fix in the hands of vulnerable users.

You don't like that, Microsoft? Then do your own vulnerability testing and don't release software with vulnerabilities: the problem goes away overnight. Until then, sit down, shut up, grow up, and quit your bitching about being caught with your pants down.

Re:Delayed disclosure is a courtesy (1)

thePowerOfGrayskull (905905) | more than 3 years ago | (#33044972)

You owe the developer nothing.

The flaw in this thinking is that it's not the developer who is ultimately harmed by a disclosure... and I rather doubt that the x-million users of the software will appreciate that you released the information for their own ultimate good.

Re:Delayed disclosure is a courtesy (2, Insightful)

mOdQuArK! (87332) | more than 3 years ago | (#33045294)

Technically speaking, you don't owe the other users anything either - it's still a matter of courtesy.

Re:Delayed disclosure is a courtesy (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33045366)

That depends on your life philosophy.

In my opinion you owe your fellow human beings a lot more than mere courtesy, but it appears I am quickly joining a minority.

Re:Delayed disclosure is a courtesy (1)

mea37 (1201159) | more than 3 years ago | (#33045886)

Not owing someone something doesn't mean you can act without regard to that person. I don't owe you anything, but I still have to stop at a crosswalk if you're walking through it.

The question isn't "do I owe you anything?" as though disclosure were inaction and delaying disclosure were action I might undertake as a favor. Disclosure itself is an action, and the question is "if I do this, am I liable for resulting harm that may befall you?"

I know you want to say "no, it's the fault of whoever wrote the software in the first place, and of the guy who actually committed the attack." Welcome to the real world of shared blame. If a court were to find that a specific attack occurred because of your disclosure and would not have occurred otherwise, you may be held partially liable to that attack's victim, even if your disclosure ultimately prevented many more attacks.

If you want to pressure a company to fix vulnerabilities, educate the public about the risk potential associated with their product. Put weapons in the enemy's hands at your own peril; you can assume they'd eventually figure it out without your help, but I wouldn't assume a court will agree with that assessment.

Re:Delayed disclosure is a courtesy (1)

John Hasler (414242) | more than 3 years ago | (#33046356)

> If a court were to find that a specific attack occurred because of your
> disclosure and would not have occurred otherwise, you may be held partially
> liable to that attack's victim even if your disclosure ultimately prevented
> many more attacks.

Not likely in the USA. Absent a contract you have no duty not to utter true statements.

Re:Delayed disclosure is a courtesy (1)

mea37 (1201159) | more than 3 years ago | (#33046636)

Interesting... are you talking about how things are, or how you want them to be?

The reason I ask is, if such a blanket statement were a true description of civil liability, I don't think the EFF would spend so much time talking about how to limit your liability when you publish a vulnerability (i.e. utter true statements).

For example... [eff.org]

What I'd really like to see is a citation to some case history, since little else is meaningful in predicting how civil liability will play out; but I've been unable to find one.

Re:Delayed disclosure is a courtesy (1)

John Hasler (414242) | more than 3 years ago | (#33047154)

I'm talking about disclosing vulnerabilities, not publishing exploit code. From your link:

Publication of truthful information is protected by the First Amendment. Both source code and object code are also protected speech. Therefore truthful vulnerability information or proof of concept code are constitutionally protected. This protection, however, is not absolute. Rather, it means that legal restrictions on publishing vulnerability reports must be viewpoint-neutral and narrowly tailored. Practically speaking, this means it is very rare for the publication of non-code information to lead to legal liability.

Re:Delayed disclosure is a courtesy (0)

Anonymous Coward | more than 3 years ago | (#33046366)

"Not owing someone something, doesn't mean you can act without regard to that person. I don't owe you anything, but I still have to stop at a crosswalk if you're walking through it."

This is not because of courtesy or regard toward the person in most cases. In most cases this is the law, or to put a fine point on it, the law that says if you run down a person in a crosswalk you will probably spend much of your life locked up in a small room with other people willing to anally rape you on a daily basis.

There's no such law regarding the disclosures which are the subject of this thread, and if there were such laws, the discussion would be about whether those laws are just.

Re:Delayed disclosure is a courtesy (0)

Anonymous Coward | more than 3 years ago | (#33045342)

There is no flaw.

It is not just "I have no responsibilities; Microsoft can suffer because they made a product full of bugs." You think the users are innocent? I have no responsibility towards users who buy known-buggy products from Microsoft either. (Or whomever they buy buggy products from.)

They didn't know about this particular bug, of course. But Microsoft products have a long history of being vulnerable to all sorts of viruses and malware; vulnerabilities that other products mostly don't have. I have no sympathy for those who decide to buy the faulty stuff over and over and over again, making it a "standard" affecting everyone.

If you find a bug - tell the black hats first. It is more fun to watch. :-/

Re:Delayed disclosure is a courtesy (1)

Weezul (52464) | more than 3 years ago | (#33045726)

A security researcher has no particular duty to users either, but some may assume one for themselves. If so, whether to release depends upon whether you suspect that exploits exist in the wild.

If bugs are actively being exploited, they are most likely being exploited by the worst people, so publicly enabling all the mostly harmless script kiddies will help matters by forcing the developer to issue faster fixes, possibly in multiple stages. If a bug isn't being exploited, fine: just tell the developer, and publish in 60 days.

Re:Delayed disclosure is a courtesy (0)

Anonymous Coward | more than 3 years ago | (#33045620)

Never, ever a responsibility. You didn't write the bug, you didn't miss it in testing, you didn't release it. You owe the developer nothing.

You don't like that, Microsoft?....

Hey Mr Arrogant Asshat,
Microsoft isn't the only one that has let bugs slip through; pretty much every software vendor has. If you don't at least flag a warning before you make a disclosure, you are exposing a greater audience to a given problem. Get off your ego trip and do the 'right' thing.

Re:Delayed disclosure is a courtesy (0)

Anonymous Coward | more than 3 years ago | (#33045952)

Right, and when someone cuts you off in traffic, the responsible thing to do is to follow them to their house, wait until they aren't looking and then cut their brake lines. I mean, it's not your fault. You didn't raise them. You didn't teach them about responsible driving. You didn't give them a licence when they weren't up to your impossibly high standards. You owe them nothing, right?

This isn't about being courteous. This is about the harm you do to others. While the above example deviates from the point, you aren't just hurting MS in your quest to screw them; you're hurting everyone who uses MS software. You saying "They deserved it" is like saying the people who get hit by the car with the severed brake lines that you cut deserve to be hit because they chose to live in the same neighbourhood as the BadDriver. If you release details of an exploitable flaw irresponsibly, you should be held as culpable as anyone who exploits it.

If being responsible is too hard for you, sit down, shut up, and let the grown ups do their work.

Re:Delayed disclosure is a courtesy (1)

Sheik Yerbouti (96423) | more than 3 years ago | (#33046230)

If you find a brand-new vulnerability and go straight to IRC with it, you are not just hurting Microsoft or sticking it to the man. You're hurting everyone who runs that software. You are also creating bigger botnets, which can then be further used in DDoS attacks and extortion attempts, etc. So in effect you are damaging the Internet and making it a bigger cesspool. There are ethical issues around vulnerability disclosure. You strike me as the type that collects bots and so probably doesn't care, but the rest of us do.

Re:Delayed disclosure is a courtesy (1)

Todd Knarr (15451) | more than 3 years ago | (#33046496)

There's also another ethical issue: keeping me (as an administrator of vulnerable systems) in the dark about the vulnerability puts my systems at risk and prevents me from protecting them. You are hurting me in a very direct way by not disclosing the problem to me. If I know the problem exists, I can, for instance, shut down the vulnerable services (if they aren't necessary for my systems to operate), block access to those services at the firewall, and/or replace the vulnerable software with equivalent software that isn't vulnerable; see the sketch below. I can't do any of this unless I know the vulnerability exists. People who want me kept in the dark about the problem strike me as the type who at least don't care how much damage the rest of us suffer, and possibly have a vested interest in having vulnerable systems exist.
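For example, the stopgap on a Linux host for a hypothetical vulnerable service "foo" listening on TCP 8080 might look like this (a sketch only, assuming a classic SysV-init system with iptables; the service name and port are invented):

  service foo stop                                 # stop the vulnerable service right now
  chkconfig foo off                                # keep it from coming back at the next boot
  iptables -A INPUT -p tcp --dport 8080 -j DROP    # block the port anyway, belt and braces

None of that is possible if nobody tells me the service is vulnerable in the first place.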

Re:Delayed disclosure is a courtesy (1)

jeff4747 (256583) | more than 3 years ago | (#33047486)

I can, for instance, shut down the vulnerable services (if they aren't necessary for my systems to operate),

Why are the services running if they aren't necessary?

Someone should have presented a business case for every process running on the server. Some of these are trivial ("without a kernel, the server won't run"). But there shouldn't be any 'nice to have' or 'may come in handy one day' services running.

As soon as you can use e-mail (1)

bluefoxlucid (723572) | more than 3 years ago | (#33044848)

Once you suspect a security flaw, raise it on a public mailing list with developers on it. Ask them for help tracking down the issue, until you as a group determine whether you've discovered a hole and get a proof of concept running, all in public discussion.

Re:As soon as you can use e-mail (1)

SwashbucklingCowboy (727629) | more than 3 years ago | (#33045258)

Yeah, help the malware writers by telling them where to look for issues.

Re:As soon as you can use e-mail (1)

Zerth (26112) | more than 3 years ago | (#33047554)

You are assuming that malware writers don't already know.

You only know that the public is ignorant of it and thus can't take measures to prevent it, such as uninstalling the broken software or not opening vulnerable file types.

tell it to the public (1, Insightful)

Anonymous Coward | more than 3 years ago | (#33044862)

No one is bright enough to find a security hole that couldn't have been discovered elsewhere before. So it's pretty likely the flaw is either known to the vendor, who might not have seen the need to fix it, or known to an attacker who already uses the flaw and just hasn't appeared (yet) on the radar of any researcher or the vendor. And since you yourself might be monitored by somebody else, your finding might already be in the open that way. So it makes no sense to keep the public uninformed.

cb

Bad guys are happy with delays (2, Insightful)

koinu (472851) | more than 3 years ago | (#33045302)

Do not give bad guys the chance to learn about a flaw earlier than the users who are affected. If you don't publish the flaw, there is a certain possibility that it will be sold on black markets and kept secret, to be used against customers. You can see that full-disclosure groups are targets of commercial crackers. Full disclosure is like destroying the business of criminals.

A customer should always be aware of a flaw and know how to protect himself against it.

There is no need for exploit code. You should publish BEFORE having a PoC, to warn as early as possible (but this is pretty rare, because having a PoC is usually the first indication that a flaw exists). It would also help to give as much information as possible on how to protect against attacks (fixes/patches, what to avoid, what to disable, how to minimize the risk).

Being allowed (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33045310)

The problem with "responsible disclosure" is being allowed to do it. Reporting a bug to a vendor might get you a "fix" response (best case), might get you ignored (average case), or might get you hit with a Gag Order lawsuit (worst case). Disclosing the bug after the worst case can get you arrested and even if you manage to avoid jail, you have spent a lot of money in defending yourself. This is the reason behind the full disclosure movement, to prevent vendors from gagging researchers who discovered bugs (security by obscurity), and allow Administrators to work around bugs by disabling the service(s), firewalling, sandboxing, etc. It's been said many times, but is worth repeating, that just because the bug hasn't been put on bugtrac, doesn't mean the black hats don't know about it. It's also worth noting that bugs that have been in existance for years, were only addressed once they were disclosed to the community.

When Is It Right To Go Public With Security Flaws? (1)

John Hasler (414242) | more than 3 years ago | (#33045402)

Whenever you damn well please unless you are contractually obligated to do otherwise.

Re:When Is It Right To Go Public With Security Fla (1)

fishbowl (7759) | more than 3 years ago | (#33046424)

>Whenever you damn well please unless you are contractually obligated to do otherwise.

And "contractually obligated" necessarily involves an exchange of valuable consideration (e.g., they give you money in return for your agreement to keep your mouth shut). In general, software EULAs are not contracts for exactly this reason.

Depends on the security risk (1)

houghi (78078) | more than 3 years ago | (#33045574)

As others have already mentioned, first see how the people who released the software react.

How long you need to wait depends greatly on their response and the security risk involved. If you found it, someone else might have found it as well. If you find a security hole in ssh and people do not react, I would say a week after notifying the ssh maintainers with a proof of concept and getting no response.

Important security risks generally take 2-3 days. The reason for this is that that way all distros can get the patch out at the same moment, reducing the vulnerability of all. If the makers answer, it will depend on how they react, and again on the security risk.

considering the state of those in power (1)

FudRucker (866063) | more than 3 years ago | (#33045852)

If I ever run across a vulnerability in any closed-source software, I will submit that information anonymously, to prevent the authorities from treating me as if I were a criminal or terrorist. The only exception to that rule is if I find a vulnerability in something licensed under the GNU GPL; then I will simply submit a bug report through the regular channels or email the author of the software directly.

Full disclosure (1)

miffo.swe (547642) | more than 3 years ago | (#33045890)

As long as vendors get a grace period, or in some cases forever, as their timeframe, they will have little incentive to fix the real issue.

The discussion about full disclosure vs. responsible disclosure is a side issue to the real questions. Why don't vendors do proper testing before releasing software? Why do they refrain from fixing bugs they fully know about? Why should researchers take any responsibility for the vendors' customers when it's obvious the vendors won't think twice about security or QA?

It's not like there is any shortage of tools for finding security holes today. It's apparent that many of the biggest vendors won't even bother testing their stuff with simple fuzzers.
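To make "simple fuzzer" concrete, here is a minimal dumb-fuzzing sketch in Python; the target binary name is hypothetical, and real fuzzing setups add coverage feedback, corpus management and sandboxing on top of this:

    import os
    import random
    import subprocess

    # Naive "dumb" fuzzer: throw random bytes at a parser and keep any
    # input that makes it crash. "./some_parser" is a hypothetical target
    # binary that takes an input file as its only argument.
    for i in range(1000):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(1, 4096)))
        with open("fuzz_input.bin", "wb") as f:
            f.write(data)
        result = subprocess.run(["./some_parser", "fuzz_input.bin"],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        if result.returncode < 0:  # negative: killed by a signal, e.g. SIGSEGV
            os.rename("fuzz_input.bin", "crash_%d.bin" % i)
            print("crash #%d: signal %d" % (i, -result.returncode))

Twenty lines of script is a far lower bar than what vendors claim their QA processes already cover.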

The reason vendors don't mind releasing bad crap is that they get away with it. If helping the users is the goal, information is power. When people start to see just how crappy some products are, they can make an informed decision. Then and only then will people start shopping for better alternatives and demanding better security.

Right now most people have swallowed the propaganda that today's systems have the best security money can buy, when in reality we are not much better off than before. Security has risen from abysmal to crappy, but the hackers have risen with it. The problem is that the hackers, the real ones, are miles ahead of the security community and the vendors.

PDF + wikileaks (0)

Anonymous Coward | more than 3 years ago | (#33046280)

Stop being a pansy and just release the flaw on Wikileaks or some similar site.

If everyone stopped giving a crap about the company with the flaw in the first place and took an open-source attitude to releasing exploits, maybe flaws would get fixed a bit quicker once MS (or any other vendor) sees that it is patching 3 critical flaws a week while people are releasing 25 a month.

Maybe these "shitty" software companies would start to realize that they need to spend more money on fixing flaws than on feeding the marketing slush fund.

Vendor, then users (2, Interesting)

Todd Knarr (15451) | more than 3 years ago | (#33046306)

In most cases you warn the vendor first, providing complete details including exploit code so they have no excuse for being unable to duplicate the problem. If the vendor won't acknowledge your report within a reasonable time (say 7 days), won't commit to a timeline for a fix, workaround or mitigation strategy for users within a reasonable time (say 14 days from acknowledgement, with the deadline 30-90 days out depending on severity), or fails to meet that deadline, then you disclose to users with full details, exploit code (so the problem can be independently verified without relying on your word that it exists) and a recommended mitigation strategy. Demanding payment for the report is never appropriate unless the vendor has publicly committed to a bug bounty and your demand matches what they committed to.

There'd be occasional exceptions to the above. If, for instance, the vulnerability is theoretical and you can't create actual exploit code for it, demanding that the vendor fix it is inappropriate (by the same token, though, it's far less of a problem to discuss the issue in public if it truly can't be feasibly exploited). The deadline should be more flexible for less severe vulnerabilities. And if the vendor has a track record of responding inappropriately to reports (e.g. by threatening legal action against the researcher), immediate anonymous disclosure may be a better approach.
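For what it's worth, the schedule above is mechanical enough to write down. A minimal sketch in Python, using the day counts suggested in this comment (the function and field names are mine, not any standard):

    from datetime import date, timedelta

    ACK_DAYS = 7                 # vendor should acknowledge within a week
    COMMIT_DAYS = 14             # ...and commit to a timeline within two
    FIX_WINDOW = {"critical": 30, "high": 60, "low": 90}  # days, by severity

    def disclosure_date(reported, acked=None, committed=False, severity="high"):
        """Earliest date at which full public disclosure seems justified."""
        if acked is None:
            # Never acknowledged: disclose once the ack window lapses.
            return reported + timedelta(days=ACK_DAYS)
        if not committed:
            # Acknowledged, but no committed fix timeline within the window.
            return acked + timedelta(days=COMMIT_DAYS)
        # Vendor committed to a timeline: hold until the severity deadline.
        return acked + timedelta(days=FIX_WINDOW[severity])

    # Example: reported July 1st, acknowledged July 5th, timeline committed.
    print(disclosure_date(date(2010, 7, 1), date(2010, 7, 5), True, "critical"))

Writing the policy down like this also makes it easy to tell the vendor, up front, exactly when disclosure will happen.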

What if the exploit is complicated? (1)

Myria (562655) | more than 3 years ago | (#33047044)

I found a ring 0 exploit in a popular operating system, whereby any unprivileged user-mode process could get ring 0 access. It's been about a month since I told the developer, and they haven't said when a fix would be coming.

It's a ring 0 exploit, but actually turning it into a root exploit is annoyingly complex due to the design of this operating system. There is nothing computer-theoretic stopping it, just complexity in the way the page tables work. The exploit puts ring 0 under your control very easily, but going from ring 0 to user-mode root this way is difficult.

Notify - Force Verify (1)

sarkeizen (106737) | more than 3 years ago | (#33046334)

The no-code-publish approach seems reasonable to me: publish the flaw to everyone, including CERT and the vendor, but include no code or exploit in anything publicly readable. Give the vendor your exploit code and a deadline after which you will publish the exploit. If no fix appears by the deadline, then you publish.
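One way to make that deadline verifiable (my own addition, not part of the approach as stated): publish a cryptographic hash of the exploit alongside the advisory, so that when the code is eventually released anyone can check it is the same code the vendor received months earlier. A minimal sketch, assuming the exploit lives in a hypothetical exploit_poc.tar.gz:

    import hashlib

    # Publish only this digest now; release the file itself after the deadline.
    with open("exploit_poc.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    print("SHA-256 commitment for the public advisory:", digest)

That way the vendor can't later claim the published exploit was cooked up after the fact.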

"Experts" (0)

Anonymous Coward | more than 3 years ago | (#33046994)

I love all the people posting as if they ever discovered an exploit, or will ever discover one. Here's my advice: if you discover the exploit, you're enough of a pro that your instincts on how to deal with it are a lot more informed than the advice you're getting.

My 2 bits (1)

blair1q (305137) | more than 3 years ago | (#33047252)

If you're really a whitehat, tell the vendor first. This keeps the exploit away from blackhats while the vendor fixes the hole. Security through obscurity works, right up until the moment it doesn't. So if the vendor does not fix the hole quickly, and you suspect the blackhats are about to discover it, then you need to inform the people who are vulnerable, if possible without broadcasting it to the blackhats and script kiddies. Yes, that's rarely possible, but when it is possible it's the right thing to do first. The only reason to broadcast a hole is that you know the blackhats are already using it and it affects people you cannot contact any other way.

Any other protocol leaves you open to questions of character and intent. Either you're an attention whore or you're a blackhat looking to cause trouble.

I'm hopeful this post will get the shit modded out of it, preferably as "-1 Redundant".

Ethics and practicality (1)

shentino (1139071) | more than 3 years ago | (#33047400)

Giving the vendor an opportunity to apply a fix is all fine and dandy, but any researcher must remember this:

Real blackhats don't wait around for a patch before they go on the prowl for systems to exploit. And they don't announce their discoveries in public.

Vendors are not only racing the "egotistic researcher" looking to score points by pulling their pants down; they are also racing the crackers looking not just to pull their pants down but to rape them in the ass.

No matter who is on what side of the security debate, the clock is always ticking.

Easy decision (1)

warGod3 (198094) | more than 3 years ago | (#33047654)

Let FOX News know and tell them that it will open up millions of computers to potential identity theft scams, destroy the integrity of national security, and could cause you to be impotent.

Then see how fast it gets fixed.

Release it Immediately (1)

npsimons (32752) | more than 3 years ago | (#33047884)

Publish details about the bug as soon as you find it; publish an exploit as soon as possible. If every discoverer of security flaws did this, software devs would learn very quickly to have second thoughts about releasing unchecked code. I say that as a software dev.

Seriously, you think you're smarter than everyone else? That you're the only one who discovered a flaw? Puh-lease. The Chinese government alone is probably throwing more manpower at finding flaws in US software than there are developers in the US. By releasing the info, you're doing the world a favor. If the vendor doesn't fix it, the customers will move to something better. End of problem.

NOT AGAIN (0)

Anonymous Coward | more than 3 years ago | (#33048160)

Do we have to have this pointless conversation again and again and again here? Blah blah release blah blah white hat blah blah vendor blah.
