
Microsoft Makes Major Shift In Disclosure Policy

timothy posted more than 4 years ago | from the tread-water-faster dept.

Microsoft 65

Trailrunner7 writes "Microsoft is changing the way in which it handles vulnerability disclosures, now moving to a model it calls coordinated vulnerability disclosure, in which the researcher and the vendor work together to verify a vulnerability and allow ample time for a patch. However, the new philosophy also recognizes that if there are attacks already happening, it may be necessary to release details of the flaw even before a patch is ready. The new CVD strategy relies on researchers to report vulnerabilities either directly to a vendor or to a trusted third party, such as a CERT-CC, who will then report it to the vendor. The finder and the vendor would then try to agree on a disclosure timeline and work from there." Here's Microsoft's announcement of the new strategy.


Paging Tavis Ormandy, Paging Tavis Ormandy! (5, Insightful)

eldavojohn (898314) | more than 4 years ago | (#32994146)

In response to the second step in the Coordinated Vulnerability Disclosure ("Step 2: Hurry Up and Wait"), I've printed several copies of the CVD on quadruple ply tissue paper and stocked all the restrooms with it. I've also prepared a special four course meal for Mr. Ormandy [slashdot.org] consisting of Taco Bell, a cup of coffee, a cigarette and a spoonful of castor oil.

Mr. Ormandy, I think you know what to do. I really found it amusing that they called the blog posting "Bringing Balance to the Force" when it looks to be completely defined by Microsoft with little or no input from the community.

In other words... (1)

Lead Butthead (321013) | more than 4 years ago | (#32994420)

"Same old sh_t, different day."

Re:In other words... (0)

Anonymous Coward | more than 4 years ago | (#32996422)

A whole press release just to reiterate "It's not a bug, it's a feature."

Re:Paging Tavis Ormandy, Paging Tavis Ormandy! (1)

BatGnat (1568391) | more than 4 years ago | (#32997424)

How many Microsoft technicians does it take to change a light bulb?

None. They just redefine darkness as the new standard...

I love that one...

Re:Paging Tavis Ormandy, Paging Tavis Ormandy! (1)

darkpixel2k (623900) | more than 4 years ago | (#32998764)

How many Microsoft technicians does it take to change a light bulb?
None. They just redefine darkness as the new standard...
I love that one...

The day Microsoft builds a product that doesn't suck is the day they build a vacuum cleaner.

I love that one more...

I don't get it? (0, Troll)

Charliemopps (1157495) | more than 4 years ago | (#32994288)

Why would anyone report a vulnerability to Microsoft? Unless they start paying for the info, I say post it online the second you find it and to hell with Microsoft.

Re:I don't get it? (1)

mark72005 (1233572) | more than 4 years ago | (#32994314)

A general sense of moral obligation not to aid and abet criminal activity?

Re:I don't get it? (1)

Monkeedude1212 (1560403) | more than 4 years ago | (#32994414)

On the internet? That'd be a first.

Re:I don't get it? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#32994622)

Right... so that is motivation NOT to help M$...
What is the motivation to report to them?

Microsoft has an obligation to protect their customers from security vulnerabilities by responding to them, one they abdicate constantly.

Security researchers have the obligation that ANY academics have. Tell the truth, show your work.

Re:I don't get it? (1)

mark72005 (1233572) | more than 4 years ago | (#32994710)

I'm not saying it's the public's job to troubleshoot their shoddy code and develop fixes.

I'm just saying I feel it IS the public's responsibility not to make potentially dangerous information available to people with malicious intent.

I have no love for MS. I just feel everyone is better off with "Hey you morons, look at the latest exploit" instead of "Hey, general public including innumerable black hats, look at the latest exploit"

Re:I don't get it? (2, Insightful)

Charliemopps (1157495) | more than 4 years ago | (#32994850)

The quickest way to protect the public from malicious intent would be to get them to all stop using Microsoft products immediately. Everyone's sitting in a sinking lifeboat and you're quietly warning the captain about each leak you find so he can stick some chewing gum on it. What you really should be doing is screaming "Look at all the Fing holes in this boat!! Everyone get in that other, non-sinking boat called Linux over there!!!"

Re:I don't get it? (1, Insightful)

Mister Whirly (964219) | more than 4 years ago | (#32995334)

Yes, then the target would be the next biggest OS down the chain. The problem isn't "solved", it is just moved. Much like how surveillance cameras don't really cut down the crime rates, they just move them to a different area. If Linux had more of a presence it would be as big of a target as Microsoft. MS is just the current "low hanging fruit". Sorry, but the solution to security problems should never be "switch your operating system and every piece of software you currently use".

Re:I don't get it? (3, Interesting)

agrif (960591) | more than 4 years ago | (#32995662)

Switching the majority OS to GNU/Linux would have one immediate and obvious benefit: the source is widely available and widely modifiable. If we find a vulnerability, it can be diagnosed and patched immediately, without having to wait for a corporation's blessing. Hell, you don't even have to wait for the kernel team's blessing, or any other governing entity. Just post the patch and tell people about it!

It used to be clear that *nix systems were more secure, because they were actual multi-user systems. Nowadays, it's less clear. I'm certain a properly set up SELinux system is still miles more secure than Windows 7, but it's unlikely a common user will have that. However, even if there is no security advantage, I know this: Linux may not be more secure, but it is certainly easier to keep secure.

Re:I don't get it? (0, Troll)

Anonymous Coward | more than 4 years ago | (#32996472)

First off, the majority of people wouldn't be able to immediately diagnose and patch, because they have no idea how to do that. Second, because Linux is open source you would be less secure, because it is easier to find flaws and backdoors in a system whose source code you can view. And since Linux uses a General Public License, if they request to see your source you have to give it to them, because it requires that derivative works also fall under GNU's General Public License. The only way to truly secure yourself is to disconnect.

Re:I don't get it? (2, Insightful)

agrif (960591) | more than 4 years ago | (#32996720)

I fear that you are a troll. Nonetheless...

First off, the majority of people wouldn't be able to immediately diagnose and patch, because they have no idea how to do that.

Yes, but this does not negate the fact that there are many more eyes looking for flaws. A minority of a ton of people can still be a ton of people. The fact that anybody could diagnose and patch immediately is the important part.

Second, because Linux is open source you would be less secure, because it is easier to find flaws and backdoors in a system whose source code you can view.

Yes, and not all of those who find these flaws would exploit them. Many would fix them. Also, as pointed out many times on Slashdot, security through obscurity is not security at all.

And since Linux uses a General Public License, if they request to see your source you have to give it to them, because it requires that derivative works also fall under GNU's General Public License.

This is a misinformed statement. The GPL requires that publicly distributed derivative works be distributed under the GPL, but not privately used derivative works. Moreover, the GPL only requires that you provide source code to those to whom you distribute the work. It's just a happy coincidence that most free (GPL) software also happens to be free (as in money).

The only way to truly secure yourself is to disconnect.

Truer words have never been spoken. Why is it, again, that we need a cybersecurity policy when we can just disconnect the freaking high-risk computers from the freaking internet?

You won't be so sure of SELinux vs. Windows here (0)

Anonymous Coward | more than 4 years ago | (#33000084)

I'm certain a properly set up SELinux system is still miles more secure than Windows 7. -- agrif (960591), #32995662

Per my subject line above, your quote, and the results at this URL (which uses the multi-platform CIS Tool as its security performance gauge), your "surety" will be badly shaken ("not stirred") -> http://forums.theplanet.com/index.php?showtopic=89123

Re:I don't get it? (1)

Charliemopps (1157495) | more than 4 years ago | (#32997674)

Except for the fact that Linux is simply more secure than Windows by several orders of magnitude. The fact that you can set up a Windows-based machine without a login and still have full admin rights is proof enough of the serious and deep-rooted conceptual problems with its design. Windows is built, from the ground up, to sell Windows. Nothing more.

Re:I don't get it? (1)

Mister Whirly (964219) | more than 4 years ago | (#32997928)

If you are relying on your operating system for security, you are taking the wrong approach to security. All major OSes have had exploitable flaws. Security is not software, nor anything you can buy or install; it is a set of policies, procedures, and practices. The actual software involved is irrelevant.

Re:I don't get it? (0)

Anonymous Coward | more than 4 years ago | (#32997980)

This overall idea is a great idea in theory. Probably the way we should be looking at it, in fact. However, it has not been proven. It certainly is possible that Linux would not have as many problems as a security target as Windows. It's also possible that it could have more. To just blindly assume it's going to be the same is stupid, though. Really, I would expect it to be worse but hope it would be better.

With Linux, however, I think there's good reason to have high hopes that it would be better. Just its open-source nature is a pretty damn good reason. Yes, people with malicious intent could search through the source code for holes, but because the source is open, there are potentially more people able to fix them. It's also therefore easier for people with good intent to search for those security problems and get them patched before anyone with malicious intent even gets to them. With Windows, you're just kind of shooting in the dark on both sides and trusting Microsoft to patch anything you do find. That obviously isn't a fantastic model, as we have obvious problems with it. Has the open source model even been tried on such a large scale? I suspect not, which leaves only speculation, and that speculation can easily go in multiple different directions.

Ultimately, Windows is going to stick around for a while. But I'm really sick of this argument that's just as much bullshit speculation as the one you're arguing against (which I'm equally sick of hearing, to be honest). However, if you're sitting there struggling with one model and finding it to not work well over and over again, wanting to try a different model is an obvious attempted solution. The only real alternative we have to Microsoft's model, however, is Linux, BSD, etc. There's Mac and probably a couple of others too, but that's got the same closed problems (in large part anyway; I realize they have some more open stuff, but so does Microsoft) as Microsoft's model.

Re:I don't get it? (1)

Mister Whirly (964219) | more than 4 years ago | (#32998064)

I never claimed the two would be the same security-wise, I just said if Linux was the top market share OS, it would be the biggest target. How well it would fare compared to Windows is something I was not speculating on, or blindly assuming.

Nooo not this shit again! (1)

GameboyRMH (1153867) | more than 4 years ago | (#33003294)

Linux machines are often the servers that have everyone's credit card numbers, trade/military/government secrets, massive processing power and commercial-grade Internet connections, VoIP servers, and all the other real goodies. Each Linux machine is a potential Fort Knox in a world of 7/11s.

And even though these are the minority these days, with most Linux machines being home PCs and geek tinker toys, if any Linux machine is accessible from the Internet on port 22 it will be hit with SSH brute-force attempts 24/7/365, because that's typically the easiest way to break into one: brute-forcing a password.

There's no lack of interest or effort, just a lack of success. What does that tell you?

Re:Nooo not this shit again! (1)

Mister Whirly (964219) | more than 4 years ago | (#33004010)

The exact same thing can be said about Windows servers. Any properly configured box stands a much better chance of fending off brute-force attacks. On my old Windows server that was running SSH I would log several thousand brute-force attempts daily, with nobody ever successfully breaking in. What does that tell you? Basically that I know how to configure a server, and that I used a really long passphrase instead of a simple password. Security 101 stuff.
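
(For anyone who wants to do the same tally, here's a minimal sketch, assuming an OpenSSH-style "Failed password ... from <ip>" log line and a /var/log/auth.log location; both the log path and the message format are assumptions and will differ on a Windows SSH setup like the one described above.)

```python
# Minimal sketch: tally failed SSH login attempts per source IP from an auth log.
# Assumptions: an OpenSSH-style "Failed password ... from <ip>" message format and
# a /var/log/auth.log path; both vary by platform and sshd build.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def count_failed_attempts(path=LOG_PATH):
    """Return a Counter mapping source IP -> number of failed login attempts."""
    attempts = Counter()
    with open(path, errors="replace") as log:
        for line in log:
            match = FAILED_RE.search(line)
            if match:
                attempts[match.group(1)] += 1
    return attempts

if __name__ == "__main__":
    # Print the ten noisiest sources, busiest first.
    for ip, count in count_failed_attempts().most_common(10):
        print(f"{count:6d}  {ip}")
```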

Re:I don't get it? (1)

Your.Master (1088569) | more than 4 years ago | (#32996186)

This is true only in the same sense that the surest way to world peace is to kill everyone who threatens world peace.

Re:I don't get it? (2, Interesting)

JohnBailey (1092697) | more than 4 years ago | (#32996176)

I'm not saying it's the public's job to troubleshoot their shoddy code and develop fixes.

I'm just saying I feel it IS the public's responsibility not to make potentially dangerous information available to people with malicious intent.

I have no love for MS. I just feel everyone is better off with "Hey you morons, look at the latest exploit" instead of "Hey, general public including innumerable black hats, look at the latest exploit"

That does kind of depend quite heavily on the researcher being the first to find the vulnerability, and the vendor allocating enough people to adequately deal with fixing it in a timely manner.

Can you say with any real supportable evidence that either statement is a safe assumption? Because I know I can't. And to be honest, I doubt any researcher worth their title can either. Including the guy who I imagine kicked this new policy off by disclosing one he discovered when Microsoft were palming him off with vague answers for a week.

If the "people with malicious intent" already know about a vulnerability, which is a much safer assumption to make, and Microsoft are dragging their feet, because hiring enough good security people is expensive, is it not the researcher's duty to inform the general public? Who can then take steps to protect themselves while waiting for Microsoft to get around to making the patch available the next Patch Tuesday? After all.. We are vulnerable every second of every day to a host of unknown unreported vulnerabilities that any "black hat" could discover by themselves, and exploit for fun and profit. We can't be wary about exploits we are not aware of.

If a vulnerability is discovered, which do you think is faster to react? A company that knows the finder is not going to tell anybody, so it can take its time, or even ignore them completely... or a company that knows it had better get right on it or have a pretty nasty PR mess to clean up?
Who do you think has the bigger and more authoritative security team? The one that has the authority to say to marketing: "No, you bloody well will not do that. And I don't care how much easier it makes sharing your whole hard drive over the internet with aunty Gladys and her bridge team!"

As you sit there worrying about Microsoft possibly losing money, or having their reputation tarnished, or, worst of all, having to increase the size of the security team, ask yourself this question:

"What would BP have done differently if the warnings they had earlier been given about the safety of the gulf rig were a matter of public record"?

http://www.nzherald.co.nz/business/news/article.cfm?c_id=3&objectid=10652032 [nzherald.co.nz] (first one I came across on Google, not the first one I have read)

Re:I don't get it? (1)

sexconker (1179573) | more than 4 years ago | (#32994790)

A general sense of moral obligation not to aid and abet criminal activity?

Fuck that.
How about "Oh shit, this affects us. OH SHIT!"?

Microsoft-Spurned Researcher Collective (1)

kwabbles (259554) | more than 4 years ago | (#32994326)

I guess they achieved their ends and I wonder if Microsoft will be collaborating with the MSRC in the future. :rolleyes

Following Google (3, Insightful)

SiChemist (575005) | more than 4 years ago | (#32994330)

Looks like Google's policy announcement from July 20 [slashdot.org] rattled some MS cages.

Re:Following Google (1)

b4dc0d3r (1268512) | more than 4 years ago | (#32994868)

Still no apology to Tavis Ormandy. Even though they basically admitted he was right.

Re:Following Google (1, Insightful)

Anonymous Coward | more than 4 years ago | (#32995416)

How is giving a company 5 days to fix an exploit right? If anything, this looks like an effort by MS to get the researchers to agree to work with MS so that the details aren't released before a patch is ready. What possible reason is there for releasing this stuff anyway? Does it make anyone safer? Unlikely. Most people don't care enough about security in the first place. All the early release of the exploit does is give lazy hackers more ammunition. Because let's face it, even if MS fixed these within 24 hours we would still see computers get bitten by it, because people don't always update their computers.

Re:Following Google (0)

Anonymous Coward | more than 4 years ago | (#32996438)

Not everyone is a helpless home user. It makes large enterprises safer, as their in-house team can make an informed decision on how to deal with the situation.

Conversely, it doesn't make anyone less safe, as those who would exploit the vulnerability likely already have found it.

Seems to me the proper approach is:

1) The researcher stumbles on a flaw and notifies the vendor, arranging a release date coordinated with a patch.
1b) Meanwhile, someone is tasked with checking for exploits in the wild. As soon as any are found, full disclosure is presented.
2) If the discovery is via a malicious payload or other public abuse of the flaw, disclose immediately and often.

Re:Following Google (1)

innocent_white_lamb (151825) | more than 4 years ago | (#32999980)

What possible reason is there for releasing this stuff anyway? Does it make anyone safer? Unlikely.
 
On the contrary, the answer is "possibly". If I know the nature of a security hole in Program X, I might be able to find a way to substitute, sandbox or discontinue Program X in my own workflow and thereby become safer.

Re:Following Google (1)

bloodhawk (813939) | more than 4 years ago | (#32997828)

Quite the reverse; both Google's and Microsoft's policy announcements basically condemn the prick act performed by Ormandy.

Twilight of the Goulds (-1, Troll)

Anonymous Coward | more than 4 years ago | (#32994336)

Liberals are clear-eyed, cool-headed rationalists, implacably opposed to dogma and superstition. That’s why they reject the fairy-tales of the creationists. Like this one: The Universe was created in six days and is now only 6,000 years old. Laughable. Or this one: Noah’s ark rode out a world-wide flood for forty days and nights with a huge collection of animals on board. Ludicrous. Or this one: Mass immigration by non-whites into White societies will produce peace, prosperity, and happiness for all. Ridic– Whoops, sorry, my mistake. I’m mixing my fairy-tales up. That last one belongs to the liberals, not the creationists.

Yes, the truth is that liberals don’t really object to dogma, superstition, and fairy-tales at all, they just object to the wrong kind: the old Christian kind. They’re perfectly happy with the new kind – their kind – and they hate science just as much as creationists when it threatens to contradict their irrational dogmas. Race does not exist. IQ tests measure nothing but the prejudices of IQ testers. Differences in the psychology and behavior of men and women are solely the product of social conditioning. Those are three of the biggest liberal dogmas, and for the past forty years, led by pseudo-scientists like Stephen Jay Gould (Jew), Richard Lewontin (Jew), Leon Kamin (Jew), Steven Rose (Jew), and Jared Diamond (guess), they’ve fought tooth-and-nail against the ever-growing scientific evidence that all three are completely wrong. Race does exist, IQ tests do measure something real, and men and women are innately different in psychology and behavior.

More evidence of how liberals can’t tolerate true science comes from their ignorance about one of the most important of all scientific tools: the controlled experiment. When you have an idea or invention to test, use a small space to start with and compare what happens with a control where you don’t do anything. One of the advantages of this method is that if something goes wrong, you can easily contain the problem. Suppose you have a new chemical that might help crops grow faster and feed more people, but might have unwelcome side-effects too. You need to test it to make sure it’s safe, so the obvious thing to do is manufacture huge amounts of the stuff and use it on every farm in the country. That way, if every plant turns yellow and dies after two weeks, shortly before farmers and their families start developing strange and deadly new cancers, you’re up shit creek without a paddle. But you can at least say that your heart was in the right place.

If you think that sounds wrong, you’re obviously not a liberal, because that is actually a good description of how liberals have been testing the effects of race mixing. Mass immigration by non-whites is an experiment on a huge scale with no controls whatsoever, and if it all goes horribly wrong the ordinary Whites of Europe and America, who never asked for or wanted the experiment to take place, will be left up shit creek without a paddle. It will be no consolation that many liberals will be sharing the canoe with them. Other liberals, with the money to buy their way out of a self-created disaster, may be able to flee somewhere still safe like Iceland or the far north of Canada. If so, then maybe after a few years, when the memories of massacre and rape by non-whites have begun to fade, their crazy liberal religion will re-assert itself and they’ll begin agitating for more “diversity” in the hideously White societies that surround them.

That’s why the native Whites of Iceland and northern Canada, if they have any sense, will arrest those fleeing liberals as soon as they step off the plane and deport them straight back where they came from: the racially mixed hell-holes their criminal ideas and actions helped create. After all, there’s no way the refugees could plead innocence or ignorance. The disastrous effects of mass immigration are already obvious now in the experiment that took place in the Pacific on the tiny island of Fiji. Europe and America are big places with many millions of White inhabitants. It will take a long time to destroy them completely with mass immigration, and the process has only just started. Fiji isn’t a big place and that’s why it’s already been destroyed. The old Fiji is now gone for ever, because the native Fijians are outnumbered by the offspring of Indian laborers imported under the British Empire. There’s huge racial trouble there and for once the old liberal whine is right: The disaster in Fiji is Whitey’s fault.

Or rather, it’s the fault of the ignorant, short-sighted White colonial politicians who ran Fiji and imported Hindu Indians without the consent of the island’s rightful owners. The same kind of politician imported Hindus from mainland India into Buddhist Sri Lanka and created another intractable racial conflict. Sri Lanka is where suicide bombing was invented before it was picked up by the Palestinians in their racial conflict with the Jews and then sent on to the London subway and the racial conflict between Whites and non-whites in Britain. In each case – Fiji, Sri Lanka, Palestine, Britain – a small group of politicians have ignored common sense and the lessons of history by allowing different races and religions to mix. In the case of Britain, their task was made easier by the lies of Jew-corrupted science and psychology about the realities of race and racial differences.

But there is some good news: Those lies are starting to crumble fast. Many of my readers will have heard about the new research into gene-variants underlying brain development. There are highly significant genetic differences between Whites and sub-Saharan blacks, for example, and those differences support race realism about differences between White and black intelligence. I’ve been reading liberal papers and watching liberal websites and very little has been said about this research, which is a sure sign of its significance. Liberals can’t attack the researcher as a racist because he’s Chinese, and though they may be able to delay the even more significant findings he’s said to have made, it really is only a matter of time before the religion of modern liberalism becomes extinct.

That’s because its cherished dogmas about race are being destroyed one by one. Science is on our side, not theirs, and even the most deluded of white liberals are starting to realize it. Those Jewish pseudo-scientists like Gould and Diamond, who knew the truth all along, are being exposed as the liars and charlatans they always were. They should thank their lucky stars that this scientific war won’t end in a trial for war crimes, because they’re guilty as hell and share a heavy responsibility for all the Whites raped, murdered, and beaten by non-whites in Europe and America since the crazy and criminal experiment of race mixing and mass immigration started back in the last century.

Re:Twilight of the Goulds (0)

Anonymous Coward | more than 4 years ago | (#32999438)

Here dude, I think you dropped your swastika.

motivation (4, Insightful)

Lord Ender (156273) | more than 4 years ago | (#32994342)

What is the researcher's motivation to spend the extra time working with Microsoft? They certainly have no obligation to do anything Microsoft asks...

Personally, I prefer the Google and Mozilla method whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent their own convoluted process.

Re:motivation (3, Funny)

Anonymous Coward | more than 4 years ago | (#32994636)

Even with $40+ billion in the bank, MS would go broke really quickly with that model...

[/snarky]

Apples to Oranges (5, Funny)

Anonymous Coward | more than 4 years ago | (#32994660)

Personally, I prefer the Google and Mozilla method whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent their own convoluted process.

There's a fundamental problem with your comparisons. When a security bug is released in Firefox, you see the Mozilla Foundation marvel at the cleverness of the attack. Then a distributed net of individuals quickly works together in an agile way to get the hotfix out, and then some time is spent testing and hardening that fix. When a security bug is released targeting Chrome or any of Google's products, you see Google developers who are comfortable on their campuses swing long hours and work together to push out a fix as quickly as possible. These are all sensible approaches to security bugs.

With Microsoft, however, you see the heavy thudding of a big corporation. You see the complex inner workings of management slow things down. Somebody might ask for an estimate on how much money this is going to cost, and that estimate comes back a week later. Senior management starts shredding documents. Engineers start falling from helicopters in Redmond. A tornado of chairs leaves several injured. Microsoft's campus looks like the Superdome following Katrina. People are chained to their desks. The reason they ask for 60 days is because that's how long it takes FEMA aid to reach Microsoft ...

You just can't compare the two ...

Re:Apples to Oranges (1)

theskipper (461997) | more than 4 years ago | (#32994742)

funny + insightful = +1 funful

Re:Apples to Oranges (1)

denis-The-menace (471988) | more than 4 years ago | (#32994820)

IOW: MS is too big to turn on a dime.
MS has become what they were striving to replace: IBM.

Re:Apples to Oranges (1)

VGPowerlord (621254) | more than 4 years ago | (#32994896)

MS has become what they were striving to replace: IBM.

They've done what they set out to do, then?

Or did you mean to throw in that they didn't want to be like IBM?

Re:Apples to Oranges (2, Insightful)

tlhIngan (30335) | more than 4 years ago | (#32995502)

IOW: MS is too big to turn on a dime.
MS has become what they were striving to replace: IBM.

More like they can't. A problem may be a simple fix inside a problem module, but it's also got to go through rounds of testing to make sure that simple fix actually doesn't break anything. After all, even doing stuff like implementing LUA showed how badly things broke (see Vista).

The problem when you're the giant is you attract all the developers. The problem is, most developers write crap for code and do things they really shouldn't. If you remember back in the DOS days, people hacked inside DOS data structures all the time - so much so that Microsoft was stuck: they couldn't move its place in memory or alter it, or even assume that its values hadn't changed. The same thing's happened with Windows. The desktop "window" actually has the title "Program Manager". The icons and other resources inside explorer.exe and other shell DLLs can never, ever be touched, removed, replaced or altered, because apps actually "steal" the icons from within. (Things broke horribly during the XP betas because they renamed the window classes (not to be confused with a C++ class).) It's also why "Documents and Settings" is a junction on Vista and Windows 7.

I think they're also a short way away from having to make "C:\Program Files", when you type it, actually take you to %PROGRAMFILES%, because people assume that it will always be called "Program Files" (not "Program Files (x86)", not localized, etc.).

It's a miracle Windows works at all.
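
(A minimal sketch of the %PROGRAMFILES% point above: resolve the folder from the environment rather than hardcoding the literal path. ProgramFiles, ProgramFiles(x86) and ProgramW6432 are the standard Windows variable names; the fallback literal at the end is only an illustrative assumption.)

```python
# Minimal sketch: resolve the Program Files directory from the environment instead
# of hardcoding "C:\Program Files". ProgramFiles, ProgramFiles(x86) and ProgramW6432
# are standard Windows environment variables; the fallback literal is an assumption.
import os

def program_files_dir(prefer_x86: bool = False) -> str:
    """Return the Program Files path without assuming its literal (or localized) name."""
    names = ["ProgramFiles(x86)", "ProgramFiles"] if prefer_x86 else ["ProgramW6432", "ProgramFiles"]
    for name in names:
        path = os.environ.get(name)
        if path:
            return path
    return r"C:\Program Files"  # last-resort assumption, not guaranteed to exist

if __name__ == "__main__":
    print(program_files_dir())
    print(program_files_dir(prefer_x86=True))
```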

Re:Apples to Oranges (2, Insightful)

DragonWriter (970822) | more than 4 years ago | (#32996382)

IOW: MS is too big to turn on a dime.

Except that scale is not the fundamental problem, organizational culture is.

Re:Apples to Oranges (1)

sharkey (16670) | more than 4 years ago | (#32997168)

IOW: MS is too big to turn on a dime.

That's right. MS sells software FOR the agile business, not software WRITTEN BY an agile business.

Re:Apples to Oranges (1)

caluml (551744) | more than 4 years ago | (#32996760)

With Microsoft, however, you see the heavy thudding of a big corporation. You see a complex inner working of management slow things down. Somebody might ask for an estimate on how much money this is going to cost and that estimate comes back a week later. Senior management starts shredding documents

Honestly? Really? You don't think they have high/critical priority bugs, which get instant visibility right up the escalation tree, managers pushing the rest of the people to get a fix quickly? I've worked for some "big corporations", and when the shit hits the fan, the pressure from above increases immensely. Everyone mucks in, works long hours, gets stuff done.

Big companies can sometimes take a long time to change direction, or "get it" - but when it's something as fundamental as a very large security hole, all the machinery will click into place.

Re:motivation (1)

pgn674 (995941) | more than 4 years ago | (#32996154)

This video may provide some insight:

YouTube - Clay Shirky: How cognitive surplus will change the world [youtube.com]

Bug finders are both producers and consumers of the actions and consequences in the process. Finding and reporting security bugs is a civic action (as opposed to a communal one). Having the bargain for this action be based on economics instead of social rewards and punishments may have an adverse effect. So it may be possible that people who get paid for reporting the bugs feel that they have performed whatever duty they had, and don't go above and beyond in helping the community. Or they may even keep the bug to themselves, believing the bug is worth more money than what's being offered, and forgetting entirely about the community. I think it's worth considering, at least.

Sudden outbreak of common sense (3, Insightful)

Local ID10T (790134) | more than 4 years ago | (#32994346)

So they are formalizing common sense into a policy.

It is a lot better than the previous formal policy of bat-shit crazy.

Anything's fine, as long as they communicate (1, Insightful)

AdmiralXyz (1378985) | more than 4 years ago | (#32994388)

I've never discovered a vulnerability in Windows or anything else, but if I did I'd be fine sitting on it for as long as needed, as long as Microsoft got back to me and said "Yeah, we're working on it, here's when you can expect a fix." What's maddening (and actually Microsoft seems to be good about this; it's Apple and Oracle that are the worst offenders) is when someone sends a bug report into a black hole, never hearing anything from the company for months and months. At that point, I see no reason why the researcher shouldn't just publish to the world. The company clearly doesn't take security seriously, so why should he?

Re:Anything's fine, as long as they communicate (1)

FormOfActionBanana (966779) | more than 4 years ago | (#32998162)

Amen. Ahem, why is this flamebait?

Re:Anything's fine, as long as they communicate (1)

Yvanhoe (564877) | more than 4 years ago | (#33000934)

You found a vulnerability.
You know your bank, your hospital, your tax center has it.
You know that there is an option to deactivate as a workaround.
You know that many people are actively searching for this kind of vulnerability and it may be exploited right now.
And you see Microsoft claiming their product is the best and the most secure everywhere.

You can wait, yes, but I am unsure which is the more responsible way of acting.

Good luck getting Apple to agree (5, Informative)

Anonymous Coward | more than 4 years ago | (#32994486)

Posting anonymously for obvious reasons. What happens today if one emails Apple's product security team (product-security@apple.com)? A few things. First, you get a generic pre-generated email that acknowledges that Apple received your email. Next, if you're lucky, you get an email from an analyst who has reviewed your vulnerability. What happens next?

1) No updates are provided. Ever.
2) If you ask for an update as to when the vulnerability will be fixed, you will not get a detailed response.
3) Apple waits several months.
4) Apple waits several months.
5) Apple fixes the bug, possibly.
6) You get an email from Apple asking how you want to be credited.
7) If you're lucky, Apple will send you an email with notification of when they're planning to fix the issue, along with the exact wording of the specific advisory.
8) If you're lucky, Apple will release the fix in the week they say they will.
9) Normally, the date will slip a few weeks. Or maybe a month.

I applaud Microsoft for doing this. Hopefully Apple will follow suit and move out of the stone age.

Re:Good luck getting Apple to agree (0)

Anonymous Coward | more than 4 years ago | (#32994740)

Your reasons are not obvious to me... obviously though, I will post AC.

Re:Good luck getting Apple to agree (3, Insightful)

Anonymous Coward | more than 4 years ago | (#32994844)

I will clarify this for you.

Apple is an insular and paranoid company. They are built upon the myth that the Mac/iPhone/iPad/iPod platform is "safe". They are selling an image: of computing platforms that are safe and secure for the end-user. Reality does not agree with Apple.

Most responsible researchers will play Apple's game, and part of their game is sending out inaccurate and vague responses as to when they may (or may not) fix what vulnerabilities have been found. I think it's helpful for people to know how Apple really works.

I'm at stage 3 (0)

Anonymous Coward | more than 4 years ago | (#32999050)

I still haven't gotten anything but the automated emails from them about a kernel privilege escalation bug I found in Snow Leopard. It's been about a month now.

Sometimes it's tempting to go put the bug on a blog somewhere.

My first thought (1)

MadGeek007 (1332293) | more than 4 years ago | (#32994730)

About time...

Exploit Release Procedure (0)

Anonymous Coward | more than 4 years ago | (#32994760)

Personally, I think it would be wise for all large companies to set up a standardized email address, something like exploit@company.com, that is manned by someone at some level of management, not some script-reading help desk jockey.

The finder of an exploit can send to this address, giving the company 1 week to acknowledge receipt of the information and confirm that they are looking into it.

If they respond within that 1-week period, then it starts the clock on a 90-day countdown to full disclosure.

Within 1 week of acknowledging receipt of the exploit (which should be enough time to evaluate it), the company needs to at least release workaround steps for customers to protect themselves, i.e. ports that should be blocked at a firewall, non-critical services that can be stopped, etc., as long as it offers some level of protection to the customer without breaking core functionality of the software or releasing info on the full exploit.

Within 1 month of acknowledging receipt of the exploit, the company should provide the finder with a preliminary patch, under NDA if need be. This obviously doesn't have to be ready for shipping, nor go through tons of testing, but it would at least let the finder apply it to a test system and see that they are actually working on a patch.

If a preliminary patch isn't given in that 1-month window, then the exploit can be fully disclosed.

If followed, this would give the company a month to make the patch and then a further 2 months to do any kind of regression testing before shipping to customers.

If not followed, there are 2 scenarios: if they don't even acknowledge receipt of the exploit, then full disclosure in a week; if they do acknowledge receipt but don't provide a preliminary patch in a month, then full disclosure in a month.
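
(A minimal sketch of the timeline proposed above, with the deadlines expressed as data. The milestone names and the day offsets, counted from the report date and assuming acknowledgement lands on day 7, are illustrative assumptions layered on the parent's numbers.)

```python
# Minimal sketch: the proposed disclosure timeline as data, so each deadline can be
# computed from the date the exploit was reported. Offsets assume the vendor
# acknowledges on day 7 (the latest allowed); earlier acknowledgement shifts the
# later milestones earlier.
from datetime import date, timedelta

MILESTONE_OFFSETS = {
    "acknowledgement_due": 7,      # 1 week to acknowledge receipt
    "workarounds_due": 14,         # workaround guidance within 1 week of acknowledgement
    "preliminary_patch_due": 37,   # preliminary patch within ~1 month of acknowledgement
    "full_disclosure": 97,         # 90-day countdown starting at acknowledgement
}

def disclosure_schedule(reported_on: date) -> dict:
    """Map each milestone name to its calendar deadline."""
    return {name: reported_on + timedelta(days=days)
            for name, days in MILESTONE_OFFSETS.items()}

if __name__ == "__main__":
    for name, due in disclosure_schedule(date(2010, 7, 22)).items():
        print(f"{name:22s} {due.isoformat()}")
```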

Not the title I thought it was (0)

Anonymous Coward | more than 4 years ago | (#32994936)

The title I read was Microsoft Takes Major Shit on Disclosure Policy

Assuming I care (1)

codepunk (167897) | more than 4 years ago | (#32995710)

If I happened to run across a vulnerability tomorrow, I would likely be inclined to publish it that very day. Microsoft assumes I care about the well-being of them and their customers, when really I don't. I know this is aimed more at security researchers, but then again they may very well feel the same way.

Here's a radical idea: (2, Funny)

Ancient_Hacker (751168) | more than 4 years ago | (#32996024)

Here's a radical idea: How's about they don't release tons of fresh code every cycle, and instead maybe check the code over first for buffer overflows, NULL pointer abuse, heap munging, and all the other obvious ways of executing code?

Just sayin'

Re:Here's a radical idea: (0)

Anonymous Coward | more than 4 years ago | (#32996330)

If you think they don't do the latter (or Adobe, or Apple, or Google, or any other big-name software company) then you're just plain impossible. And for that matter, buffer overflows are rarely the problem these days.

Not releasing as much fresh code is a potential mitigation for all parties, but the dominant request is for the exact opposite -- support HTML5, SVG, H.264, etc., etc.

Re:Here's a radical idea: (0)

Anonymous Coward | more than 4 years ago | (#33047326)

You have clearly never done any work on a large-scale project containing multiple dependencies that you didn't write, or had to sift through more than 800,000 lines of code to get to the one function you want to change. Actually, it sounds like you've never written much beyond hello world. There is not a single operating system on the market that is 100% safe and 100% bug-free. Not one. Not Mac OS, not Windows, not VAX, not VMS, not OS/2, not DOS, not Amiga, not a single *nix variant, not Cisco IOS, not SunOS, not the AS/400. Nada. Not a single operating system without bugs and flaws.

All operating systems go through a testing phase where developers try to weed out and fix as many bugs, flaws, vulnerabilities and security issues as possible. This starts in alpha and moves to beta. Nearing the end of the beta phase you'll see release candidates popping up, and finally a release that will later be patched, then patched some more, and then continue to be patched as its successor is developed. It'll subsequently continue to be patched as its successor's successor is developed, and so on, until support for it is cut entirely because of its complete and utter lack of market share (or its developers' refusal to sell an old OS that still works better than their current offerings).

Of course, if you think you can write a completely bug-free, 100% secure, 100% compatible OS by yourself, you go right ahead, Mr. Perfect.

Microsuck as usual has demonstrated their grandiose stupidity with this, though my thoughts on the matter have already been expressed by another Anon:

"Personally, I prefer the Google and Mozilla method whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent their own convoluted process.

There's a fundamental problem with your comparisons. When a security bug is released in Firefox, you see the Mozilla Foundation marvel at the cleverness of the attack. Then a distributed net of individuals quickly works together in an agile way to get the hotfix out, and then some time is spent testing and hardening that fix. When a security bug is released targeting Chrome or any of Google's products, you see Google developers who are comfortable on their campuses swing long hours and work together to push out a fix as quickly as possible. These are all sensible approaches to security bugs.

With Microsoft, however, you see the heavy thudding of a big corporation. You see the complex inner workings of management slow things down. Somebody might ask for an estimate on how much money this is going to cost, and that estimate comes back a week later. Senior management starts shredding documents. Engineers start falling from helicopters in Redmond. A tornado of chairs leaves several injured. Microsoft's campus looks like the Superdome following Katrina. People are chained to their desks. The reason they ask for 60 days is because that's how long it takes FEMA aid to reach Microsoft ...

You just can't compare the two ..."

OSS vs CSS vulnerability reporting (3, Insightful)

AlgorithMan (937244) | more than 4 years ago | (#32996386)

OSS: find a bug, fix it (because you can), submit code changes

CSS: find a bug, see a lawyer, contact a CERT, wait several weeks for a response, sign an NDA, share vulnerability information, wait 2 months, ask for status, wait for an answer for 4 more months, realize that the vendor will do squat about the vulnerability as long as its customers don't know how threatened they are, release the info to the public to put pressure on the vendor, be threatened by the vendor's lawyers, be called a criminal by the vendor's customers and the press and politicians, have your house searched, wait 2 more months, get a patch, realize that it doesn't fix the problem, rinse and repeat.

Please define "ample time" (1, Troll)

RobertM1968 (951074) | more than 4 years ago | (#32997132)

I am very curious how Microsoft defines "ample time", especially considering some of their vulnerabilities (like the one recently "patched" in the DOS subsystem) have existed for years or decades.

This isn't a slam at Microsoft, it's a hope that someone has some clarification that can be used as a context to determine if this statement means anything. Even when the terms of their statements are less ambiguous, they seem to find ways of backpedalling - thus greater clarity on something so very ambiguous is warranted (even if it turns out to be pointless in the long run per whatever practices they actually employ).

Oh wait, the summary is not correct (of course) - but the reality of the statement is worse:

Microsoft:

CVD's core principles are simple: vendors and finders need to work closely toward a resolution; extensive efforts should be made to make a timely response; and only in the event of active attacks is public disclosure, focused on mitigations and workarounds, likely the best course of action -- and even then it should be coordinated as closely as possible.

In other words, this statement really says "You should never tell anyone but us, unless active attacks are taking place - but even then, you should coordinate such with us" (at which point they will probably say "don't tell anyone", as has been the case both now and previously).

Also, who are they to dictate how (and to whom) researchers disclose such information? Is there some legal basis for this, or is it (or will it be) under the threat of using their financial muscle and influence to try to get the person charged with some sort of online security or terrorist crime? Yes... for those who don't know, the Patriot Act does indeed cover such things.

Additionally, the spin group at Microsoft said this, which is misleading in the grand context of this problem:

Microsoft:
However, we fundamentally believe (and our experience over the last 10 years has shown) that once vulnerability details are released publicly, the probability of exploitation rises significantly. Without coordination in place to provide a security update or tested workarounds, risk to customers is greatly amplified.

The truth is, once a vulnerability is released to the public and exploited, Microsoft is somewhat forced to fix it in a more timely fashion - as opposed to ignoring it for years (the numerous .NET exploits that still aren't fully patched) or decades (the DOS exploit recently patched).

This is really a non-news item as this is business as usual, carefully worded to seem like Microsoft is changing their stance on things (while the reality is, they are not).

Re:Please define "ample time" (1)

RobertM1968 (951074) | more than 4 years ago | (#32999834)

LoL, someone who doesn't know much about computers got mod points. One can choose not to like the truth, but, as even Microsoft themselves admitted, this is NOT a change in policy - it's a change in NAME only.

Re:Please define "ample time" (1)

Dan Ost (415913) | more than 4 years ago | (#33002170)

Why is this modded "troll"?

"Insightful" is more appropriate. Near as I can tell, this post is dead on.

REN "responsible disclosure" "CVD" (1)

gh0s7r4v3n (936281) | more than 4 years ago | (#32997474)

All they did was rename it:

"[CVD] is the same thing as responsible disclosure, just renamed," repeated Reavey. "When folks use charged words, a lot of the focus then is on the disclosure, and not on the problem at hand, which is to make sure customers are protected, and that attacks are not amplified."

http://www.computerworld.com/s/article/9179546/Drop_responsible_from_bug_disclosures_Microsoft_urges [computerworld.com]
