
Questioning Google's Disclosure Timeline Motivations

timothy posted about a year ago | from the comparative-advantage dept.


An anonymous reader writes "The presence of 0-day vulnerability exploitation is often a real and considerable threat to the Internet — particularly when very popular consumer-level software is the target. Google's stance on a 60-day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic. Statements like these from Google clearly serve their business objectives. As predominantly a web services company with many of the world's best software engineers and researchers working for it, one could argue that Google's applications and software should already be impervious to vulnerabilities (i.e. they should have discovered them themselves through internal QA processes) — rather than relying upon external researchers and bug hunters stumbling over them."


73 comments

You suck (4, Insightful)

AmiMoJo (196126) | about a year ago | (#43882743)

If your company is producing security critical software and doesn't have a plan for quickly dealing with bugs you suck.

Re:You suck (5, Insightful)

JMJimmy (2036122) | about a year ago | (#43882827)

Also, even if they can't patch it quickly the point is to inform users so they can take appropriate precautions.

Then most of the world sucks (1)

Anonymous Coward | about a year ago | (#43883105)

"Security critical software" is anything on the Internet.

Let's say I write a clever new web game, called "Cyber Tic Tac Toe." It's based on the classic idea of tic tac toe, but with the novel twist that it's on the Internet! As fun as my game is, and despite the fact that you are totally addicted to my awesome game and play it about 20 hours every week, questing to get your 3 Xs in a row, some people would say this is completely mundane and unimportant. (Killjoy bastards!) They would say my game, no matter how kickass it is, isn't in the same realm as your electronic payment verification app.

But I screwed up. My online game was rather poorly written, and someone can rather easily use my poor code to turn the server it runs on into a spambot. So cybertictactoe.com is sending out ten million Viagra mails per day. Someone can also use this hole, which lets them take over my underlying server, to have my game try any known browser exploits against my players. So now the people who play Cyber Tic Tac Toe are also sending spam, and also running keyloggers (installed from my server) whenever they're talking to your electronic payment verification app.

"Security critical software" == EVERYTHING THAT EXISTS

Re:You suck (0)

Anonymous Coward | about a year ago | (#43883199)

Obviously, Slashdotters all work at startups and not for real, viable businesses.

Re:You suck (0, Insightful)

Anonymous Coward | about a year ago | (#43883421)

Explain that to a PHB project manager [1] who has a mantra: "Security has no ROI... Security has no ROI", and refuses to pay for anything other than the basic QA when it comes to security. If there is a flaw discovered, usually it gets flagged as "FNR", or fixed in next rev or it just gets downchecked as "nobody will find that" or "too expensive in man-hours to fix right now."

A lot of companies I encounter are purely reactive. Sales/Marketing wants "x" feature list for the next rev, which causes the managers to crack the whip so it can be gotten out on some arbitrary release schedule; if it even builds, that's a lucky thing, much less any actual security work getting done.

Lawsuits are not going to work -- EULAs have been upheld in court. The only thing that really works is OSS projects, because (and this is in general) there is not a PHB demanding all devs get 10,000 lines of code written a day or else they will be chucked for an H-1B, or the whole dev division offshored.

It is only going to get worse. There is -zero- penalty for failing to have secure code. This move by Google is the -only- thing out there that might spur vendors to actually do stuff. Otherwise 0-days will never be fixed, period. We saw this shit in the 1990s with UNIX vendors who were too lazy to patch holes, forcing sysadmins to use BSD binaries, and it only got worse.

[1]: A manager that likely has staffed the team with $30,000 a year H-1Bs because of the payroll tax bonus, so in actuality, there might be 1-2 people actually pulling their load, the rest are watching Bollywood flicks.

Re:You suck (0)

Anonymous Coward | about a year ago | (#43884805)

Where I come from, watching Bollywood flicks IS mission critical.

If an H-1B gets you into America, think where a PHB can get you!

Acronym or die!

-1 Flamebait (4, Insightful)

a_n_d_e_r_s (136412) | about a year ago | (#43882745)

For the article. Sadly, you can't moderate an article.

Re:-1 Flamebait (1)

Anonymous Coward | about a year ago | (#43882839)

What article? All I see is some idiot's comment that somehow got posted as if it were an article. Timothy must be drunk again.

Exactly... (4, Informative)

theshowmecanuck (703852) | about a year ago | (#43882895)

Testing can find the presence of bugs, not the absence of them. An ages-old adage that has withstood the test of time because it is true. Only those new to the game, or the naive, would think otherwise.
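A throwaway illustration of the adage (the function and its tests are invented for this example): every test below passes, yet the code is still wrong for an input nobody thought to try.

```python
# Toy example: green tests, lurking bug.
def days_in_month(month: int, year: int) -> int:
    if month == 2:
        return 29 if year % 4 == 0 else 28   # bug: century years like 1900 are not leap years
    return 30 if month in (4, 6, 9, 11) else 31

def test_days_in_month():
    assert days_in_month(1, 2023) == 31
    assert days_in_month(4, 2023) == 30
    assert days_in_month(2, 2023) == 28
    assert days_in_month(2, 2024) == 29      # all of these pass

test_days_in_month()
# ...but days_in_month(2, 1900) returns 29; the correct answer is 28.
```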

Re: -1 Flamebait (2, Insightful)

Anonymous Coward | about a year ago | (#43882913)

Agreed. The submitter spouted the industry line, but if the industry wanted different standards it should try setting *some* standards instead of pretending that it's OK to ignore vulnerabilities.

Good catch (2, Interesting)

Anonymous Coward | about a year ago | (#43882759)

Google has a decent point, but they're also trying to nudge the industry in a direction where more businesses will leave the driving to them [wikipedia.org] , or to cloud competitors like Amazon.

Just Google? (5, Insightful)

chrylis (262281) | about a year ago | (#43882767)

Why single out Google? Shouldn't traditional software vendors have also run programs through QA?

Re:Just Google? Not at all (1)

NReitzel (77941) | about a year ago | (#43883121)

Having worked in software development for 30 years, let me tell you a dark little secret.

QA programs are never enough.

It matters not how good your QA section is, things will get through. Why? Because the QA people have already looked at the code too many times to see it fresh, because QA managers are fond of believing that filling out paperwork is the same thing as careful inspection, and because the QA people work for a company whose vested interest is in getting that chunk of software out the door.

In order to do a thorough job, one needs Different Eyes.

I cannot tell you how many times I've had an intractable problem with a piece of software, shown it to a coworker not associated with the product, and had them turn to me and say (in effect) "You dumbass, it's right here!" Which, in fact, it was. I'd looked at it countless times, and I might well never have seen the problem on my own.

Running software through extensive QA is an excellent idea, but in and of itself it is not sufficient. Having someone not connected with the product, someone with "different eyes", look at it is a time-proven method to find things that your excellent and well-managed QA department will never see.

Re:Just Google? Not at all (2)

chrylis (262281) | about a year ago | (#43883663)

Oh, I'm completely with you: Developers should run QA, but there will always be bugs that slip through. I was responding to the AP's insinuation that Google should be catching all their security problems in internal QA but that Microsoft should get a pass for some reason.

Re:Just Google? Not at all (0)

Anonymous Coward | about a year ago | (#43884667)

Well, I think the AP's point is that Google has many of the world's best software engineers working for them, and Microsoft does not. :-)

QA should NEVER be ran by the developer (0)

Anonymous Coward | about a year ago | (#43887063)

QA should be performed by an independent third party that doesn't have any direct knowledge of the development. Companies should have a team dedicated to nothing but SQA, even if that annoys the hell out of developers. A team whose main purpose is to find ways to break stuff.

Unfortunately, that rarely happens and the same people who wrote the software are the ones writing the test procedures and running the quality test. And even the most dedicated and honest professional has blinders turned on and will write the procedures to support the way the software was designed and not the way the software is intended to be used.

In the case of Google, they just don't do anything beyond your basic automated unit tests... which rarely catch anything major.

Re:Just Google? Not at all (0)

Anonymous Coward | about a year ago | (#43884707)

Sadly there's a lot of code that sees neither QA people nor QA robots.

Re:Just Google? Not at all (0)

Anonymous Coward | about a year ago | (#43886925)

I may be wrong, but unless you hire attackers and hackers to truly find what can be exploited, you can do all the QA you want with standard-minded folks and only find the obvious holes your programmers didn't look at carefully enough. Even then it will never be enough; there will never be "foolproof" software. But even with QA they seem to almost intentionally allow or overlook holes.

I find the comments in the article naive. While Gaagle should be able to find holes in its software, or anyone's, it is often the holes that most do not see or bother to notice that get blown up. Ironically, others besides Gaagle also have a habit of allowing such blatant holes, and always come up with some good PR to excuse the stupidity.

I would add the obvious: it costs money to hire full-time attackers, and even then they may not find much, if anything. Or you can just use the wild environment and let younger folks and outside researchers find the holes for you. Gaggle has a habit of cheapskating, making others do all the work for little to nothing while it takes the credit.

Re:Just Google? Not at all (1)

metaforest (685350) | about a year ago | (#43902167)

Having worked in software development for 30 years, let me tell you a dark little secret.

QA programs are never enough.

Dropping previous moderation to comment here:

Having worked in SQA for over 17 years I can tell you PM NEVER listens to SQA unless the app would be DOA on more than 20% of the target customers' machines. On occasion an XXO would override PM and tell them no... you cannot RTM until you fix *THIS* issue... but security, data loss, and customer satisfaction were never significant guidance to PM or XXO decisions.

And these issues are not deep... this is low-hanging fruit, more often than not. PMs and DEVs hate SQA and wish we would DIAF. They see us as a headache to be mitigated, not an instrument to measure quality.

Re:Just Google? (2)

Vintermann (400722) | about a year ago | (#43885363)

It's Google who are pushing a 7-day deadline for vulnerability disclosure. These companies are in effect saying "That's not fair, we can't fix our software that quickly!". It would be a real PR foot bullet if people understood what they're saying.

Deceptive summary (1)

Anonymous Coward | about a year ago | (#43882769)

Just because you commit to a certain turnaround doesn't mean you won't beat it. Anyone in business knows not to offer warranties or guarantees that you can't meet every time. People who over-promise and under-deliver typically aren't around too long.

Perhaps Google does tell companies... (4, Informative)

hsmith (818216) | about a year ago | (#43882857)

And they simply don't do anything? I've contacted companies about security flaws I've found in their products and was met with deafening silence.

Until I publicly announce them on platforms like Twitter, then you have their full attention.

Re:Perhaps Google does tell companies... (1, Insightful)

AmiMoJo (196126) | about a year ago | (#43883859)

If you say "I'm telling you privately but going public in 7 days" it sounds like a threat, and the most likely response is a lawsuit or the police hammering on your door. Even if you don't say anything about going public there is a fair chance of that happening. Your actions are pretty risky.

Unfortunately unless the company has a history of dealing with security bug reports in a timely and proper manner the most responsible thing to do is just post about it anonymously to one of the full disclosure mailing lists. Morally you can't sit on it but shouldn't be expected to risk your own liberty and employment either.

Critical vulnerabilities under active exploitation (5, Informative)

mattiaza (2567891) | about a year ago | (#43882909)

Google recommends 7 days for "critical vulnerabilities under active exploitation", and 60 days for vulnerabilities that are assumed to not yet be known to attackers.

Frankly, even 7 days is too long for active attacks. Publishing the vulnerability lets users apply a workaround or shut down the service or app entirely until a fix is released.

Publishing a warning about a vulnerability ... (0)

Anonymous Coward | about a year ago | (#43887079)

.... is one thing. Publishing the code that exploits it is another.

What Google is saying is you have 7 days before I release the code that will screw your users.

Re:Publishing a warning about a vulnerability ... (1)

micheas (231635) | about a year ago | (#43887169)

You misread.

What Google is saying is "You have seven days before I tell people how your customers are being screwed."

Once you have an active malware attack exploiting a vulnerability, what is the harm of full disclosure? The only thing I can see this buying is time for the PR flacks to get the story together.

You're right, 7 days isn't good enough (3, Insightful)

Anonymous Coward | about a year ago | (#43882917)

A vulnerability that is already being exploited needs to be fixed right away. It's called 0-day for a reason, not 7-day. It should be disclosed immediately to force the vendor to do something about it.

Re:You're right, 7 days isn't good enough (1)

NotBorg (829820) | about a year ago | (#43884753)

The whole reason for restricting disclosure is to prevent exactly the situation that is already happening for anything in the "already being exploited" category. Waiting 7 days is pointless.

Google naive? (0)

Anonymous Coward | about a year ago | (#43882919)

You call Google's practices naive - and then you say:

"One could argue that Google's applications and software should already be impervious to vulnerabilities (i.e. they should have discovered them themselves through internal QA processes) — rather than relying upon external researchers and bug hunters stumbling over them."

I don't think it's relevant in the context of the article in the first place, but: are you aware of any engineering process that would find and eliminate all bugs in software? No - so why are you even proposing or referencing such an idiotic idea?

The argument "Google can do it because it is a web software company and others are not" is not true. Google Chrome, Android, Google Earth...

Already answered own question (0)

Anonymous Coward | about a year ago | (#43882935)

Sounds like 95+% of the rest of the world's software development companies should not be making thick-client, server and device-specific software as their security detail is unrealistic. Many businesses are rather naive and devoid of exploit reality.

What?! (5, Insightful)

CanEHdian (1098955) | about a year ago | (#43882957)

a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality. As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic

Hello there, mr/ms/mrs anonymous COWARD, what are you saying there? It COSTS TOO MUCH to promptly (as in a week) fix ACTIVELY EXPLOITED vulnerabilities? When you get the actual problem handed to you on a silver platter? What company do you work for?

Re:What?! (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43884405)

It also seems to rely on the (severely dubious) assumption that Google's disclosure will have much of an effect on anything aside from how slow the vendor looks, for 'actively exploited unpatched vulnerabilities'.

This isn't a 'Hey, we found a bug that nobody else knows about yet, we are going to release it just for giggles!' situation. Clock. Is. Already. Ticking. Google's 'deadline' may be the one at which you start looking increasingly incompetent in public; but the deadline at which everyone running your software started to be in danger has already passed, and was set by somebody else, over which neither you nor Google have any control.

That's what I don't understand about this 'argument'. Sure, in a hypothetical world where Google alone controls the time at which a vulnerability becomes known and exploited, there might be room for argument about how much time they should grant. This isn't that world.

Re:What?! (0)

Anonymous Coward | about a year ago | (#43887649)

It also seems to rely on the (severely dubious) assumption that Google's disclosure will have much of an effect on anything aside from how slow the vendor looks, for 'actively exploited unpatched vulnerabilities'.

This isn't a 'Hey, we found a bug that nobody else knows about yet, we are going to release it just for giggles!' situation. Clock. Is. Already. Ticking. Google's 'deadline' may be the one at which you start looking increasingly incompetent in public; but the deadline at which everyone running your software started to be in danger has already passed, and was set by somebody else, over which neither you nor Google have any control.

That's what I don't understand about this 'argument'. Sure, in a hypothetical world where Google alone controls the time at which a vulnerability becomes known and exploited, there might be room for argument about how much time they should grant. This isn't that world.

The problem is Google holding itself to a different standard in real life.

This is an adult? (0)

Anonymous Coward | about a year ago | (#43882967)

I thought for sure this was written by a teenager. You generally don't hear this kind of naivety from adults.
"That's too fast, it's unrealistic; by the way, Google should write perfect code and fix vulnerabilities 10 times faster than the unrealistic timeframe I just complained about."
Yes, there's a contradiction in that sentence. It's not my fault.

Slower? He's Saying Slower?!? (3, Insightful)

Bob9113 (14996) | about a year ago | (#43882985)

a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality.

I read that and I was thinking, "Well, yeah, sure - I shoot for one hour and can't recall the last time it took more than a day to get a critical bug patch out, but that's not really reasonable for everyone. The team I work on is pretty focused on keeping the tracks polished so we can get high priority things through. I think 7 days is OK. It could be better, but it's OK. And Google isn't even saying it will take 7 days, they're saying 7 days is the max. But, whatever, I guess -- ultimately agitating for faster patches is something I support."

for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic.

What?!? You mean it's not realistic to get the patch available within 7 days? I mean, obviously you can't expect users to have their systems patched immediately, and sometimes a third party (like a walled garden approval path) can lock you out. But is the writer saying 95% of companies can't even have a patch pushed for release in 7 days?

If that is true, we, as a society, need to drop what we're doing and focus on security, build management, QA workflow, whatever it is that is making that a reality. 7 days is acceptable. 95% of companies can't hit 7 days? First, that is not true in my experience. But if it is, that is not acceptable. There really are bad people out there trying to root our electronics. Seven days to get a patch out for an actively exploited, in-the-wild vulnerability is enough. Work the problem. Figure out why you can't hit that number, and fix it.

Re:Slower? He's Saying Slower?!? (0)

Anonymous Coward | about a year ago | (#43884835)

But is the writer saying 95% of companies can't even have a patch pushed for release in 7 days?

95% of companies can't even order a pizza in 7 days - and don't even think about doing anything involving money-related software in 7 days. I thoroughly recommend spending at least 7 days looking for someone else to blame before even opening a source file.

Re:Slower? He's Saying Slower?!? (1)

DJRumpy (1345787) | about a year ago | (#43885567)

I think the disconnect here is not in writing the patch, but rather in certifying that it doesn't break anything else. Any time you touch code, you risk breaking something else. Our QA department requires a regular 20-day lead time. You can escalate emergencies for production fixes and get them into QA almost immediately, but it still often takes 2-3 days to get a minimal once-over. They do not evaluate all of the code line by line. They test all functionality, ensure that it works as documented, and they also verify the fallback package to ensure that if something does break production, you can quickly fall back.

Seven days for a business is doable, but expecting a patch at the drop of a hat is not realistic. Anyone doing so would most likely introduce any number of new issues that could just as easily expose new vulnerabilities. It's easy to react; much harder to react with the required caution, if that makes sense. All patches should still undergo as much QA as is feasible to ensure stability and a fix for the existing issue.

Re:Slower? He's Saying Slower?!? (1)

ColaBlizzard (2870167) | about a year ago | (#43885353)

It is true. 7 days for 95% of companies is unrealistic. If you make big enterprise software to be sold to big vendors (SAP?), the clauses are simple: any regression bug that is noticed and someone screams too loud => heads roll. So they have test cycles that take weeks, for anything, however small the change is. The problem is not with the development companies, the problem is with the user companies. They buy software with a 90's mindset. Which software today doesn't have bugs, security bugs, and regression bugs (something used to work, no longer works)? As long as the vendor agrees to quickly fix the regressions and the regressions are not data-eating, then you should be okay.

I smell a flack (2)

linuxwrangler (582055) | about a year ago | (#43883173)

It's no wonder this article was posted anonymously. The whole tone and writing style is exactly what one would expect in a position statement cranked out by a corporate PR flack. I wonder whose flack it is.

Re:I smell a flack (0)

Anonymous Coward | about a year ago | (#43886237)

You're probably right. I have some anti-flack handy so I'll share it.

If one truly wants to solve the security issues everyone should go blackhat with their 0-days. Always.

It is the only way to exert maximum evolutionary pressure on the code and whoever has responsibility for the flaw.

What if it makes software unusable? Good because the software was already unusable, people just didn't realize.
What if it ruins companies? Good because those companies were negligent, fraudulent, or incompetent.
What if it ruins computing? Good because such computing must be replaced with something better if that's the case.
What if it provokes new laws and regulations? That will happen anyway and will in general make things even worse to the extent that anyone heeds any of it; if that whole process is accelerated into an earlier death, then that's also for the better.

Naive and devoid of reality? (3, Insightful)

Sloppy (14984) | about a year ago | (#43883221)

Google's stance on a 60day turnaround of vulnerability fixes from discovery, and a 7-day turnaround of fixes for actively exploited unpatched vulnerabilities, is rather naive and devoid of commercial reality.

I think what you're saying is that if someone is going around stabbing people in the heart, and a doctor says these victims all need immediate medical attention (even the victims who are in isolated areas far from hospitals), then that doctor is being naive and devoid of medical reality.

I personally think you should quit blaming the doctor for the unfairness and horror that is inherent in the situation. Declaring the urgency of a problem isn't "naive", even if addressing the problem is incredibly hard or even effectively impossible.

If the doctor truly thinks the victims all really will get "immediate medical attention" then he'd be naive. But advising it isn't naive. Yelling at people "get that victim to the ER as fast as you can!" isn't naive. Telling people that heart stab wounds are very serious, isn't naive.

And the analogy with Google here, is that you just got stabbed in the heart, they're advising and hoping you get immediate medical attention, and 7 days from now, if your wife asks Google if they've seen you lately, they're going to tell your wife, "I heard he got stabbed in the heart last week. You took him to the hospital, right? If not, you better get on that, right now." You're concerned Google is going to scare your wife?! Be concerned that you're not at the hospital yet!

You think Google is being naive with unreasonably high expectations, but the need for those high expectations isn't their fault!

Re:Naive and devoid of reality? (0)

Anonymous Coward | about a year ago | (#43883441)

The medical analogy clearly fails because, by disclosing the vulnerability, the researchers are informing malware writers as well as the general public. Yes, in some cases some malware writers would already have been aware of the vulnerability, but the disclosure is a broadcast telling *every* bad guy out there to fire up their IDEs. This is the ethical question around disclosure.

Nice double standard (2)

russotto (537200) | about a year ago | (#43883409)

Google's standards are too hard to be realistic for the rest of the industry, but the standards for Google itself are 0 security defects ever. How does that work again?

(disclosure: I work for Google. I don't speak for them.)

Re:Nice double standard (0)

Anonymous Coward | about a year ago | (#43883509)

Google has a bunch of condescending assholes in the security team, like Tavis Ormandy and Brandon Downey that take care of it, yet they were completely pwn3d by the Chinese.

Re:Nice double standard (0)

Anonymous Coward | about a year ago | (#43886087)

That Tavis guy sounds like a complete asshole:

http://seclists.org/fulldisclosure/2010/Jun/236

The whole thread is disgusting, but Tavis' contribution is that of an arrogant prick.

Re:Nice double standard (2)

NotBorg (829820) | about a year ago | (#43884815)

The whole reason for not disclosing is to prevent the bad guys from knowing about it. The words "actively exploited" mean the bad guys already know about it. Ergo there's no reason not to disclose it. Seven days is not too short; it's pointless to wait at all under said circumstances.

Re:Nice double standard (0)

Anonymous Coward | about a year ago | (#43887349)

Ummm. A vuln is being actively used in a targeted attack against a single customer. # of people affected: 1 company (say 100K people if it's a big company).

Google learns about the vuln and releases it 7 days after notifying the vendor.

The metasploit folks build a prepackaged exploit and ship it the next day.

Now hundreds of millions of customers are affected by the bug.

How is this better again?

Re:Nice double standard (0)

Anonymous Coward | about a year ago | (#43887943)

Uh, Google didn't say they'd release an exploit or even describe the specific vulnerability.

Re:Nice double standard (1)

ColaBlizzard (2870167) | about a year ago | (#43885381)

The standards for *THE* leading IT company are always higher. Et tu, Brute? => His standard was set higher by Caesar :)

dumb (1)

Tom (822) | about a year ago | (#43883503)

What a dumb opinion piece.

The main difference between a client-installed application and a web-app these days is that a patch on a web application is available as soon as you deploy it, while the patch for the client application needs to be downloaded and installed, which is mostly done automatically.

So, in terms of time, the difference is on the order of minutes, hours at most.

Is it more difficult to create and/or test updates for clients or for browsers? Hard to say, but the difference isn't fundamental. On the one hand you have to test a variety of OS versions, maybe hardware versions, and environments. But you have to do that for any update and for development, so you already have your test lab up and running by the time you need to roll out a security fix. On the other hand, you have to test a variety of browsers and OS versions... and again you already have... basically, the same thing with a different test lab.

Will customers, especially corporate customers, delay the deployment by running their own internal tests and/or waiting for their next internal patch day? Sure, but that's not your problem.

Can you afford to delay a security fix? No. Ever since Code Red, Slammer and my own and other people's work on various worst-case scenarios for flash worms, we know that a remotely exploitable issue that allows code execution can be used to infect 90% of the vulnerable systems in less time than it takes humans to react with any temporary workaround such as additional firewall rules or service shutdowns.

And before someone starts the argument: That is true for undisclosed 0-days as well. If some whitehat found it, chances are good some blackhat has also found it already and is at least working on an exploit.
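To put rough numbers on the flash-worm point (the parameters below are invented, loosely in the spirit of Slammer-class worms, not measurements): a crude simulation of a random-scanning worm shows the vulnerable population saturating in minutes, far faster than any human-driven workaround could be rolled out.

```python
# Back-of-the-envelope flash worm model; all parameters are illustrative guesses.
def flash_worm(vulnerable=75_000, address_space=2**32,
               scans_per_host_per_sec=4_000, seconds=600, initial=10):
    infected = float(initial)
    for t in range(1, seconds + 1):
        susceptible = vulnerable - infected
        hit_prob = susceptible / address_space   # chance a random probe finds a fresh victim
        infected = min(vulnerable,
                       infected + infected * scans_per_host_per_sec * hit_prob)
        if t % 60 == 0:
            print(f"{t // 60:2d} min: ~{int(infected):,} of {vulnerable:,} hosts infected")

flash_worm()
```

Even with far more conservative guesses, the growth stays multiplicative until the pool of vulnerable hosts runs out, which is the point being made: the window for hand-rolled firewall rules or service shutdowns is effectively zero.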

irrelevant industry (1)

SkunkPussy (85271) | about a year ago | (#43883559)

Industry is irrelevant. If your customers are being actively exploited, you are, or ought to be, liable if you don't fix it as fast as possible.

FiR5t (-1)

Anonymous Coward | about a year ago | (#43883569)

fanatic known lubrication. You This pos7 brought People's faces is

Google isn't industry standard (0)

Anonymous Coward | about a year ago | (#43883599)

Google isn't taking into account the time required of customers to test and install the patch. There are commercial software systems consisting of more than one product that must be carefully deployed to avoid downtime and a fix isn't a patch, but an upgrade. This isn't even accounting for hardware based vulnerabilities.

Sixty days means the vendor must aim for at least 30 days to create fixes for all affected releases on every platform and pass quality assurance, while allocating resources away from planned, time-dependent projects. It would be a fire drill every time. And pray that the vulnerability isn't a design issue.

It shouldn't be shocking to find most commercial software contains vulnerabilities. Even with active exploitation, publicly publishing vulnerability information before a vendor can create a fix will mean higher levels of exploitation. And once that hole is fixed, the crackers will find the next one. The cycle continues.

Any company that performs software security auditing will find many times the vulnerabilities that are reported by security researchers. Google, no doubt, audits their code, but there are still vulnerabilities being reported by independent researchers. They haven't figured it out even with their resources.

Google receives quarterly net profit in the billions and cannot imagine the perspective of the software company with razor thin revenues. It has the same feel as a multimillionaire saying the poor should just make more money and that will solve their resource issues.

It's scary the OP thought anyone would agree. (1)

Anonymous Coward | about a year ago | (#43883673)

The debate has always been about whether to give any advance notification at all. That's why full-disclosure@lists.grok.org.uk is called "full disclosure". That's the spectrum of behaviour that's allowed, anything up to 0, "notify vendor and public simultaneously":

  https://en.wikipedia.org/wiki/Full_disclosure#Arguments_against_Coordinated_Disclosure

Even if not for that, the complaining seems to rest on a vague suggestion that Google's acting somewhat in its own interest instead of always doing the most altruistic thing. Fine so far. Google will probably listen to such a complaint. But that's not what this really is. The reason Google's disclosure practice makes their security look better than your security is that their security _is_ better than your security, so this devolves into "why isn't Google acting in my interest instead of users' interest?" derp.

Vendor's processes not relevant (4, Insightful)

Todd Knarr (15451) | about a year ago | (#43883681)

As a user, I don't care about the vendor's ability to fix it quickly. Really I don't. That's their problem. My problem is that my systems are vulnerable to compromise and I have to do something about it. I need to know what the vulnerability is, in enough detail to understand it myself, and I need to know the possible workarounds (not just the vendor's recommended one(s), which is another reason I need to know what the vulnerability actually is, so I can understand all the other possible ways of dealing with it). I need to evaluate my options and take whatever steps I need to protect my systems. If the vendor needs a month to get the fix through their change-control process, I still need to protect my systems today.

The vendor's advice will be based on their most-likely scenario. Problem is that my situation may be radically different from the vendor's most-likely one. There are definitely going to be local considerations even if my situation is one the vendor's workaround covers. I need to understand the vulnerability to be able to evaluate it intelligently. It may not even be relevant to my setup. If it is, I may have less-intrusive workarounds (e.g., for the SSH OTP authentication bug, if we've got a purely internal network that isn't accessible to the outside world or the Windows desktop portion of the network, it may be less intrusive to just monitor for attempted exploits and defer doing anything until I see someone having gotten past the air gap, rather than changing an authentication method that a lot of people depend on and that can't be exploited easily without being physically in the building). And if I need to take drastic steps like disabling the vulnerable SSH authentication method, I may have clients who insist they must be able to use it (maybe because their systems are based on it and they need my systems to integrate with their authentication because I'm providing services to them), and I need to be able to intelligently discuss exactly what's wrong, why it's simply not possible to use that method without being vulnerable, and why we've got to change to a different method despite the disruption. I can't do that unless I understand the vulnerability.

Notice that in all the above I haven't mentioned the vendor at all. Like I said, the vendor isn't relevant at all. It's my systems that're vulnerable and me that has to do something about it. If the vendor already has a fix then well and good, but if they don't it doesn't change my situation. When vendors say they need more time, they're asking me to leave my systems vulnerable without telling me they're vulnerable. Sorry, but no. Not, that is, unless they're willing to shoulder 100% of all the costs resulting from that vulnerability being exploited. Not just direct costs, things like the costs of lost business and clean-up if the vulnerability is exploited and liabilities I may incur because of the compromise. If a vendor isn't willing to take on that liability, then they don't get to tell me I shouldn't have the information I need to protect myself from that liability. If they don't like it... this is the sound of the world's smallest violin, playing the world's saddest song just for them.
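As an illustration of the "monitor for attempted exploits instead of ripping out the auth method" option mentioned above, here is a minimal sketch. The log path and signature pattern are placeholders; in practice you would match whatever indicators the vulnerability details give you, which is exactly why the admin needs those details in the first place.

```python
# Minimal log-watching sketch; path and pattern are illustrative assumptions.
import re
import time

AUTH_LOG = "/var/log/auth.log"                  # assumed Debian-style syslog location
SIGNATURE = re.compile(r"keyboard-interactive.*failed", re.IGNORECASE)

def watch(path=AUTH_LOG, poll_seconds=5):
    with open(path, "r") as f:
        f.seek(0, 2)                            # start tailing from the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_seconds)
                continue
            if SIGNATURE.search(line):
                print(f"possible exploit attempt: {line.strip()}")
                # hook paging/ticketing in here instead of printing

if __name__ == "__main__":
    watch()
```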

Re:Vendor's processes not relevant (1)

swillden (191260) | about a year ago | (#43885907)

When vendors say they need more time, they're asking me to leave my systems vulnerable without telling me they're vulnerable. Sorry, but no. Not, that is, unless they're willing to shoulder 100% of all the costs resulting from that vulnerability being exploited.

The bit that you're ignoring is that by telling you about the vulnerability they're also telling all the black hats about it. So while your systems are vulnerable either way, the choice is between you and all the hackers knowing or you and most of the hackers not knowing. Whether this increases or decreases your actual exposure depends on who is interested in attacking you and whether or not they already have this exploit.

While you may be capable of implementing countermeasures to limit your vulnerability until a patch is published, that doesn't mean everyone is. Is it better to hold exploits close until fixes are available? There are valid arguments on both sides, but on balance I tend to side with keeping things quiet for a bit while the vendors get a fix out.

Re:Vendor's processes not relevant (1)

Todd Knarr (15451) | about a year ago | (#43886007)

The bit that you're ignoring is that by telling you about the vulnerability they're also telling all the black hats about it.

Except that the black hats already know about it, and are actively using it to compromise systems. Telling me about it doesn't give the black hats any information they didn't already have. The only way keeping the information confidential helps me is if it's something the black hats can't possibly figure out without being told, and the track record says there's no such thing. I have yet to see a vulnerability that wasn't already known, with at least a proof-of-concept exploit in circulation, before the vendor disclosed it, and I've seen plenty where the vendor only disclosed after major compromises attributable to that vulnerability became public knowledge.

As far as not everyone being able to protect a system goes, if you're unable to do basic system administration then you shouldn't be administering a system. You hire people who are qualified to do it for you. Don't expect me to agree to leave my systems vulnerable without my knowing about it just to appease that handful of dolts who're too stupid or too cheap to hire competent help for the job. In this day and age there's no excuse for not knowing you need competent admins running your systems.

Re:Vendor's processes not relevant (0)

Anonymous Coward | about a year ago | (#43886743)

I think you've both got part of it, but neither has all of it.

Many commenters are implying that if one black hat has the exploit they all have it. This is unlikely. Yet everyone here seems to think that "in the wild" and "actively exploited" mean that every single black hat is coming with guns blazing. There could be some significant time to get the fixes in. This time could be highly variable and related to cultural/commercial factors in the black hat community.

On the other hand, congratulations for being a competent admin (I'll take your word that you are) but not everyone is. And even for those that are, their organizations can prevent them from doing the right thing. It sucks but that's reality many places. And since when was there ever an "excuse for not knowing you need competent admins"? Basic good management says that was always true and good management itself can be in short supply.

My suggestion? The vendors could inform their customers about unfixed security exploits and suggest mitigations. You can state that the information is confidential and ask (or demand) that it be kept as such. It's a calculated risk and maybe some black hats will get their hands on it. As previously stated it's already in the wild so it's not like the black hats didn't, at least theoretically, have an alternate means to get the information. For the vast majority of customers the very last thing they want is to expose themselves to additional risk. Therefore you've got the customer's self-interest working on your side. This then leaves the stupid/rogues/disgruntled as an information leakage vector.

Note that this is not the same thing as publicly announcing the exploit. That's a whole different ball game. I'm talking about privileged communications between a vendor and their customers.

However there are lots of vendors who are too insecure, or have lousy customer relations, to take this step. And there are customers who are naive or hypersensitive and will react badly (OMG our vendor software has BUGS?!).

Re:Vendor's processes not relevant (1)

Todd Knarr (15451) | about a year ago | (#43886889)

First problem is that sure, not every black hat will have the details, but any black hat may. In this age of automated toolsets for attackers, once one black hat figures it out and includes it in a toolset it's only one update away from being in the arsenal of any attacker using that toolset. In that environment you can't wait for them to prove they have the information, by the time that happens you're compromised and cleaning up and suffering all the financial, legal and PR liabilities that come with being breached. You have to assume that if a vulnerability exists then the guy attacking you has the tools to exploit it, and if it turns out after the fact that he didn't then you're still secure.

As far as not everyone being a competent admin and organizations having policies that prevent doing the right thing, tough. This was all well-understood 30 years ago when I was starting out, and I find as I grow older that I've less and less patience for people who expect me to leave myself vulnerable just because some people lack common sense.

As far as disclosing only to vendor's customers, down that path leads "We'll inform you of vulnerabilities, but only if you subscribe to our Ultra Platinum Plus support package for only $YOURFIRSTBORN per year. Oh, and the terms are that you can't say anything that might show us or our products in a bad light.". As far as I'm concerned, once the vulnerability's in the wild and actively being exploited the vendor's a side-show. Whatever they do is nice and all, but priority #1 is making sure that the people who'll be targets of the black hats know what they need to know to protect themselves. If the vendor wants to exercise any control, they need to start being proactive and finding the vulnerabilities before they get into the wild. 'cause frankly most of the vulnerabilities I'm seeing these days are from basic stuff that should have been dealt with before the code ever went to QA. Things like "All externally-supplied data is presumed hostile until proven otherwise." or "If a service is physically accessible, someone will attempt to access it. See preceding about externally-supplied data.".

Re:Vendor's processes not relevant (1)

swillden (191260) | about a year ago | (#43891257)

Except that the black hats already know about it

Some do. Which is a distinction I clearly drew in the post you responded to, apparently without reading all of it.

Re:Vendor's processes not relevant (1)

Todd Knarr (15451) | about a year ago | (#43892101)

Some do. Which is a distinction I clearly drew in the post you responded to, apparently without reading all of it.

No, because it's a distinction you can't draw. There's a line from a Dr. Who episode: "Not every shadow, just any shadow.". The same applies to attackers. Not every attacker will know the details, but any attacker may. And since by the time you know whether any particular attacker knows the details or not it's too late to defend, you have to assume that any attacker knows and defend yourself before the attack starts.

And in this age of attack toolkits that get regular updates, once one attacker knows the details and creates the attack modules any other attacker is just one toolkit update away from having the attack too.

AC has no clue (0)

Anonymous Coward | about a year ago | (#43883699)

It would be nice if the Anonymous Coward who posted actually knew what it's like to produce software. "A simple code change" can often have unexpected repercussions, including but not limited to exposing additional (sometimes worse) bugs/vulns, and it can impact usability. This is why commercial companies have QA teams. And it's not like automated QA is instant or complete, and it's not like manual QA is instant. 60 days to turn around a fix is completely reasonable for some fixes. Google fixes bugs all the time and I'm sure they picked 60 days based on the actual time it takes them to fix other bugs.

OP should get a job in the real world before s/he complains about how the real world works.
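A toy example of the "simple change, unexpected repercussions" point (the scenario and names are invented): a one-line rushed security fix that looks fine in a quick check but breaks legitimate users, which is exactly what a QA pass exists to catch.

```python
# Hypothetical hasty fix: strip anything non-alphanumeric to "stop injection".
def sanitize_name_hasty(name: str) -> str:
    return "".join(ch for ch in name if ch.isalnum())

print(sanitize_name_hasty("Robert'); DROP TABLE students;--"))  # attack string is neutered...
print(sanitize_name_hasty("O'Brien"))      # ...but a real customer is now "OBrien"
print(sanitize_name_hasty("Anne-Marie"))   # ...and another becomes "AnneMarie"
```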

Well obviously... (0)

Anonymous Coward | about a year ago | (#43883893)

What an idiot. Is every security defect supposed to be one line of obviously incorrect code whose fix requires no QA / review to ensure that it doesn't introduce new security defects? I guess so since Google should catch all of them in the first place...

Somebody call them a waaaaaambulance... (1)

Anonymous Coward | about a year ago | (#43884073)

Pitiful cries of "*SNIFFFF* Butttt, butttt, we've been using paying customers as guinea pigs for decades, we'd actually have to pay someone to test our shit code now!!"
"That's just not fair, we charge more because we have to pay stockholders and multi-million dollar bonuses to the dickheads who make our fucktarded decisions!"
"Without paying testers, how else can we rake in more cash, more quickly, especially when our EULAs make it so they cannot sue us when our security software kills their PC!"

Waaaah! My business model sucks! (1)

Anonymous Coward | about a year ago | (#43884177)

Let's cut through several layers of meta-issue here and get to the heart of the matter, and then work our way back outwards to the issues this article is so wrong about:

1) Software Development is arguably one of the most difficult, complex, and daunting tasks humankind can endeavor at. That is, assuming you care about factors like quality, efficiency, security, and correctness. What makes it so difficult is that you're molding things out of the most flexible clay ever dreamed of. Physics is pretty much the only limit on pouring out any machination your brain can dream up, good or bad. Only a very small slice of the human population has the right sort of brain to do it well. Of those, only a small percentage bother to take up this profession and endeavor to improve their skills over time. Of those, only a small percentage are still writing software a decade or two later, when they're finally seasoned enough to do it *really* well. Many burn out from bad jobs, or just build up enough cash and resume length to shift out to retirement and/or management and/or running businesses.

2) Given the above, there are very few active, highly skilled, and seasoned developers at any given point in time. As our world increasingly comes to depend on software, however, we keep needing more and more developers. There is vastly more demand for software than there are quality software developer hours to go around. If this were accurately reflected in salaries, good developers would make much more money than they currently do, perhaps by an order of magnitude. They're worth that much easily. The reason it isn't accurately reflected is that many non-developer decision-makers who start/run businesses fail to understand the vast gulf of difference between a random junior coder or random H1-B import and a truly talented developer. They figure they can trade one good one for 5 bad ones at a cheaper rate and everything's fine. So that's the market...

3) ...Which brings us around to most software sucking. Most software is written by people who have no business doing so. They cling to certain practices and ideologies hoping they can systematically improve software quality without improving developer quality. Here's a hint: there is *no* formal process that can squeeze high-quality code out of low-quality developers. None. Period.

In the modern era, everything is connected. All software is connected. This means security suddenly becomes even more critical. Security can't happen without quality and correctness. If you can't write secure, good, software, you are to blame for releasing your ugly, buggy, insecure mess. You charged customers money for that piece of shit, and they reasonably expect your bugs to be minimal and your security response to be swift.

If your business model does not include (a) putting out reasonably high-quality/secure code from the get-go and (b) very quickly resolving any security bugs that arise, then your model is broken and you deserve to go out of business.

And here's a hint: it's far easier to release bugfixes quicker and keep a handle on security issues when your code is running in-house instead of out on N-million customer computers/devices. Therefore it makes sense for a smart company to shift as much code in-house as possible. This is why every website isn't a separate desktop application, for example. That just wouldn't work in the modern world. You want to minimize the device software. Keeping a small core device codebase secure is relatively easy (e.g. Android, ChromeOS), and then have it access service software over standard protocols and keep the service software "in the cloud" (ugh) where you can keep a close professional eye on security incidents and respond to them swiftly (near-realtime) without worrying about how you get an update to N-million devices and not miss any.

So basically, Google's business model solves the real problems in today's world well. They hire really smart guys and they keep as much code as close to the eyes and hands of the smart guys as they can. They work even harder on the quality of the minimal software they're forced to put on devices. Your business model sucks. You're hiring bad developers, pushing out oodles of bad code to customers, getting bit by security fallout, and not able to respond to the incidents in a reasonable timeframe. It's not Google's fault that your business model sucks and your customers are in trouble.

Because all of Google's products are web sites... (2)

swillden (191260) | about a year ago | (#43884239)

I guess Chrome, ChromeOS, Android, numerous Android apps, Google Earth, Google Drive, Picasa and Google's many other traditional installable software products don't count.

What? (0)

Anonymous Coward | about a year ago | (#43884377)

Is the suggestion seriously that Google's products should somehow be 100% secure? Do we live in a magical dream world where complex software products can be produced without any possibility of security issues? That's insane.

Microsoft, software for idiots (0)

Anonymous Coward | about a year ago | (#43890877)

We all know where this is coming from. Microsoft can't fix their so-called software because their organization is designed to crank out shovelware for retards, not professional-grade tools for serious adults.

What more proof could you need that Microsoft "products" should be avoided like the radioactive plague that they are?

People who approve of Microsoft software for professional environments should be fired for incompetence.

Thick client is too hard (0)

Anonymous Coward | about a year ago | (#43892267)

> As a web services company it is much easier for Google to develop and roll out fixes promptly — but for 95+% of the rest of the world's software development companies making thick-client, server and device-specific software this is unrealistic.

If you are unable to sell safe bicycles, do not sell bicycles. It really is that simple.
